




**Summary**

Seeking an experienced Data Software Engineer to develop data-focused applications using Big Data tools and cloud technologies to solve complex business problems.

**Highlights**

1. Drive creation of data-focused applications with leading-edge Big Data tools
2. Collaborate across teams to deliver innovative solutions
3. Utilize AWS services to improve data workflow efficiency

We are searching for a talented and experienced **Data Software Engineer** to join our team and drive the creation of data-focused applications. You will use leading-edge Big Data tools and cloud technologies, and collaborate across teams to deliver innovative solutions that solve intricate business problems.

**Responsibilities**

* Engineer and enhance data software applications that support Data Integration Engineers
* Design and deploy advanced analytical solutions using Spark, PySpark, NoSQL, and other Big Data tools
* Leverage AWS services to add cloud features that improve data workflow efficiency
* Partner with product and engineering teams to gather requirements and enable decision support
* Collaborate with architects, technical leads, and other teams to maintain solution consistency
* Assess business goals and technical constraints to recommend the right implementations
* Lead code review activities to encourage best practices and strong code quality
* Execute validation and testing across functional, technical, and performance criteria
* Produce and maintain detailed documentation to support ongoing development
* Work directly with clients to understand needs and provide expert technical direction

**Requirements**

* Bachelor's or Master's degree in Computer Science, Software Engineering, or a related area
* 2+ years of hands-on Data Software Engineering experience with Big Data technologies
* Good command of data engineering principles covering data management, storage, visualization, operations, and security
* Strong grasp of ingestion pipelines along with Data Warehousing and Data Lakes
* Programming experience in Python, Java, Scala, or Kotlin
* Advanced knowledge of SQL and NoSQL data stores
* Proficiency with Spark and PySpark in production data solutions
* Ability to architect and deploy AWS-based solutions using Glue and Redshift
* Familiarity with CI/CD approaches for integration and deployment workflows
* Exposure to Docker, Kubernetes, and YARN for containerized delivery
* Working experience with Databricks for analytics and data engineering
* English level of minimum B2 (Upper-Intermediate) for effective communication

**Nice to have**

* Working knowledge of Hadoop, Hive, Flink, and related Big Data tooling
* Understanding of SDLC methodologies with Agile as a key approach
* Experience applying and governing SDLC implementation practices


