




**Job Summary**

We are seeking a Data Engineer with Azure experience to design, build, and operate scalable and secure data platforms and pipelines for banking use cases, enabling advanced analytics and reporting under DataOps practices.

**Key Highlights**

1. Design and construction of scalable data platforms and pipelines on Azure
2. Data modeling for consumption by Data Scientists, BI, and Risk/Compliance teams
3. Implementation of data security and observability standards

Join Stefanini! At Stefanini, we are more than 30,000 geniuses connected from 41 countries, doing what we love and co-creating a better future.

**Responsibilities and Duties**

Design, build, and operate scalable, secure, and auditable data platforms and pipelines on Azure for banking use cases. You will be responsible for data ingestion, standardization, modeling, performance, and reliability, enabling advanced analytics/ML and reporting under DataOps practices.

Key Responsibilities:

* Design the data architecture on Azure (raw, curated, and serving layers) with lakehouse and/or DWH approaches as appropriate.
* Build batch and near-real-time pipelines using ADF/Synapse Pipelines (and/or Databricks), including incremental loads, CDC, and error handling.
* Model data (dimensional and/or data vault, depending on the domain) for consumption by Data Scientists, BI, and Risk/Compliance teams.
* Implement transformations and validations using SQL, Python (pandas), and KNIME when required (reproducible, parameterized workflows).
* Optimize performance and costs: partitioning, file formats (Parquet/Delta), compaction, caching, and query and load tuning.
* Ensure observability: metrics, logs, alerts, SLAs/SLOs, and operational runbooks.
* Implement security standards: RBAC, Key Vault, managed identities, private endpoints, and environment segregation.
* Collaborate with Governance on lineage, cataloging, retention, and classification, and with Data Scientists on feature datasets and scoring pipelines.
* Define and maintain CI/CD for data artifacts (infrastructure as code, pipelines, notebooks, tests).

**Requirements and Qualifications**

Required Technical Skills:

*Azure Data*

* ADLS Gen2, Azure SQL / Managed Instance, Synapse Analytics
* Azure Data Factory / Synapse Pipelines
* Azure Databricks (preferred) and foundational Spark knowledge (a plus)
* Messaging/streaming: Event Hubs / Kafka (a plus, as needed)

*Data Engineering*

* Advanced SQL (modeling, performance, window functions, optimization)
* Python (pandas, pyarrow, testing), notebooks
* Data formats: Parquet/Delta; partitioning and schema evolution
* Orchestration and automation; version control (Git) and CI/CD (Azure DevOps)

*Quality & Operations*

* Data testing (Great Expectations or equivalent), reconciliations, data quality rules
* Observability: logging/monitoring (Application Insights/Log Analytics), alerts

Typical Deliverables:

* Production pipelines with SLAs (ingestion, transformation, serving)
* Data models (curated/semantic), data marts, and feature datasets
* Technical documentation (architecture, data dictionary, runbooks, diagrams)
* Testing, monitoring, and cost-control frameworks covering pipeline reliability, data models, performance, operations, and CI/CD

Looking for a place where your ideas shine? With over 38 years of experience and a global presence, at Stefanini we transform tomorrow, together. Here, every action matters, and every idea can make a difference. Join a team that values innovation, respect, and commitment. If you are disruptive, committed to continuous learning, and innovation is in your DNA, then we're exactly what you're looking for. Come, let's build a better future together!
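To give candidates a flavor of the incremental-load work listed under the responsibilities, here is a minimal, stack-agnostic sketch of a watermark-based extract in plain Python. The record fields and `incremental_extract` helper are hypothetical illustrations; a production pipeline would implement this pattern with ADF/Synapse Pipelines or Databricks against real sources.

```python
from datetime import datetime

# Hypothetical in-memory "source" rows; each carries an updated_at timestamp
# that a real CDC/incremental load would read from the source system.
SOURCE = [
    {"id": 1, "amount": 100.0, "updated_at": datetime(2024, 1, 1)},
    {"id": 2, "amount": 250.0, "updated_at": datetime(2024, 1, 3)},
    {"id": 3, "amount": 75.0,  "updated_at": datetime(2024, 1, 5)},
]

def incremental_extract(rows, watermark):
    """Return only rows changed after the last watermark, plus the new watermark."""
    changed = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in changed), default=watermark)
    return changed, new_watermark

# A run with the watermark at Jan 2 picks up only rows 2 and 3,
# then advances the watermark to Jan 5 for the next run.
batch, wm = incremental_extract(SOURCE, datetime(2024, 1, 2))
print([r["id"] for r in batch])  # [2, 3]
```

The key design point is that the watermark is persisted between runs, so each execution processes only the delta rather than the full table.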
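The data-testing requirement (Great Expectations or an equivalent) amounts to evaluating declarative quality rules against a dataset and surfacing failures. A minimal, library-free sketch of that idea, with hypothetical rules and field names:

```python
def check_rules(rows):
    """Evaluate simple data-quality rules; return a list of failure messages."""
    failures = []
    ids = [r.get("id") for r in rows]
    if any(i is None for i in ids):
        failures.append("null id")                      # completeness rule
    if len(ids) != len(set(ids)):
        failures.append("duplicate id")                 # uniqueness rule
    for r in rows:
        if r.get("amount") is not None and r["amount"] < 0:
            failures.append(f"negative amount for id={r.get('id')}")  # range rule
    return failures

rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": -5.0}, {"id": 2, "amount": 3.0}]
print(check_rules(rows))  # ['duplicate id', 'negative amount for id=2']
```

In practice these checks would be expressed as expectations in a framework and wired into the pipeline so a failing rule blocks promotion to the curated layer.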


