




Summary: Haipriori is seeking a security-conscious Data Engineer to design, build, and maintain data pipelines, models, and platforms in HIPAA-compliant environments.

Highlights:
1. Work with sensitive data (PHI) in HIPAA-compliant environments
2. Design and implement scalable data architectures and pipelines
3. Collaborate with data scientists, analysts, and engineering teams

About Haipriori

Haipriori is a consulting and technology firm built on the belief that technology and business are inseparable. We partner with organizations to design, modernize, and implement software and data solutions aligned with business strategy, operational efficiency, and regulatory compliance. Our expertise spans enterprise architecture, software engineering, cloud platforms, and data-driven transformation. We specialize in building systems that are not only scalable and performant but also secure and compliant, especially in regulated industries such as healthcare.

Position Summary

We are seeking a Data Engineer with strong experience handling sensitive data (PHI) and working within HIPAA-compliant environments. In this role, you will design, build, and maintain the data pipelines, data models, and data platforms that support analytics, operational systems, and business intelligence. You will be responsible for data security, privacy, masking, performance, and scalability across both on-premises and cloud environments.

Key Responsibilities

1) Design and implement scalable data architectures, including data pipelines, storage systems, and processing frameworks.
2) Build and maintain batch and streaming data pipelines for real-time and near-real-time data processing.
3) Develop and optimize data models (relational, dimensional, and NoSQL) to support analytics and operational use cases.
4) Ensure proper handling of sensitive data (PHI) in compliance with HIPAA and other regulatory standards.
5) Implement data masking, tokenization, encryption, and anonymization techniques to protect sensitive information.
6) Enforce data access controls, auditing, and governance policies across systems.
7) Design and implement streaming data solutions using technologies such as Kafka, Azure Service Bus, or similar platforms.
8) Optimize data pipelines, queries, and storage systems for performance, efficiency, and scalability.
9) Develop and maintain data solutions across hybrid environments, including on-premises and cloud platforms (Azure, AWS, or GCP).
10) Implement data validation, cleansing, and monitoring processes to ensure data accuracy, consistency, and completeness.
11) Integrate data pipelines and infrastructure into CI/CD workflows using modern DevOps practices.
12) Implement Infrastructure as Code (IaC) for data environments where applicable.
13) Work closely with data scientists, analysts, and engineering teams to deliver analytics-ready datasets.
14) Define and enforce data governance standards, including data classification, lineage, retention, and auditability.
15) Implement data observability practices, including monitoring, alerting, and anomaly detection across pipelines.
16) Document data architectures, pipelines, and governance processes.
17) Participate in Agile ceremonies and contribute to continuous improvement initiatives.

Required Qualifications

- Bachelor’s Degree in Computer Science, Information Systems, or equivalent experience.
- 5+ years of experience in Data Engineering or related roles.
- Strong experience working with sensitive data (PHI) in HIPAA-compliant environments.
- Experience building and maintaining data pipelines (ETL/ELT).
- Strong knowledge of SQL and data modeling techniques.
- Experience with data security practices, including encryption, masking, and access control.
- Experience with streaming technologies (Kafka, Azure Service Bus, etc.).
- Experience working with cloud platforms (Azure, AWS, or GCP).
- Familiarity with CI/CD pipelines and DevOps practices.
- Strong problem-solving and analytical skills.
- Excellent collaboration, communication, and documentation skills.
- Fluent in English (Spanish is a plus).

Preferred Qualifications

- Experience with big data technologies (Spark, Hadoop, Databricks, etc.).
- Experience designing data lakes, data warehouses, or lakehouse architectures.
- Familiarity with data governance frameworks and tools.
- Experience working in healthcare or other regulated industries.
- Knowledge of API-based data integration and microservices architectures.
- Experience with containerization and orchestration (Docker, Kubernetes).
- Familiarity with Infrastructure as Code (Terraform, ARM, etc.).
- Certifications in cloud platforms (Azure Data Engineer, AWS, etc.) are a plus.

Who You Are

- A systems thinker who sees the connection between data architecture and business outcomes.
- A security-conscious engineer who prioritizes data privacy and compliance.
- A problem solver who thrives in complex, regulated environments.
- A collaborator and communicator who works effectively across technical and business teams.
- A continuous learner who stays up to date with modern data technologies and best practices.

Why Haipriori

At Haipriori, we cultivate a culture of excellence, accountability, and growth. You will work on meaningful projects where data, technology, and business strategy intersect, helping organizations transform while maintaining the highest standards of compliance and integrity.

Physical & Work Environment Requirements

Sedentary work. Must be able to access and navigate project facilities or client environments when required. Ability to participate effectively in hybrid or remote collaboration settings.
Requirements

Minimum education: High school diploma
5 years of experience
Languages: English
Skills: Apache Kafka, AWS, Azure, databases, access control, data, SQL, big data applications
Willingness to travel: Yes
