Data Engineering Specialist
Calimala partners with enterprises across the Gulf and Europe to design, build, and scale Data & AI teams. As a Data Engineering Specialist, you’ll join a distributed group of practitioners who speak the language of both engineering and the business—helping clients move from fragmented systems to coherent, well-governed data platforms.
This role sits at the intersection of client delivery and platform thinking. From first ingestion to production-grade models, you’ll help establish the foundations that make analytics, reporting, and AI workloads possible, while reusing patterns and components across projects.
What you'll be doing
As a Data Engineering Specialist at Calimala, you’ll lead and support engagements that demand both technical depth and practical judgment. One week you might be designing an ingestion layer for a new data platform; the next, you’re refactoring legacy pipelines, improving performance and reliability, or helping define a canonical data model that different teams can build on.
“We’re not just moving data. We’re helping teams build the systems that can act on it—reliably, transparently, and at speed.”
You’ll also play a strategic role in how clients think about their stack—advising on tooling, integration patterns, and workflows that balance speed with governance. You’ll collaborate closely with our Talent Partners, architects, and analytics teams to make sure the data foundations match the realities of how people work.
Who we're looking for
You’re fluent in the core concepts of data engineering and comfortable working in environments where requirements, stakeholders, and priorities evolve. You can work independently, structure ambiguity into clear steps, and communicate your choices in a way that makes sense to both technical and non-technical audiences.
You’ve likely worked in consulting, product, or platform teams before, and you understand what it means to support multiple stakeholders at once. At Calimala, we value depth, accountability, and partnership—you build with care, and you’re motivated by seeing your work used in real decision-making.
- Strong background in data engineering, data platforms, or a related field
- Proficiency in SQL and at least one general-purpose language (Python preferred)
- Hands-on experience with ETL/ELT tooling and workflows (e.g. dbt, Spark, Kafka, Airflow, or similar)
- Experience with at least one major cloud platform (AWS, Azure, or GCP) and modern data warehousing technologies (e.g. Snowflake, BigQuery, Redshift, Synapse, Databricks)
- Able to design, document, and maintain pipelines end-to-end, and to explain trade-offs clearly to stakeholders
- Experience working with analytics, BI, or data science teams; exposure to ML/AI workloads or feature pipelines is a plus
We don’t expect perfection; we look for people who ask good questions, care about the quality of what they ship, and leave the data landscape in a better state than they found it.