MLOps Engineer
Calimala partners with enterprises across the Gulf and Europe to design, build, and scale Data & AI teams. As an MLOps Engineer, you’ll join a network of practitioners who understand both how models are built and how they need to run in production—securely, repeatably, and with clear ownership.
This role sits at the heart of how ML and AI solutions are delivered and maintained. You’ll design and operate the tooling, pipelines, and platforms that support the full model lifecycle—from experimentation and validation to deployment, monitoring, and retraining.
What you'll be doing
As an MLOps Engineer at Calimala, you’ll lead and support engagements where the challenge isn’t just building models, but running them reliably over time. One project might involve setting up a standardized ML platform for multiple teams; another could focus on automating deployments, improving observability, or hardening governance and approvals around model changes.
“We treat the ML lifecycle as a first-class system: code, data, and models all need versioning, monitoring, and clear paths to production.”
You’ll work closely with ML Engineers, Data Engineers, architects, and security teams to define how models move from notebooks into production services. You’ll help implement CI/CD for ML, build feedback loops from monitoring back into experimentation, and ensure that platform decisions balance speed, cost, and control.
Who we're looking for
You’re comfortable working across infrastructure, ML tooling, and software engineering. You enjoy designing systems that other teams build on and have a strong instinct for automation, reliability, and clear standards.
You’ve likely worked in product, platform, or consulting environments where ML systems needed to be auditable and stable, not just impressive in a demo. At Calimala, we value depth, accountability, and partnership—you take ownership of the ecosystems you build and support teams in using them effectively.
Strong experience with MLOps practices and tooling across the model lifecycle
Proficiency in Python and familiarity with ML frameworks (e.g. scikit-learn, PyTorch, TensorFlow, XGBoost, or similar)
Hands-on experience with CI/CD and infrastructure-as-code (e.g. GitHub Actions, GitLab CI, Azure DevOps, Jenkins, Terraform)
Experience implementing model tracking, experiment management, and deployment workflows (e.g. MLflow, Kubeflow, SageMaker, Vertex AI, Databricks, or similar)
Solid understanding of containerization and orchestration (e.g. Docker, Kubernetes) for serving models in production
Familiarity with monitoring, logging, and alerting for ML systems (data drift, model performance, service health)
Experience with at least one major cloud platform (Azure, AWS, or GCP) and its data/ML ecosystem
We’re looking for practitioners who see MLOps as an enabler for teams: people who build platforms and practices that make it easier, safer, and faster to turn promising models into dependable, production-grade systems.