Now Hiring

ML Engineer

ML & LLM Ops (Remote)

Sitting between data, product, and engineering, this role is about turning real-world problems into robust machine learning systems. With a mix of modeling, software engineering, and experimentation, you’ll help clients move from proof-of-concept notebooks to production-grade ML and AI solutions.

ML Engineer

Calimala partners with enterprises across the Gulf and Europe to design, build, and scale Data & AI teams. As an ML Engineer, you’ll join a network of practitioners who understand both the math behind the models and the engineering needed to make them work in production—inside complex, regulated environments.

This role sits at the intersection of models, data pipelines, and applications. You’ll design, build, and operate ML solutions—from classical models to modern deep learning and LLM-based systems—ensuring they are reliable, observable, and aligned with business outcomes.

What you'll be doing

As an ML Engineer at Calimala, you’ll lead and support engagements where AI is a central part of the solution. One project might involve building a forecasting model on top of an existing data platform; another could focus on designing inference services, setting up feature pipelines, or integrating LLMs into business workflows.

“We treat models as products, not experiments: they should be explainable, monitored, and trusted by the people who rely on them.”

You’ll work closely with data engineers, architects, and business stakeholders to frame problems, select approaches, and iterate on solutions. You’ll help define standards for experimentation, deployment, monitoring, and retraining—so that ML systems can evolve safely as data and requirements change.
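
As a rough illustration of what such standards can look like in practice, here is a minimal experiment-tracking sketch using MLflow (one of the tools listed under the requirements below). The tracking URI, experiment name, dataset, and model are placeholders, not a prescribed stack:

```python
# Minimal sketch: tracking an experiment and registering a model with MLflow.
# Assumes a reachable tracking server with a model registry backend;
# the URI, experiment name, data, and model are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # placeholder URI
mlflow.set_experiment("churn-model")                     # placeholder experiment

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 200, "learning_rate": 0.05}
    model = GradientBoostingClassifier(**params).fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_params(params)
    mlflow.log_metric("auc", auc)

    # Register the model so deployment and retraining can reference a
    # specific, reviewed version rather than "whatever notebook ran last".
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="churn-model",
    )
```

The point is that parameters, metrics, and the model artifact are versioned together, so deployment, monitoring, and retraining can all refer to a specific, reviewed model version.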

Who we're looking for

You’re comfortable moving from exploratory notebooks to production code, and from metrics like AUC or BLEU to conversations about business impact. You enjoy collaborating with both technical and non-technical teams and can explain trade-offs between different approaches clearly.

You’ve likely worked in product, platform, or consulting environments where ML systems had to operate reliably over time, not just in demos. At Calimala, we value depth, accountability, and partnership—you care about building things that last and that people actually use.

  • Strong experience building and deploying ML solutions, ideally in production environments

  • Proficiency in Python and core ML/AI libraries (e.g. scikit-learn, PyTorch, TensorFlow, XGBoost or similar)

  • Hands-on experience with ML lifecycle practices: experimentation, model versioning, deployment, and monitoring

  • Familiarity with MLOps concepts and tools (e.g. MLflow, Kubeflow, SageMaker, Vertex AI, Databricks ML or similar)

  • Solid understanding of data pipelines and feature engineering, working closely with data engineering teams

  • Experience with at least one major cloud platform (Azure, AWS, or GCP) and its ML services

  • Exposure to NLP, LLMs, or retrieval-augmented generation (RAG) is a plus, especially where explainability and governance matter (see the retrieval sketch after this list)
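
To make the RAG point above concrete, here is a minimal sketch of the retrieval step, using scikit-learn's TF-IDF as a stand-in for an embedding model so the example stays self-contained. The documents, query, and prompt template are placeholders, and the LLM call itself is omitted:

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# TF-IDF is used in place of an embedding model to keep the example
# self-contained; documents, query, and prompt are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Invoices above 10,000 EUR require a second approval.",
    "Customer churn is reviewed quarterly by the retention team.",
    "Feature pipelines are refreshed nightly from the data platform.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

query = "Who approves large invoices?"
context = "\n".join(retrieve(query))

# The retrieved context is passed to an LLM together with the question;
# the actual model call depends on the provider and is omitted here.
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {query}"
)
print(prompt)
```

In a real engagement the TF-IDF step would typically be replaced by an embedding model and a vector store, but the retrieve-then-prompt shape stays the same.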

We’re looking for practitioners who see ML as an engineering discipline as much as a research one: people who take ownership of the full lifecycle, from first prototype to stable, monitored systems in production.

Apply now

We’re building a network of people who know how to turn Data & AI programs into real outcomes. If you’re comfortable working with modern AI tools while embedded in enterprise teams, apply here to be considered for future Calimala projects.
