Senior Data Platform Engineer, Agentic AI
Reporting to: Director of Data Science
About ELSA
ELSA is a global leader in AI-powered English communication training, dedicated to transforming how people learn and speak English with confidence. Founded in 2016 and headquartered in San Francisco, we operate across the U.S., Vietnam, Portugal, Indonesia, Brazil, and Japan.
Powered by proprietary speech-recognition technology and generative AI, ELSA delivers real-time, hyper-personalized feedback to help learners improve pronunciation, fluency, and overall communication effectiveness. With over 50 million learners and 1 billion hours of anonymized speech data, ELSA's depth of language training intelligence is unmatched in the industry.
Our flagship B2B platforms, ELSA Enterprise and ELSA Schools, empower organizations and educational institutions to elevate communication capabilities and unlock personal and professional opportunities for their people. We design engaging, bite-sized learning experiences that adapt to each learner's goals and context, ensuring measurable improvement and lasting confidence.
Our vision is to become the global standard for real-time English communication training, enabling 1.5 billion language learners worldwide to speak clearly, be understood, and share their stories with the world.
Backed by world-class investors including Google's Gradient Ventures, Monk's Hill Ventures, and SOSV, ELSA has been recognized among the top global AI innovators:
Forbes Top 4 Companies Using AI to Transform the World
Research Sniper Top 5 Best AI Apps
ASU+GSV EdTech 150
CB Insights Top 100 AI Companies
We’re building the next evolution of our AI stack — agentic AI systems that can interact with real-world data, tools, and workflows in production. To help us design, scale, and operate this platform, we’re looking for a Senior Data Platform Engineer to join our team.
Job overview:
As a Senior Data Platform Engineer — Agentic AI, you will design and implement the platform foundation for AI agents operating on real-world data and services at scale. This role sits at the intersection of Data Engineering, Backend Systems, and Applied AI — ideal for engineers who love ownership, reliability, and solving hard platform problems.
You will work closely with Data Science, Product, and Engineering to take agentic AI from prototype to secure, observable, production-grade systems.
You'll thrive here if you enjoy building production systems, collaborating cross-functionally, and owning end-to-end platform reliability rather than focusing primarily on research or prototypes.
What You'll Do
Design and build scalable, reliable data pipelines and feature stores that power AI/ML & agent workflows
Productionize Agentic AI systems (tool-using LLM agents, workflow orchestration, retrieval, context management, guardrails)
Implement data-centric AI engineering best practices: lineage, governance, testing, monitoring
Build server-side APIs, microservices & orchestration layers supporting AI agent workflows
Optimize data processing, storage, caching & latency across structured and unstructured data
Deploy and operate workloads across cloud environments (AWS/GCP/Azure)
Partner with ML engineers to enable experimentation, CI/CD, and observability for AI systems
Own platform reliability: performance tuning, alerting, SLAs
Establish security & compliance standards for AI-data interactions
Act as a senior technical contributor & mentor within the team
You'll Be Great In This Role If You Have:
Core Experience (Required)
5+ years in Data/Platform/Backend Engineering
Strong foundations in data engineering concepts — batch, streaming, schema design, ETL/ELT, orchestration
Hands-on experience with Python or Node.js (server-side scripting)
Expertise with SQL + modern data warehouses/lakes
Experience building & maintaining production systems at scale
Strong command of cloud platforms & containerization (Docker, Kubernetes, etc.)
Agentic AI & AI Platform Experience (Preferred): You don't need to be a researcher, but you should understand how these systems run in production
Working knowledge of:
LLM orchestration frameworks (LangChain, LlamaIndex, DSPy, etc.)
RAG pipelines & vector databases
Agent-tool execution & sandboxing
Prompt/response handling
Experience deploying AI services behind APIs / microservices
Understanding of model & data governance, cost management, safety & guardrails
Bonus Skills
Experience with:
Airflow / Prefect / Dagster
Spark / Flink / distributed compute
Snowflake / BigQuery / Redshift
Feature stores / ML pipelines
Data observability platforms
Experimentation platforms (A/B testing)
Soft Skills We Value
Bias for action & ownership mindset
Product-thinking and customer empathy
Ability to collaborate across data science, product & engineering
Comfort balancing speed with engineering rigor
Clear & structured communication
What We Offer
Work with a lean, high-impact Data & AI team
Ownership, autonomy & visibility
Supportive learning culture
Competitive compensation & benefits
Opportunity to shape the next-gen AI data platform
Comprehensive employee well-being benefits
Free ELSA Premium courses to polish your language skills
Opportunity to contribute to a fast-growing, well-funded Silicon Valley startup with global impact
Department: Product, Engineering & Data Science
Remote status: Fully Remote