
Elementary 2.0: Trusted Data for the AI Era

The data & AI control plane that unifies observability, governance, and discovery, helping teams scale data and AI products with confidence.

Author: Maayan Salom
Date: Dec 1, 2025

Most data teams are feeling the same pressure right now: AI is advancing rapidly, expectations are rising even faster, and the foundations were not built for this level of scale.

Elementary 2.0 is our response.

It is a data and AI control plane that brings data engineering, data science, and analytics teams into one place so they can scale data and AI products with confidence.

At the center is a shared context engine and a network of AI agents that handle observability, quality, governance, and discovery. Everything is designed so data reliability is something the system maintains, not something teams chase.

Why we built the Enterprise Data & AI Control Plane

Leveraging AI is no longer limited by the quality of models. Models are already strong and useful. The real blockers are upstream.

  • AI demands solid foundations. If the inputs and definitions behind data and metrics are shaky, trust collapses and adoption stalls.
  • AI demands more context. LLMs are only as good as the data and metadata you provide them.
  • AI demands more data, faster. Pipelines, data products, and dependencies grow faster than any team can keep up with manually.

Data teams are doing heroic work, but the workload scales faster than headcount. Data tests need maintenance. Data incidents need investigation. Pull requests need impact analysis. Documentation needs upkeep. Naming, tagging, lineage, performance tuning, cost reviews: it all compounds. The gap is structural.

The control plane is meant to close that gap. It gives every part of the data team one shared understanding of lineage, tests, incidents, performance, definitions, and usage, and layers AI on top so reliability becomes proactive instead of reactive.

What changes with Elementary 2.0

An end-to-end context engine at the core

Everything starts with a unified layer of observability and metadata, built on our open core.

It learns continuously from lineage, test results, incidents, performance signals, semantics, and usage patterns. That context powers AI agents and gives every team, and every workflow, the same understanding of what the data is, how it is built, and whether it is healthy.

Because the engine is open core, all metadata stays in the customer’s environment, and data teams can customize it, extend it, and build their own dashboards or agents on top.

We’re extending this engine in two major ways:

1. Monitoring Python pipelines and workloads

Data teams can now capture context and metadata from any Python workload.

That includes ingestion before data lands in a platform, Python jobs that consume data downstream, analytical and data science workloads, and AI pipelines written in Python.

It can also capture results from existing test frameworks like Great Expectations and DQX.
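To make this concrete, here is a minimal, purely illustrative sketch of what instrumenting a Python ingestion job could look like. The `elementary_report` helper and its arguments are hypothetical placeholders for whatever hook the package exposes, not Elementary's actual Python API.

```python
# Illustrative sketch only: `elementary_report` is a hypothetical stand-in
# for whatever hook Elementary's Python package exposes, not its real API.
import time

import pandas as pd


def elementary_report(job_name: str, status: str, metadata: dict) -> None:
    """Hypothetical hook: forward run context to the shared context engine."""
    print(f"[{job_name}] {status}: {metadata}")


def ingest_orders(path: str) -> pd.DataFrame:
    start = time.time()
    df = pd.read_csv(path)

    # Simple in-pipeline checks; the results become part of the run's metadata.
    checks = {
        "row_count": len(df),
        "null_order_ids": int(df["order_id"].isna().sum()),
        "duration_s": round(time.time() - start, 2),
    }
    status = "passed" if checks["null_order_ids"] == 0 else "failed"

    elementary_report("ingest_orders", status, checks)
    return df
```

The same pattern applies to capturing results from an existing test framework: the run's outcome is reported alongside the pipeline's own context instead of living in a separate tool.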

2. Coverage across all major cloud data platforms and data lakes

Elementary now monitors and captures metadata for any data asset across cloud data warehouses and data lakes, including Snowflake, Databricks, BigQuery, Dremio, Athena, and Iceberg-based lakes.

This closes a long-standing gap: it connects SQL and Python, data engineering with data science and analytics, and AI pipelines with the data they consume.

AI agents for reliability, built into the workflow

Elementary introduces agents that take on the operational reliability work while engineers keep ownership and approvals.

  • Triage & Resolution: Investigates failures using lineage and incident history, identifies the cause, and drafts pull requests with proposed fixes.
  • Coverage: Flags missing or insufficient tests, recommends the right ones, and prioritizes them based on downstream impact.
  • Performance & Cost: Detects inefficient queries and pipelines and generates optimized SQL or configuration changes.
  • CI Prevention (coming soon): Analyzes pull requests before merge, checks downstream impact, compares dev to prod assets, and prevents regressions from being deployed.

Engineers decide the rules, guardrails, and approvals. The agents handle the heavy lifting.
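As a conceptual sketch only, here is one way a team could express that split between agent work and human approval. The policy names below are hypothetical and do not come from Elementary's configuration.

```python
# Hypothetical guardrail policy: which agent actions may run automatically
# and which require a human approval. Names are illustrative, not Elementary's.
AGENT_POLICY = {
    "triage_resolution": {"open_pr": True, "auto_merge": False},
    "coverage":          {"open_pr": True, "auto_merge": False},
    "performance_cost":  {"open_pr": True, "auto_merge": False},
}


def is_allowed(agent: str, action: str) -> bool:
    """Return True only if the policy explicitly allows the action."""
    return AGENT_POLICY.get(agent, {}).get(action, False)


# An agent can draft and open a fix, but merging stays with engineers.
assert is_allowed("triage_resolution", "open_pr")
assert not is_allowed("triage_resolution", "auto_merge")
```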

Code is (still) the way to scale

From day one, we built a code-first experience: reliability should be part of the development process, not an afterthought. You build data pipelines with engineering best practices like versioning, collaboration, and automation, and reliability deserves the same treatment.

Elementary’s source of truth is your codebase, which makes it scalable and maintainable. Our core is a set of open-source packages that plug directly into your pipelines: dbt, and now Python as well. This lets data teams manage SQL, Python, and AI pipelines in a single platform, connecting data engineering and data science.

In Elementary 2.0, this code-first experience gets a major upgrade:

  • AI agents that code with you: Agents don’t just assist; they open pull requests and propose code changes alongside your team.
  • An MCP server wired into your dev workflow: All Elementary context (tests, incidents, lineage, health) is available directly from your editor and any tool that speaks MCP (see the sketch below).
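For readers unfamiliar with MCP, the sketch below shows, in generic terms, how a server can expose context as tools using the open-source `mcp` Python SDK (`pip install mcp`). The `get_test_results` tool and its stubbed data are invented for illustration; this is not Elementary's actual MCP server.

```python
# Generic illustration of exposing context over MCP with the open-source
# `mcp` Python SDK; this is NOT Elementary's shipped server.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("data-context")


@mcp.tool()
def get_test_results(model: str) -> str:
    """Return recent test results for a model (stubbed data for the sketch)."""
    stub = {"orders": "3 tests passed, 1 freshness warning"}
    return stub.get(model, "no results recorded")


if __name__ == "__main__":
    # Runs over stdio so MCP-aware editors and tools can call the tool above.
    mcp.run()
```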

We meet engineers where they actually work: in code. Elementary 2.0 doubles down on that principle, while also introducing a new experience for business users.

Bringing business users into the reliability loop

Business users often feel data quality issues first, but until today they have had no real way to help improve them. As a result, data often lags behind the business, slowing down the very decisions it is meant to empower.

Elementary 2.0 introduces an AI-first experience for business users, allowing them to:

  • Contribute data quality rules
  • Discover and understand data
  • Participate in governance

All without needing engineering skills or new workflows.

It connects their context to the same shared engine used by engineers and data scientists.

Scaling data and AI products with confidence

Elementary extends reliability across the entire data and AI lifecycle.

By combining shared context with proactive AI agents, organizations can scale pipelines, data products, and AI initiatives without slowing down teams or increasing operational overhead.

This is just the start.

We will continue to expand integrations, deepen automation, and raise the baseline for how reliable data systems should work.

See Elementary in action