Airflow 2 end-of-life lands on April 22. That's nine days from now. If your team hasn't started the migration to Airflow 3, you're running out of runway — and the upgrade is not a version bump you can sleepwalk through.

The Architecture Shifted Under You

Airflow 3 isn't Airflow 2 with a fresh coat of paint. The core architecture changed in ways that break assumptions baked into years of DAG code.

The biggest shift: workers no longer talk directly to the metadata database. In Airflow 2, every task had full access to the metastore — a convenience that teams leaned on for custom operators querying DAG state, homegrown monitoring scripts hitting metadata tables, even ad-hoc analytics on pipeline performance. Airflow 3 introduces a Task Execution Interface that routes all communication through a dedicated API server. The metastore sits behind a wall now.

This is genuinely better for security. Multi-tenant Airflow deployments no longer hand every task the keys to the kingdom. It's also the single change most likely to break your existing code.

The Migration That Actually Hurts

If you've ever written a custom operator that does something like this:

from airflow.settings import Session
from sqlalchemy import text

with Session() as session:
    result = session.execute(
        text("SELECT state FROM dag_run WHERE dag_id = :dag_id"),
        {"dag_id": "my_pipeline"},
    ).fetchall()

That code dies on Airflow 3. No deprecation warning, no fallback — it errors out immediately because workers can't see the database anymore.

The replacement uses the Airflow Python Client:

from airflow_client.client import ApiClient, Configuration
from airflow_client.client.api.dag_run_api import DAGRunApi

configuration = Configuration(host="http://localhost:8080/api/v2")
with ApiClient(configuration) as api_client:
    dag_run_api = DAGRunApi(api_client)
    runs = dag_run_api.get_dag_runs("my_pipeline", limit=10)

Looks straightforward in a blog post. In practice, it means auditing every custom operator, every helper function, every monitoring script your team accumulated over years of Airflow 2 usage. The official client covers DagRuns, TaskInstances, Variables, Connections, and XComs — but if you were doing anything creative with raw SQL against the metastore, you'll need to find the equivalent API endpoint or rethink your approach entirely.

I've seen teams discover dozens of these patterns scattered across their codebase, many written by engineers who left years ago. The metastore was the escape hatch everyone reached for when the official API didn't have what they needed.

Ruff Will Find Most of It

Before you panic-audit hundreds of DAG files by hand:

ruff check --preview --select AIR3 --fix --unsafe-fixes dags/

The AIR3 rule set flags breaking changes. AIR301 and AIR302 catch the hard breaks — removed imports, deleted operators, renamed APIs. AIR311 and AIR312 flag softer changes that aren't breaking yet but will be. The --fix flag auto-remediates what it can, which handles a surprising amount of the import reshuffling. Review anything applied under --unsafe-fixes before committing, since those fixes can change behavior.

This won't catch your raw SQL metastore queries. Those you'll need to grep for yourself.
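A rough sketch of that grep: walk the DAG folder and flag lines that touch the metastore directly. The pattern list below is illustrative, not exhaustive, and `find_metastore_access` is a hypothetical helper, not part of any Airflow tooling:

```python
import re
from pathlib import Path

# Patterns that suggest direct metastore access the AIR3 rules won't flag.
# Extend this list with whatever your team's code actually uses.
METASTORE_PATTERNS = [
    re.compile(r"from\s+airflow\.settings\s+import\s+Session"),
    re.compile(r"airflow\.settings\.Session"),
    re.compile(r"@provide_session"),
]

def find_metastore_access(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, line) for every suspicious line under root."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if any(p.search(line) for p in METASTORE_PATTERNS):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Expect false positives; the point is a short list of files to read, not a verdict.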

Five Gotchas That Won't Show Up in the Linter

The default schedule changed. DAGs without an explicit schedule_interval used to default to timedelta(days=1). Now the default is None. Your DAGs will go silent unless triggered manually. If you relied on implicit daily scheduling, every one of those pipelines stops producing data after the upgrade.
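The fix is mechanical but easy to miss: state the schedule explicitly. A minimal DAG fragment showing the Airflow 3 form — dag_id and dates here are placeholders:

```python
from datetime import datetime, timedelta

from airflow import DAG

with DAG(
    dag_id="my_pipeline",
    start_date=datetime(2025, 1, 1),
    # Implicit in Airflow 2; must be explicit in Airflow 3,
    # where the default schedule is None.
    schedule=timedelta(days=1),
):
    ...  # tasks go here
```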

execution_date is gone. Replaced by logical_date. Jinja templates referencing {{ execution_date }} break. The semantics also shifted with the new CronTriggerTimetable — if your pipeline logic depends on the relationship between the execution date and the actual data interval, test carefully. Set create_cron_data_intervals = True to preserve Airflow 2 behavior while you sort it out.
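In templates, the rename itself is usually a one-word change. A sketch assuming the Airflow 3 provider import path, with `report.sh` as a placeholder script that only needs the date string:

```python
from airflow.providers.standard.operators.bash import BashOperator

run_report = BashOperator(
    task_id="run_report",
    # Airflow 2: "report.sh {{ execution_date | ds }}"
    bash_command="report.sh {{ logical_date | ds }}",
)
```

The rename is the easy part; whether `logical_date` means what your downstream logic expects is the part to test.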

Common operators moved packages. BashOperator, PythonOperator, and other staples got split into apache-airflow-providers-standard. One pip install you'll forget until your first DAG import fails.

XCom pickling is disabled. Serializing arbitrary Python objects through XCom stops working. Either register a custom XCom backend or restrict yourself to JSON-serializable data. Teams passing Pandas DataFrames through XCom — you know who you are.
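If a custom backend is overkill, the lighter path is normalizing values before they leave the task. A sketch with a hypothetical `to_xcom_safe` helper — only datetimes are handled here; extend it for whatever your tasks actually push:

```python
import json
from datetime import datetime

def to_xcom_safe(record: dict) -> dict:
    """Hypothetical helper: convert values the default JSON XCom
    serializer rejects (here, datetimes) into plain strings."""
    return {
        key: value.isoformat() if isinstance(value, datetime) else value
        for key, value in record.items()
    }

row = {"run_at": datetime(2025, 4, 13, 9, 30), "rows_loaded": 1204}
safe = to_xcom_safe(row)
# Safe to return from a task now: round-trips through JSON.
json.dumps(safe)
```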

SubDAGs are deleted. Not deprecated. Deleted. If your team never refactored them into task groups during the five years they were deprecated, this upgrade forces the issue.
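The TaskGroup form drops in where the SubDagOperator used to sit. A sketch meant to live inside an existing `with DAG(...)` block; `extract_task`, `transform_task`, and `load_task` stand in for whatever the SubDAG contained, and this import path works on 2.x, so the refactor can happen before the upgrade:

```python
from airflow.utils.task_group import TaskGroup

# Replaces a SubDagOperator pointing at a child DAG. The grouped tasks
# still render collapsed in the UI, but run inside the parent DAG
# instead of spawning a separate DAG run.
with TaskGroup(group_id="load_section") as load_section:
    extract_task >> transform_task >> load_task
```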

The Prerequisite Most People Skip

You can't jump from any 2.x to 3.0. The minimum starting version is 2.6.3, and the recommended path goes through 2.11 first. That intermediate version emits deprecation warnings for everything that breaks in 3.0 — it's your canary release.

The upgrade involves a metadata schema migration that can take hours on large installations. Back up your metastore before you start. If the migration fails midway — network blip, disk space, anything — you end up in a half-migrated state that's brutal to recover from without a backup.

The sequence Astronomer recommends:

  1. Upgrade to 2.11

  2. Fix every deprecation warning

  3. Run ruff with AIR3 rules

  4. Clean the metadata DB (airflow db clean)

  5. Back up everything

  6. Upgrade to 3.x during a low-traffic window

  7. Watch for stuck tasks post-migration

So Should You Panic?

Nine days isn't enough for a careful migration. If you're starting today, you'll miss the EOL date — and honestly, that's fine. The orchestrator won't stop working on April 23. It just stops getting security patches. You have a grace period. Use it to do the migration right instead of fast.

The honest assessment: Airflow 3 is a genuine improvement. Task isolation makes shared deployments safer. The React UI is noticeably faster. DAG versioning — linking every run to a specific DAG version — solves a debugging headache that's plagued teams since 1.x. Event-driven scheduling with Assets (the rebranded Datasets) closes ground on Dagster and Prefect for reactive workflows.

But treat this as a real migration project. Budget two to four weeks for a team running a few hundred DAGs. If you're running thousands with custom operators and metastore queries, double that. The teams that started in January are wrapping up now. The teams starting today will be fine by May. The teams that haven't heard about the EOL date yet — forward them this post.