Q2 2026 AI Hiring Market Pulse

What we’re seeing across 50+ live searches at frontier AI labs, physical AI companies, inference platforms, and quant ML teams.

The AI hiring market has been described as “hot” for three years. That’s useful to no one. Here’s what’s actually happening underneath, based on the searches we’ve run in the last 90 days.

1. Physical AI is absorbing AV talent at speed

The strongest pattern we’ve seen this quarter: engineers leaving self-driving companies for physical AI and robotics foundation model labs. Alumni of GM’s Cruise shutdown are especially active, and they’re landing at Genesis AI, Skild, Physical Intelligence, Moonlake, and Applied Intuition’s newer programs.

The pitch that’s working: “Your AV sensor-fusion and motion-planning skills transfer directly to embodied agents, and the companies hiring for them are actually going to ship this decade.”

One client explicitly told us: “Any strong ML software engineer from self-driving would be awesome, even if they don’t neatly fit the role we posted.”


2. Compensation bands are bifurcating

There is no longer “a market rate” for senior ML engineers. There are three.

Frontier track ($400K–$500K+ base + equity): Senior ICs at Moonlake, Anthropic, OpenAI, xAI, Genesis. Real cash compensation, usually on-site in San Francisco or the Bay Area, often with no remote option. Candidates at this tier frequently turn down $300K roles before they reach the interview loop.

Scale-up track ($250K–$380K base + equity): Senior engineers at Applied Intuition, Nuance Labs, Twelve Labs, Baseten, Modal. Usually 5-day on-site, comp competitive but not frontier-lab level. Visa sponsorship often gated by defense or regulatory adjacency.

Remote quant / applied ML track ($500K–$1M+ total comp): Hedge funds and AI-native trading firms hiring generalist deep-learning engineers from ads and recommendations backgrounds. Explicit preference for “outside finance” candidates.

The bands don’t cross. A candidate anchored at $250K base will be screened out of the frontier track; a candidate anchored at $500K base will never accept a scale-up offer, no matter how good the mission.

3. On-site is back, and it’s a real filter

Five days a week on-site is now the default for frontier AI and physical AI roles. It’s not a negotiating stance — it’s a stated core value at Moonlake, Nuance Labs, Applied Intuition, and most well-funded seed labs.

Candidates who won’t relocate are being filtered out at the screening stage, not in the final round. We’ve seen strong profiles pass on $500K offers because they won’t leave Austin, Pittsburgh, or San Diego.

Remote is still the norm in quant ML and at inference-platform companies, but even there, remote candidates are increasingly expected to co-locate for 1–2 weeks per quarter.

4. Specialist stacks are replacing generalist ML as the sourcing axis

Two years ago, “ML engineer” was a useful sourcing term. Today it’s useless. The searches that close fast are tied to specific stacks:

  • Diffusion / flow matching / DiT — for image and video generation labs
  • VLA + diffusion policy + world models — for embodied AI and robotics foundation models
  • vLLM / TensorRT-LLM / SGLang + Kubernetes GPU orchestration — for inference platforms
  • FSDP / ZeRO / Triton kernels / FlashAttention — for training infrastructure
  • LLVM / MLIR / JIT + CUDA codegen — for ML compiler teams
  • DLRM / two-tower / sequence models + recsys at scale — for quant ML teams sourcing from ads backgrounds

A candidate who fits one of these stacks is 5–10× more hireable than a “strong generalist ML engineer.” And hiring managers know it — every client we work with can tell you exactly which stack they need.

5. The “out-of-the-box thinker” premium is back at non-frontier labs

A pattern we’ve seen repeatedly this quarter: scale-up and quant clients explicitly asking us not to pitch candidates with the standard frontier-lab pedigree. One client’s phrasing: “We want out-of-the-box thinkers, not super heavy research publications or only big companies.”

This is partly a comp-realism filter (a candidate commanding $700K+ at OpenAI will not take a $350K scale-up role), and partly a culture signal. The companies saying this tend to be engineering-first rather than research-first, and they’re often a better fit for strong junior ICs with unusual backgrounds than for PhDs fresh from DeepMind.

If you’re a candidate with 3–5 years of experience, strong OSS output, and no paper trail at a frontier lab — this is a good quarter for you.

6. What we expect in Q3

  • More physical AI talent wars. Skild, Physical Intelligence, Genesis, and at least two stealth robotics foundation model labs are all hiring into the same 500-person talent pool.
  • Inference monetization roles emerging as a new category. The “Head of Inference” role — someone who turns GPU capacity into profit — didn’t exist as a distinct job title 12 months ago. We now see it at AI data campus operators, GPU cloud providers, and some well-funded consumer AI companies.
  • Continued compression at the mid-level. The middle of the ML engineering market (3–5 YoE generalists without specialist stacks) is the hardest place to be right now. Candidates here are being squeezed by juniors who know one specialist stack deeply and seniors who’ve been through a full scaling cycle.
  • More hiring pauses followed by sharp sprints. We’ve seen multiple clients go quiet for 2–4 weeks ahead of a product launch, then re-open with a highly specific role. If you’re a candidate, expect a lumpy market. If you’re a client, plan your pipelines ahead of your launches.

Who we are and why this data is real

Adapt Talent runs live searches across AI, physical AI, inference infrastructure, and quant ML. Everything in this post is drawn from our current pipeline — not from job-board scraping or LinkedIn filters.

If you’re hiring in any of these areas, or if you’re a candidate exploring what’s out there, get in touch.