
Sector: Multimodal AI avatars
Stage: Seed ($10M, Accel + South Park Commons + Lightspeed)
Location: Seattle, on-site 5 days
Roles: ML Research Engineer + ML Infra Engineer + Operations Lead
The brief
The client is building the first human foundation model: an AI that understands and expresses emotion in real time across speech, facial expression, and body language. The founding team is ex-Apple PhDs from MIT, UW, and Oxford. They explicitly asked us for “really strong Video SWE from Robotics, AR/VR, Drones, and Gaming companies,” a target pool most AI recruiters wouldn’t even think to hit.
The challenge
Three concurrent roles at up to $450K base each, all requiring relocation to Seattle and five days a week on-site. The ML infra role had a 2+ years-of-experience floor rather than a senior-level bar, so we couldn’t simply source staff engineers; we needed sharp early-career infra engineers with WebRTC or video/audio production experience.
What we did
Outcome
Searches progressed in parallel across all three roles (research, infrastructure, and operations), with shortlists drawn from each of the target verticals: robotics, gaming, and consumer video.