I read Packy McCormick’s Not Boring newsletter this morning, and it sparked a realization about an inevitable billion-dollar business. It’s not imminent, but looking at the trajectory of AI, it feels like it’s bound to happen.
Packy noted a critical bottleneck in the coming wave of robotics: Training Data.
We know how this played out for LLMs. The internet provided a massive, pre-existing corpus of text to train the base models. Then, to refine them, companies began paying humans (from gig workers to PhDs) to review outputs and/or feed even more data into the system. We are essentially paying people to teach the AI how to think.
Robots don’t need to know how to create text. They need to know how to move. And unlike text, there is currently no “internet of physical motion” for crawlers to scrape and feed into training sets. The training data for messy, real-world tasks (e.g. fixing a sink, wiring a breaker box, perfectly flipping a burger) simply doesn’t exist at scale. Waymo and Tesla use cars to build massive datasets for their autonomous driving systems, but nothing like the scale of the internet exists for human motion.
This is the gap. I’m pretty sure someone will inevitably fill it.
Just as a whole industry has emerged to pay white-collar workers (bankers, lawyers, even gig workers) to train LLMs, we are likely to see a market that pays workers to wear motion-capture gear on the job to train humanoid robots.

The technology is pretty much solved; motion capture is used all the time in entertainment (the Avatar movies, video games). The “new” part is the business model, and the conflict it’s about to trigger. I see two ways this plays out.
1. The “Guerrilla” Model (let’s call it “Uber for Motion”). Imagine a gig-economy platform that sends you a low-profile mocap shirt and gloves. You wear them under your uniform while you work your shift at Starbucks, at Burger King, or on a construction site. You do your job, the sensors capture the data, and the gig-data marketplace pays you $5/hour on top of your wages for the rights to that data.
You are effectively “data moonlighting” and monetizing your physical labor twice. Once for the coffee you serve, and once for the motion of serving it. The gig economy platform then sells that aggregated motion data to the robotics makers.
The gig workers make a couple of bucks; the platform makes billions.
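To make the “motion data as a product” idea concrete, here is a minimal sketch in Python of what one labeled motion clip in such a marketplace might look like. Everything here is hypothetical (the field names, the task labels, the $5/hour payout rule are assumptions for illustration, not a real platform’s schema):

```python
from dataclasses import dataclass, field

@dataclass
class MotionSample:
    # Hypothetical schema: one captured movement clip from a worker's shift.
    worker_id: str     # pseudonymous ID, not the worker's real identity
    task_label: str    # e.g. "burger_flip", "espresso_pull"
    duration_s: float  # clip length in seconds
    joint_angles: list = field(default_factory=list)  # per-frame sensor readings
    consented: bool = False  # worker granted resale rights for this clip

def payout(samples, rate_per_hour=5.0):
    """Toy payout rule: $5 per hour of consented, labeled motion data."""
    hours = sum(s.duration_s for s in samples if s.consented) / 3600
    return round(hours * rate_per_hour, 2)

clips = [
    MotionSample("w1", "burger_flip", 1800, consented=True),
    MotionSample("w1", "till_ring_up", 1800, consented=False),
]
print(payout(clips))  # only the consented half-hour counts: 2.5
```

The consent flag is the interesting design choice: it is exactly the field the “who owns your movement” fight below would be litigated over.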
2. The Enterprise Model (McDonald’s as a Data Farm). Alternatively, large employers realize they are sitting on a goldmine. McDonald’s doesn’t just sell burgers; it owns a proprietary process for making burgers. In this model, it requires employees to wear the sensors, capturing petabytes of training data (including some “perfect execution” data) every day. McDonald’s then packages this dataset and sells it to Tesla (for Optimus) or Figure for their bots, unlocking a massive B2B revenue stream that has nothing to do with food.
The Coming Conflict: Who Owns Your Movement?
If a master welder wears a “Guerrilla” suit and sells the data of their specific, highly skilled hand movements to a robotics company, did they steal a trade secret? Does the employer own the “how” of the job, or does the worker own their own biomechanics? Is workers’ “Kinetic IP” a thing?
Conversely, if a corporation forces the “Enterprise” model, are they harvesting biometric data without fair compensation?
We are moving from an era of battles over intellectual property in words, images, and sounds to an era of battles over the proprietary value of physical human motion.
If the robots are coming, who is going to get paid to teach them how to move?
