How digital brains for humanoid robots are being built

Every year, the highlight of Nvidia’s GTC developer event is a cool robot chilling out with CEO Jensen Huang during his keynote.

At last year’s event, Blue, a bipedal robot, fumbled around the stage, disobeyed Huang’s simple commands and navigated in seemingly random directions.

This year, the robot was Olaf from the animated movie Frozen. Olaf was more animated than Blue was in 2025, walking, answering questions and behaving much like its character in the movie.

Olaf offered a real-world glimpse at how much humanoid robots have improved in just one year.

Not surprisingly, enterprises are chasing the idea of humanoid robots that embody human characteristics and work in real-world conditions. “We have reached a tipping point where robots are getting out of the lab and making their way into the messy physical world,” Amit Goel, head of robotics and edge computing ecosystem at Nvidia, said during a panel discussion at GTC.

Morgan Stanley has said that as many as 1 billion robots could be on Earth by 2050. But robot makers still face sizable hurdles in their efforts to ensure humanoid robots are safe for the real world.

In particular, building the robot brain has been more challenging than expected, the panelists said. While AI can help robots reason, decide and act, there’s still a gap between that and the complex calculations brains continually make.

With that in mind, the panelists talked about the various approaches to building a brain for humanoid robots.

Data collection: Robot makers first need to train and simulate robots for real-world environments. That has led some of the panelists’ companies to collect internet videos to get a sense of human behavior.

“Learning from just watching other humans from just any sort of camera or internet video datasets and so on — those are very rich datasets,” said Ashok Elluswamy, vice president of AI at Tesla, which is building the Optimus humanoid robot.

The collection of human-scale data, Elluswamy said, helped build Tesla’s autonomous driving capabilities. Among the techniques used was teleoperation, which collects data from a human controlling a robot’s operations and uses that information for training.
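In practice, teleoperation data is typically logged as synchronized observation-action pairs that can later train an imitation-learning policy. Here is a minimal sketch of such a logger, assuming a hypothetical data stream and field names; it illustrates the idea, not Tesla’s actual pipeline:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical teleoperation logger: records synchronized
# observation/action pairs so they can later be used to train an
# imitation-learning policy. All field names are illustrative.

@dataclass
class TeleopSample:
    timestamp: float
    joint_positions: list[float]   # robot's current joint state
    camera_frame_id: str           # pointer to the stored camera image
    operator_command: list[float]  # what the human asked the robot to do

def record_session(stream, out_path: str) -> None:
    """Append each (observation, command) pair to a JSON-lines file."""
    with open(out_path, "a") as f:
        for obs, cmd in stream:
            sample = TeleopSample(
                timestamp=time.time(),
                joint_positions=obs["joints"],
                camera_frame_id=obs["frame_id"],
                operator_command=cmd,
            )
            f.write(json.dumps(asdict(sample)) + "\n")
```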

Brain architecture: Not all robots are alike. Some companies, such as Skild.AI and Tesla, are building universal robots that can do everything. Others, such as Hexagon, build robots for specialized tasks.

Tesla’s architecture resembles the human brain, in which every bit of information in different regions is shared across units. For example, information gathered from cameras, sensors, or other data sources is shared across the system, which helps train the robot and determine its next steps.
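In code terms, that kind of shared representation might encode each sensor stream separately and fuse the results into a single vector that every downstream module reads. A minimal sketch, with toy encoders standing in for real networks; this illustrates the pattern, not Tesla’s architecture:

```python
import numpy as np

# Toy encoders stand in for real neural networks. The point is the
# pattern: every modality feeds one shared representation that all
# downstream modules (planning, control, training) consume.

def encode_camera(image: np.ndarray) -> np.ndarray:
    return image.mean(axis=(0, 1))  # stand-in for a vision backbone

def encode_imu(readings: np.ndarray) -> np.ndarray:
    return readings.flatten()       # stand-in for an IMU encoder

def shared_representation(image: np.ndarray, imu: np.ndarray) -> np.ndarray:
    # Concatenate per-modality features into one vector shared
    # across the whole system.
    return np.concatenate([encode_camera(image), encode_imu(imu)])

latent = shared_representation(np.zeros((480, 640, 3)), np.zeros((3, 3)))
print(latent.shape)  # (12,): one vector the whole system shares
```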

Hexagon focuses on locomotion and high-precision robots, isolating the brain activity to specific tasks. It uses agentic AI and large language models (LLMs) to drive its brain architecture.

“If you have many different models…, you need to orchestrate them so that you always take the best model for the right task, given the right environment,” said Arnaud Robert, president of Hexagon.
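Robert’s description maps naturally onto a routing layer: a registry of models keyed by task and environment, with a dispatcher that picks the best fit. A hypothetical sketch, with made-up registry contents:

```python
from typing import Callable

# Hypothetical orchestrator in the spirit of Robert's description:
# given a task and the current environment, route to the model best
# suited for that combination.

ModelFn = Callable[[dict], dict]

class Orchestrator:
    def __init__(self) -> None:
        self._registry: dict[tuple[str, str], ModelFn] = {}

    def register(self, task: str, environment: str, model: ModelFn) -> None:
        self._registry[(task, environment)] = model

    def dispatch(self, task: str, environment: str, observation: dict) -> dict:
        model = self._registry.get((task, environment))
        if model is None:
            raise LookupError(f"no model for {task!r} in {environment!r}")
        return model(observation)

orch = Orchestrator()
orch.register("grasp", "warehouse", lambda obs: {"gripper": "close"})
orch.register("navigate", "warehouse", lambda obs: {"wheel_speed": 0.5})
print(orch.dispatch("grasp", "warehouse", {"object": "box"}))
```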

Agility Robotics and Physical Intelligence are using modular brain architecture, which separates the brain into hierarchies to conduct different tasks. Agility’s Digit robot, for instance, has a “task” layer that describes what needs to be done, a “skill” layer that covers how to do it, and a “control” layer that executes the job, including locomotion and task completion.

“We have a combination of AI-learned and also engineered skills…. We can mix and match those different layers together…, which is really useful for practical deployment,” said Pras Velagapudi, CTO at Agility Robotics.
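A bare-bones version of that hierarchy might look like the following. The task/skill/control layer names come from the article’s description of Digit; everything else is hypothetical:

```python
# Minimal sketch of a modular task/skill/control hierarchy. The layer
# split mirrors the description of Agility's Digit above; the class
# contents are illustrative only.

class ControlLayer:
    """Executes low-level motion, e.g. locomotion primitives."""
    def execute(self, motion: str) -> None:
        print(f"control: executing {motion}")

class SkillLayer:
    """Knows *how* to do something; skills may be learned or engineered."""
    def __init__(self, control: ControlLayer) -> None:
        self.control = control

    def perform(self, skill: str) -> None:
        motions = {"pick_tote": ["walk_to_shelf", "grasp", "lift"]}
        for motion in motions.get(skill, []):
            self.control.execute(motion)

class TaskLayer:
    """Describes *what* needs to be done and sequences skills."""
    def __init__(self, skills: SkillLayer) -> None:
        self.skills = skills

    def run(self, task: str) -> None:
        if task == "move_totes":
            self.skills.perform("pick_tote")

TaskLayer(SkillLayer(ControlLayer())).run("move_totes")
```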

Simulating the real world: Simulation helps evaluate robot behaviors and whether robots stick to policies and adhere to safety requirements. “As policies get more and more general, you need to be testing them in more and more situations…. Doing that in the real world can be increasingly costly, increasingly challenging,” said Chelsea Finn, assistant professor at Stanford and co-founder of Physical Intelligence.

Simulation complements real-world validation and can be an important data source for fine-tuning robot behavior. “We simulate, we go in reality. We capture what it is, and then we feed it back to the simulator to converge the gap towards zero,” said Robert.
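That loop can be written down as iterative calibration: simulate, measure reality, feed the gap back into the simulator’s parameters, and repeat until the gap is near zero. A toy sketch assuming a one-parameter friction model; nothing here is Hexagon’s actual pipeline:

```python
# Schematic sim-to-real loop in the spirit of Robert's quote: simulate,
# capture reality, feed the difference back, and iterate until the gap
# converges toward zero. The "simulator" is a toy friction model.

REAL_FRICTION = 0.62  # stand-in for measurements captured on hardware

def simulate(friction: float) -> float:
    # Toy simulator: predicted stopping distance depends on friction.
    return 1.0 / friction

def measure_real() -> float:
    return 1.0 / REAL_FRICTION

def calibrate(friction: float = 0.3, step: float = 0.05,
              tol: float = 1e-3) -> float:
    for _ in range(200):
        gap = simulate(friction) - measure_real()
        if abs(gap) < tol:
            break
        # Feed the observed gap back into the simulator's parameter.
        friction += step * gap
    return friction

print(f"calibrated friction: {calibrate():.3f}")  # converges near 0.620
```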

For example, Hexagon wanted to teach its Aeon robot how to climb stairs. Engineers initially suggested locking the wheels it uses to move and then working out the optimal leg motor movements to go up stairs. Reinforcement learning found a better approach.

“If you use simulation and reinforcement learning, the best way for a wheel-based humanoid to go up the stairs is actually to have slow speed and never go to zero,” Robert said.
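In a reinforcement-learning setup, that finding could be expressed as reward shaping: reward height gained, keep wheel speed near a slow target, and penalize letting it hit zero. A hypothetical sketch with made-up weights, not Hexagon’s training code:

```python
# Hypothetical reward-shaping terms for a wheel-based humanoid climbing
# stairs, reflecting the finding quoted above: keep wheel speed low,
# but never let it reach zero. All weights are illustrative.

def stair_climb_reward(height_gain: float, wheel_speed: float,
                       target_speed: float = 0.2) -> float:
    progress = 5.0 * height_gain                               # reward upward progress
    speed_penalty = -2.0 * (wheel_speed - target_speed) ** 2   # stay near a slow target
    stall_penalty = -3.0 if abs(wheel_speed) < 1e-3 else 0.0   # never go to zero
    return progress + speed_penalty + stall_penalty
```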

Safety: Robotic safety can’t be validated in simulation alone; it needs real-world testing. “Having a fleet of robots practice…tasks in the real world and using the data to make sure the simulation is grounded in reality is quite helpful,” Elluswamy said.

Safety testing needs to be at every level of the robotic stack, panelists said. “A lot of the safety measures are at the lowest level, because that’s where you can really guarantee and ensure that it will be operating in the ways that you expect,” Finn said.

Agility Robotics’ Velagapudi said it might not be obvious that a robot can slip on a dusty warehouse floor. That means adding a controller that stays robust across different surfaces and textures and resists being pushed, pulled or caught on objects.
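One common way to train in that kind of robustness is domain randomization: every simulated episode samples a different floor friction and applies random shoves, so the learned controller can’t overfit to one surface. An illustrative sketch, where the Sim interface and ranges are hypothetical rather than Agility’s method:

```python
import random

# Illustrative domain randomization: each simulated episode samples a
# different floor friction and random external pushes, so the trained
# controller must stay robust across surfaces and disturbances.
# The `sim` interface and the numeric ranges are hypothetical.

def randomized_episode(sim, policy, steps: int = 1000) -> None:
    sim.set_friction(random.uniform(0.2, 1.2))  # dusty floor vs. grippy rubber
    obs = sim.reset()
    for _ in range(steps):
        if random.random() < 0.01:              # occasional random shove
            sim.apply_push(force=random.uniform(20.0, 100.0))
        obs = sim.step(policy(obs))
```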

Safety is critical if the robot is expected to autonomously execute tasks for a very long time, Elluswamy said.

“Robotics’ last mile is extremely hard,” Skild.AI CEO Deepak Pathak said during the panel discussion.
