Figure 01, the robot closest to the humanoid machines of science fiction
Robotics is advancing toward embodiment and learning techniques that allow android robots to be developed with human-like appearance and behavior
Figure 01 is the closest prototype to the humanoid dreamt up by science fiction. The robot, which in March received investment and technological support from the artificial intelligence company OpenAI, the processor giant Nvidia, and Amazon founder Jeff Bezos, is able to discern objects not only by their shape but also by their functionality, perform different tasks by adjusting its movements to the resistance of what it touches, interact with the environment, and even evaluate its own performance. Figure 01 is similar in appearance to the machines featured in the movie I, Robot, and while still far from the imagining of Paul Verhoeven’s RoboCop, it is an example of a dazzling technological race that, according to Luis Merino, professor and director of the Service Robotics Lab at the Pablo de Olavide University in Seville, is breaking the limits of the “passivity of automatic learning” to approach human capabilities, where interaction with the environment — embodiment or personalization — is the key.
The commitment of large companies to this technology is evident. Nvidia, in addition to its financial support to Figure 01, has announced GR00T, a specific platform for humanoid robots, the development of which is an accelerated race involving companies such as 1X Technologies, Agility Robotics, Apptronik, Boston Dynamics, Figure AI, Fourier Intelligence, Sanctuary AI, Unitree Robotics, and XPENG Robotics.
Dennis Hong is the founder of RoMeLa and creator of Artemis, an android robot that plays soccer as a demonstration of the versatility achieved in its movement capabilities. Hong explains the qualitative leap in new developments: “99.9% of the robots in existence today use servo motors and are very rigid. They are great for factory automation or single household tasks (such as autonomous vacuum cleaners), but this robot [Artemis] mimics biological muscle and allows it to be agile, fast, robust, and quite intelligent.”
This intelligence, as Hong explains, allows the robot to recognize a good plan and make decisions autonomously. “The future,” he concludes, “is that it can do anything a human can do.” To demonstrate this, Hong holds Artemis from behind and pushes it to force it to react to an unexpected event, a test that the robot passes.
This is a significant step up from models such as those developed by DEEP Robotics, which develops quadrupeds for industrial and rescue work. Marketing director Vera Huang highlights the “motor advances, such as the ability to jump or climb stairs,” but admits that they are not equipped with the latest generation of intelligence.
Cassie, developed by Agility Robotics, has been trained to travel across different surfaces and perform large jumps without prior knowledge of the terrain. It does this through the technique of reinforcement learning. “The high-level goal was to teach the robot to learn how to do all kinds of dynamic motions the way a human does. We allowed the robot to utilize the history of what it’s observed and adapt quickly to the real world,” Zhongyu Li, a PhD student at the University of California, Berkeley, and a participant in the robot’s development, explained to MIT Technology Review.
Reinforcement learning works by rewarding or penalizing an AI as it attempts to carry out a task. In this case, the approach taught the robot to generalize and respond in new scenarios, rather than freeze as its predecessors would have done.
“The next big step is for humanoid robots to do real work, plan activities, and interact with the physical world in ways beyond interacting with their feet and the ground,” says Alan Fern, professor of computer science at Oregon State University.
Figure, a 1.70-meter-tall, 60-kilogram (132-pound) robot that can carry a third of its weight, is electric, with five hours of battery life and a top speed of 1.2 meters per second. But what makes it different is its ability to perform different tasks, discern people and objects, act autonomously and, above all, learn. The company argues that its human appearance is necessary because “the world is designed for it.”
Figure is an example of embodiment or personalization. “We cannot separate mind and body. Learning brings them together. Most robots process images and data. You train them and they don’t have interaction. However, humans learn by interacting with our environment, because we have a body and we have senses,” explains Merino.
His team has already developed assistance robots that, when acting as tour guides, adapt their explanations to people’s reactions, or act according to the feelings of an elderly person they are helping, as well as avoid violating the social distance of the humans they are working with.
But in most of today’s robots, even those with artificial intelligence, “learning is passive,” according to Merino. Cassie, in addition to the development of the artificial neural network, has developed its skills through reinforcement learning, a technique similar to that used for training animals.
Merino elaborates on this: “We don’t give the robot an explicit description of what it has to do, but we provide a signal when it misbehaves and, henceforth, it will avoid doing so again. And the other way around, if it does well, we give it a reward.” In the case of pets, this can be a toy, a cuddle, or a treat. For robots, it is a numerical reward signal, which they will try to maximize through their behavior.
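The reward-and-penalty loop Merino describes can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms. This is a minimal illustrative example, not the actual training setup used for Cassie or Figure 01: the one-dimensional corridor, reward values, and hyperparameters are all assumptions chosen for clarity.

```python
# Minimal sketch of reinforcement learning (tabular Q-learning).
# The robot (agent) is never told what to do; it only receives a
# reward at the goal and a small penalty for every other step,
# and learns a policy by trial and error.
import random

random.seed(0)

N = 5                    # 1-D corridor: states 0..4, goal at state 4
ACTIONS = [-1, +1]       # step left or step right
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

# Q-table: estimated long-term reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    """Environment: +1 reward at the goal, mild penalty otherwise."""
    s2 = min(max(s + a, 0), N - 1)
    r = 1.0 if s2 == N - 1 else -0.01   # the "treat" vs. the penalty
    return s2, r, s2 == N - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit the best-known action
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Nudge the estimate toward reward plus discounted future value
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy: the best action from each non-goal state
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
print(policy)
```

After training, the policy moves right (+1) from every state, even though the rule "go right" was never programmed explicitly; it emerges purely from the reward signal, which is the point Merino makes about not needing to enumerate every circumstance the robot may face.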
The researcher clarifies that this system means — in addition to an advance in robotic capabilities — a formula to make robots more efficient, as they require less energy than processing millions of data points covering every possible variable. “It is very difficult to program a robot for all the circumstances it may face,” Merino adds.
“We’ve had robots in factories for dozens of years doing things in an algorithmic and repetitive way. But if we want them to be more general, we have to go a step further,” he concludes. The robotics race is heading in this direction.
And, as with any digital advance, security will be a determining factor. Any system, even a simple household appliance connected to the cloud, can fall victim to attacks. With this in mind, Nvidia, which is involved in the most advanced robotics developments, has signed an agreement with Check Point to improve the security of artificial intelligence infrastructure in the cloud.
Amazon Web Services has also announced a collaboration with Nvidia to use the latter company’s platform, Blackwell, presented this year at its GTC 2024 developer conference. The agreement includes the joint use of both companies’ infrastructures in developments that include robotics.