Screens in the palm of the hand and tamagotchi assistants: The race to imagine devices beyond the cell phone

Technology companies are gambling on artificial intelligence to help develop new devices that will revolutionize the world of personal assistants

The Ai Pin placed on a user’s sweatshirt, in an image provided by the American company Humane.
Luis Alberto Peralta

For global technology giants, using artificial intelligence (AI) in devices is nothing new. Companies like Amazon, Samsung, Apple, and Microsoft claim to have worked with tools such as machine learning for more than a decade. What has changed is the degree of sophistication these programs show in their interactions with users, especially since the launch of platforms like ChatGPT. Against this backdrop, a handful of start-ups are fueling the debate, arguing that smartphones and virtual assistants will soon be displaced by new devices that put AI front and center.

Companies are incorporating artificial intelligence into glasses, watches, and all kinds of products. Some devices, however, go much further. One of the newest, and one that has attracted the most attention in the industry, is the Ai Pin from the American company Humane. Developed in collaboration with Microsoft and OpenAI, the device is a virtual assistant that clips onto your clothing. It has no screen, but it does have a microphone and a speaker, as well as a touch panel and a camera to record visual information and gestures. It also has its own “unlimited” internet connection and can link to other devices (such as headphones) via Bluetooth.

According to Humane, you can use your voice to interact with the brooch, or you can manipulate a kind of laser screen that is projected onto the palm of your hand. The gadget can take photos and reply to messages automatically. It also performs complex functions, such as calculating the calories in food, translating dialogues in real time, summarizing conversations, or answering questions as ChatGPT would. However, to view content such as photos or videos in more detail, you have to use a computer to access the “center” platform, where all of the device’s information is stored.

Humane device’s screen projected onto the palm of a hand. Company’s own image.

The device’s creators are Imran Chaudhri and Bethany Bongiorno, two of the minds that designed the iPhone and Apple’s iOS operating system. A Humane spokesperson tells EL PAÍS that the company’s objective is to enhance “human” experiences over digital ones, reducing the time spent looking at screens. As explained by the start-up, the brooch collects information about “every aspect of our lives” through our daily interactions. With this data, artificial intelligence can “learn” the user’s habits to create personalized “AI experiences” and solve problems without the need for a mobile phone or applications. At the moment, the company has not published details of how these “experiences” will work, but they will be linked to its own Cosmos operating system.

The company has started an aggressive marketing campaign to position itself in the wearable tech market. In fact, the brooch was showcased by supermodel Naomi Campbell during Paris Fashion Week 2023, and it will be presented at the Mobile World Congress in Barcelona this week. The device will go on sale in the United States in April at an expected price of $699, plus a $24 monthly subscription for its services. The spokesperson for the start-up says that a launch date for Europe has not yet been set.

A tamagotchi as an assistant

This gadget is not the only one of its kind. The American company Rabbit presented its R1 device in January. The virtual assistant and “pocket companion” was created with the help of the well-known design firm Teenage Engineering. The device, whose appearance is inspired by the Bandai Tamagotchi popular in the 2000s, received 40,000 orders at the launch of its pre-sale. Unlike Humane’s device, this one does have a (4.88-inch) screen, and its creators present it as a logical continuation of virtual assistants and cell phones. They claim it can do everything a smartphone can, but also identify objects in real time and execute functions as complex as buying a flight on its own. It is operated through a touch screen and voice commands.

Rabbit proposes its R1 as a necessary response to the complexity of interfaces. According to the company, the logic behind the device is simple: it uses artificial intelligence models so that its system “learns” to operate programs that already exist. In this way, it allows users to interact quickly and easily with mobile applications by voice, or to resolve their queries through automatic internet searches. In fact, its creators promise that in the future users will be able to personally teach it how to use almost any program through its “learning web mode.”

“We’ve reached a point where we have hundreds of apps on our phones with complicated designs that don’t talk to each other. As a result, end users become frustrated with their devices and often get lost,” said company founder Jesse Lyu in January. Rabbit also reported that its device will cost $199, although more than one industry analyst has calculated that this figure would not be enough to make the business sustainable unless additional payments are added, as is the case with the Humane assistant.

Beyond mobile

Big technology companies are also in a race to make the most of this technology. For example, Meta announced in September 2023 that it would incorporate its new artificial intelligence assistant (Meta AI) into the smart glasses it develops with the Ray-Ban brand. According to Mark Zuckerberg’s company, this addition will allow users to control different features of the glasses through voice commands, as well as ask the assistant questions.

David Alonso, director of Samsung’s Mobility Business in Spain and Portugal, explains to EL PAÍS that some of its devices already have a specific processor for these functionalities. The company recently announced that its “mobile AI” will be present on both phones and tablets, and that it will allow functions such as real-time call translation, image editing with generative AI, and assisted internet searches. Alonso states that the “ecosystem” concept will gain value with AI, because this technology will link devices as different as tablets, watches, televisions, and household appliances.

Alonso also highlights that this is the first generation of devices that have AI as a priority, and he does not think that possible new devices will displace those that currently rule the market.

“They have been killing the mobile phone for many years now. There is talk in many forums about them disappearing altogether. But what we see every day is that the smartphone is more alive than ever, and it is an essential tool. Now, with these new artificial intelligence functionalities, we are witnessing the beginning of a new era of telephony, which will go far beyond a device or a screen,” says the Samsung executive. In this context, Alonso adds that AI is not only helping to personalize experiences, but is also making devices more accessible and inclusive for people with visual impairments, as well as making their products more sustainable in terms of energy consumption.

Beyond devices

For Andrés Pazos, senior director of business development at Alexa in Spain, the way in which AI can enrich user experiences goes beyond the devices themselves. An example of how AI is already changing the relationship with the real world, Pazos tells EL PAÍS, is the way in which voice commands are reducing the digital divide. The executive highlights that increasingly fluid interactions through dialogue allow older people and people with physical impairments to use devices more easily and efficiently, without having to learn commands or interact with a screen. Pazos says that artificial intelligence has been key in this process, since it has allowed devices to learn to interpret different ways of speaking in order to process user requests.

Another example is the so-called “smart properties,” which allow users to interact with different spaces using their voice, through assistants like Alexa. This technology is already being implemented in hotels in Spain so that guests can request recommendations, room service, or dry cleaning. In the short term, the hope is that factors such as a room’s lighting and temperature will also be controllable, saving guests time. For Pazos, the future is an AI that functions like a “brain,” connecting to a network of devices to provide each user with a personalized experience and even anticipate their needs.

For the Amazon executive, the future will consist of achieving what he calls “ambient intelligence”: ensuring that devices make our lives easier by working harmoniously and fluidly through AI and interconnectivity. “We are talking about a step beyond artificial intelligence. It would mean interacting with all the services organically, and when they are not needed they fade into the background. We would not have to learn to use devices; thanks to artificial intelligence, we would interact in such a natural way that we would forget they are there,” he concludes.
