Colin Murdoch, from Google DeepMind: ‘Gemini will transform the way billions of people live and work’

The chief business officer of Google’s AI research laboratory says that artificial intelligence is accelerating scientific research, but that we ‘must be careful because it is a very powerful technology’

Google DeepMind business manager Colin Murdoch at the company's London offices.
Manuel G. Pascual

Google has dominated the development of artificial intelligence (AI) systems for years. This has undoubtedly been helped by its 2014 acquisition of DeepMind, the London-based startup focused on AI research that developed AlphaGo, a program capable of defeating a grand champion of the complex Asian board game Go, a feat that opened the debate over whether AI would eventually surpass the human mind.

But Google’s unquestioned dominance was interrupted last year by another startup — OpenAI. The launch of ChatGPT, the fastest-growing consumer application in history, caught the big technology companies off guard and forced them to accelerate their AI programs. In April of this year, DeepMind — which until then had functioned as a relatively independent research laboratory — and Google Brain — the technology company’s other major research division — merged into a single organization: Google DeepMind, which employs some of the best AI scientists in the world.

Colin Murdoch, 45, is the chief business officer of Google’s new AI super division, which has just presented its first creation: Gemini, a multimodal generative AI platform that can process and generate text, code, images, audio and video from different data sources. Those who have used it say that it far surpasses the latest version of ChatGPT, and that it puts Google back in the fight to dominate the market.

An electronics and computer engineer by training, Murdoch joined DeepMind nine years ago, after building up experience managing startups and large corporations. His job is to ensure that the AI advances made by Google’s scientific teams have an impact on the general public. He speaks to EL PAÍS from London by video call.

Question. Is Gemini the definitive answer to ChatGPT? What is new about it compared to the popular OpenAI application?

Answer. Gemini is a significant advance in AI development. It’s our largest and most capable model to date: it understands and reasons about text, images, audio, video and code, so it can help people be more creative or learn. For example, let’s say your child brings home physics homework and needs help understanding what they have done right and wrong. If you took a photo of the page, Gemini would not only give you the correct answer to the problem, but would read the document and explain what the child has done right, what they have done wrong, and the underlying concepts. Users can also interact with Gemini through Bard, which now runs on Gemini Pro and is more effective at understanding, summarizing, reasoning, coding and planning. It is already available in English in more than 170 countries, and in the coming months it will reach billions of people through other core Google products such as Search, Ads, Chrome and Duet AI. In the long term, tools like Gemini will transform the way billions of people live and work around the world.

Q. What do you think of the commotion caused by the departure and subsequent reinstatement of Sam Altman as CEO of OpenAI?

A. They were very interesting days in the industry. But we remain focused on our work of launching world-class products and research. We’ve had an incredibly busy few months — from the announcement of Lyria, our advanced AI music generation model, which will increase creativity and drive new forms of innovation for artists, creators and fans in the future, to the release of GraphCast, our state-of-the-art weather forecasting system, and Gemini. We are very confident in our technology portfolio and are excited about the year ahead.

Q. What is artificial intelligence capable of at this moment?

A. Our research laboratory seeks to improve people’s lives, and I believe that AI is a good tool for doing this, as long as you work carefully, which is in DeepMind’s DNA. One of the areas I’m most excited about is what we call science at digital speed: AI helping to advance scientific progress. I’ll give you an example. Proteins are the building blocks of cells. When they malfunction, they can cause problems or illness. So science has been studying the structure of proteins for years, specifically the shape of those structures, which is what really tells us how they work. The number of shapes that these structures can take exceeds the number of atoms in the universe. Well, two years ago, a DeepMind team managed to develop an algorithmic model, AlphaFold, which is capable of predicting the structure of a protein from its amino acids, the essential components of proteins. Thanks to it, we know the structure of 200 million proteins, and we have unlocked that knowledge. Our tool is being used to accelerate research into methods to combat antibiotic resistance, into enzymes that digest plastic in the oceans, and into cancer vaccines.

Q. Have you made progress in other areas?

A. We are hopeful that there will be progress with nuclear fusion, the cleanest energy source there is. To achieve it, plasma is circulated at high speed, about 10,000 revolutions per second, and superconducting magnets are used to confine it and reduce its friction inside the tubes. We are using AI to optimize, in real time, the calibration of these magnets so that the resistance is as low as possible.

Q. The interest of the general public, and it seems also that of companies, has turned towards generative AI. Do you think that could harm progress in many other areas of AI, like the ones you just mentioned?

A. We have been working on generative AI for a long time. In fact, the models that are now so successful are based on an architecture called the Transformer, which Google scientists developed five years ago. What has happened in the last 12 to 18 months is that things have scaled up very quickly: we have bigger models and more data. The fundamental change is that we can now interact with these models in everyday language, like the language you and I are using right now, and that makes the technology more accessible. Before, only computer scientists could work with it; now, anyone who can speak and write can do so.

Q. You have mentioned the importance of being careful when developing AI. What kind of rules are you following?

A. AI is very promising, but we must be very careful because it is a very powerful technology. We have a number of operating principles that govern how we conduct our research. A second element is that we ourselves do research in areas such as bias and fairness, to ensure that we address those challenges correctly. Thirdly, it is important to have the right institutional setup within the organization, and the appropriate culture. We have multidisciplinary groups that include ethicists, engineers and a wide range of professionals with different specializations, who test and analyze the benefits and risks of each system we develop. We also invite external specialists to help us.

Q. How do you think this technology should be regulated?

A. Regulation is important. I think it has to be measured and proportionate, so as not to constrain innovation while at the same time mitigating the big risks, because this is an exceptionally promising technology.

Q. Does the approach of the artificial intelligence regulation that the EU has just adopted seem correct to you?

A. I think so; it establishes a proportionate, risk-based approach for each tool. It seems to me a good starting point for the global debate. It is important that we promote this kind of coordinated approach to regulation and policy around the world, so that we can maximize the benefits of these technologies for everyone — and there are many — while also adequately mitigating their risks.

Q. DeepMind was until now Google’s advanced AI laboratory. Has it changed after its business integration with Google? Do you now have to orient your work more towards commercial results?

A. I think the merger has been a very successful move. On the one hand, we have a scientific team unparalleled in the field of AI; on the other, a gigantic market that we can access thanks to Google, which offers us the possibility of trying to solve people’s problems. My job is to find ideas at the intersection between these two spheres. And, when we find them, incubate each idea and take it forward.

Q. Do you have an example of when these two spheres coincided?

A. At DeepMind we have a program, MuZero, capable of playing chess, Go and other complex games. One day, talking to someone at YouTube, they told us they needed to reduce the bandwidth required to deliver videos to people all over the world, so that they can be watched regardless of the speed of the internet connection. There was a very creative moment when we realized that a video is, in essence, like a game of chess: it can be viewed as a succession of individual still images, with transitions between them. Each of those images can be a position on the chessboard, and the transitions can be seen as chess moves. So we applied MuZero to video and gave it the goal of reducing its size, of compressing it. We saw that it had a dramatic impact on the size of those videos, and that technology is now built into YouTube.
