
Floods, fires, smog: AI delivers images of how climate change could affect your city

A lab led by Yoshua Bengio has developed a tool whose powerful algorithms can simulate the effects of extreme weather events in any part of the world

Manuel G. Pascual
A simulation of New York's Times Square affected by flooding.

For most people, the full brunt of climate change's devastating effects is still a long way off. And without experiencing the impact directly, it is difficult to fully internalize the extreme seriousness of the climate crisis.

That’s why a team at the Mila-Quebec Artificial Intelligence Institute, led by Professor Yoshua Bengio, wants to bring it home – right to your doorstep, in fact. His team has developed a tool that makes it possible to visualize the effects of floods, wildfires and smog anywhere in the world. The simulation relies on a generative adversarial network (GAN), a type of machine-learning model. GANs are also behind deepfakes: realistic images of something (or someone) that does not exist, produced by models trained on millions of real photos.

For two years, 30 scientists have worked on the project, which takes its name from thispersondoesnotexist.com, a website that displays GAN-generated faces of people who do not exist. Bengio’s version is called “This Climate Does Not Exist.” All a user has to do is type in an address or select a marker on Google Street View, then indicate the kind of catastrophe they want to see: flood, wildfire or smog. The algorithm works its magic and returns the image with the requested effect. These images are not intended to be an accurate forecast of what would happen at each specific location if no action is taken on climate change; rather, they recreate the worst possible effects in the scenario of the user’s choice.

The realism is particularly striking in the flooding option, which was the most difficult for Bengio’s team to produce. The algorithm takes the location proposed by the user, automatically places a layer of water on it and then adapts that water to the surrounding scene. The result is hyperrealistic.

Cibeles Square in Madrid, before and after a hypothetical flood created by 'This Climate Does Not Exist.' The original image is from Google Street View. This Climate Does Not Exist

“One of the most important challenges has been getting the algorithm to simulate flooding in a wide variety of images,” explains Alex Hernandez-Garcia, one of the project’s lead researchers. “One module of the algorithm is in charge of detecting which parts of the image should be covered with water and another module is in charge of generating the water texture by incorporating the context of the image, for example, the reflection of buildings. Finally, these results are combined to generate the final image.”
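The project's code is not published in the article, but the two-module idea Hernandez-Garcia describes – one mask saying where the water goes, one texture saying what the water looks like – comes down to a simple alpha composite. The function below is an illustrative NumPy sketch; the names, shapes and linear blend are assumptions, not the team's actual implementation.

```python
import numpy as np

def composite_flood(image, water_mask, water_texture):
    """Alpha-composite a generated water texture into a street photo.

    Illustrative only -- names, shapes and the simple linear blend are
    assumptions, not the project's published code.

    image:         (H, W, 3) floats in [0, 1], the Street View photo
    water_mask:    (H, W) floats in [0, 1], 1 where water should appear
    water_texture: (H, W, 3) floats in [0, 1], GAN-generated water
    """
    alpha = water_mask[..., None]  # broadcast the mask over RGB channels
    return (1.0 - alpha) * image + alpha * water_texture
```

A soft mask, with values between 0 and 1 along the waterline, lets the flood blend into curbs and car tires rather than ending in a hard edge.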

The Capitol in Washington DC laboring under the effects of a toxic cloud and flooding, in a simulation created by the team at MILA. The original image is from Google Street View. This Climate Does Not Exist

To detect which parts to cover with water and which to leave unscathed, Hernandez-Garcia and his colleagues combined several artificial intelligence (AI) and machine-learning techniques. “We generated a virtual city that allowed us to make a series of images with and without water. We also adjusted an algorithm that was able to make good predictions in that virtual world, detecting the different parts of a scene: the ground, cars, buildings, trees, people and so on,” he explained. “However, the algorithm must be able to make good predictions based on real images [those from Google Street View].” For the latter, they used generative adversarial networks.
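As a toy illustration of this sim-to-real idea, one can fit a classifier on labelled synthetic pixels and apply it to new ones. This is not the team's actual pipeline, which uses deep segmentation networks and GAN-based adaptation to real photos; here the invented "feature" is simply a pixel's vertical position, on the assumption that ground sits low in the frame.

```python
import numpy as np

def train_pixel_classifier(features, labels, lr=0.5, steps=500):
    """Logistic regression by batch gradient descent. features: (N, D)."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid
        grad = p - labels                              # dLoss/dLogit
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

def predict_ground(features, w, b):
    return (features @ w + b) > 0.0

# "Virtual city" training pixels: the feature is the normalised row
# position (0 = top of frame, 1 = bottom). In a synthetic render we
# know exactly which rows are ground, so the labels come for free.
rows = np.linspace(0.0, 1.0, 200)[:, None]
labels = (rows[:, 0] > 0.6).astype(float)
w, b = train_pixel_classifier(rows, labels)
```

The payoff of the virtual world is exactly those free labels; the hard part the quote alludes to, making the predictions hold up on real Street View images, is what the adversarial networks handle.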

Mexico City's enormous Constitution Square, popularly known as El Zócalo, might look like this in a scenario of wildfires and flooding. The original image is from Google Street View. This Climate Does Not Exist

The process is completed in a few seconds, and before displaying the image the site provides some information about the causes and consequences of the selected weather phenomenon and its relationship to climate change. For example, if a flood is chosen, it notes that flash floods kill about 5,000 people a year, that sea levels are expected to rise by two meters by the end of the century and that this major disruption to the planet will forever alter the lives of at least one billion people by 2050. “If we do nothing, soon we will face major climate catastrophes,” says Professor Bengio, the institute’s scientific director. “This website makes the risks of climate change much more real and personal to people,” he argues.

Generative adversarial networks

The quality of AI took a giant leap forward about a decade ago with the emergence and consolidation of machine learning and deep learning. These techniques are based on training a machine to perform complex tasks by learning from data, rather than by following hand-written rules. For example, if you want an algorithm to distinguish between blueberry muffins and chihuahuas, the programmer feeds it a series of labeled examples of each category, followed by thousands of images that are not pre-sorted. The machine decides which is which, and when it gets one wrong and is made aware of the error, it refines its criteria.
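That learn-from-mistakes loop is essentially the classic perceptron: the model guesses, and only when told it is wrong does it adjust its internal criteria. The sketch below uses invented two-number "images" (a roundness score and a fur-texture score), not a real muffin/chihuahua dataset.

```python
def train_perceptron(examples, labels, epochs=20):
    """examples: list of (f1, f2) feature pairs; labels: +1 or -1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (f1, f2), y in zip(examples, labels):
            guess = 1 if w1 * f1 + w2 * f2 + b > 0 else -1
            if guess != y:        # made aware of the error...
                w1 += y * f1      # ...refine the criteria
                w2 += y * f2
                b += y
    return w1, w2, b

# Invented toy features per image: (roundness, fur-texture score).
data = [(0.9, 0.1), (0.8, 0.2), (0.2, 0.9), (0.1, 0.8)]
labels = [1, 1, -1, -1]  # +1 = blueberry muffin, -1 = chihuahua
w1, w2, b = train_perceptron(data, labels)
```

Nothing is updated on a correct guess; the model's "criteria" (the weights) only move when a prediction fails, just as the paragraph describes.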

Bengio won the 2018 Turing Award, considered the Nobel Prize of computer science, along with Geoffrey Hinton and Yann LeCun, for their contribution to the development of neural networks. Neural networks are a further step in machine learning that attempts to mimic the functioning of the human brain, applying several layers of processing simultaneously to increase performance. They are behind the most complex classification systems, such as voice assistants and advanced prediction models.

Generative adversarial networks (GANs) go even further. They were invented at the Mila-Quebec Artificial Intelligence Institute in 2014 and are capable of generating new content that looks faultlessly real to the human eye. GANs are behind the increasingly sophisticated deepfake videos of Tom Cruise or Donald Trump now circulating online, in which politicians or celebrities say or do whatever their creator likes. They work thanks to competition between two neural networks: one tries to produce images that are as realistic as possible, while the other tries to detect whether they are real or fabricated. This contest is repeated thousands or millions of times, and in the process the generating network learns to create ever more convincing images. When the first network succeeds in fooling the second, we have a winning image. From there, a perfectly rendered image of New York City’s Times Square inundated by flooding is just a click away.
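The two-player contest can be shown at toy scale. In the sketch below, a deliberately minimal stand-in and not the lab's model, the "generator" is a two-parameter function g(z) = a·z + b trying to mimic samples drawn from a normal distribution centered at 3, and the "discriminator" is a single logistic unit; the alternating updates follow the standard GAN recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0  # generator g(z) = a*z + b
u, v = 0.1, 0.0  # discriminator d(x) = sigmoid(u*x + v)
lr = 0.05

for step in range(2000):
    z = rng.standard_normal(64)           # noise in
    real = 3.0 + rng.standard_normal(64)  # "real" samples from N(3, 1)
    fake = a * z + b                      # the generator's forgeries

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(u * real + v)
    d_fake = sigmoid(u * fake + v)
    u += lr * np.mean((1 - d_real) * real - d_fake * fake)
    v += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: adjust a, b so the forgeries score as real
    # (non-saturating loss: maximise log d(fake)).
    d_fake = sigmoid(u * fake + v)
    g = (1 - d_fake) * u  # d log d(fake) / d fake
    a += lr * np.mean(g * z)
    b += lr * np.mean(g)
```

When the forgeries become statistically indistinguishable from the real samples, the discriminator's scores hover around 0.5: the point at which the generator has won.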

The Quebec lab is now using a new type of GAN they have developed to generate the climate change images seen on their website. “In general, the limited availability of images and the need to adapt the algorithm to a multitude of situations have been the main technical challenges we have faced,” says Hernandez-Garcia.

