AI on the battlefield: Next stop for Peter Thiel after PayPal, Hulk Hogan, Trump and Facebook
Palantir, a firm co-founded by the billionaire, has seen its share price soar since it announced the creation of AI software capable of managing warfare scenarios
Sparked by the emergence of the chatbot ChatGPT, AI fever is spreading fast — despite the ethical doubts that artificial intelligence raises, and warnings from authorities such as the World Health Organization over its potentially harmful effects if used improperly. Indeed, Sam Altman, the CEO of ChatGPT creator OpenAI, last week acknowledged in testimony before a U.S. Senate committee: “I think if this technology goes wrong, it can go quite wrong.” Among the range of sectors in which AI is gaining influence, one particularly sensitive area stands out: defense. To make battlefield operations more efficient, governments and companies are going to great lengths to explore ways of making the most of this technology. It’s a goal that Peter Thiel has made his mission to achieve and, judging by the way investors are reacting, he is well on his way to doing just that.
Thiel is an American entrepreneur and venture capitalist whose net worth was estimated at around $8 billion by Bloomberg at the end of March. One of the founders of PayPal, he showed a keen eye for an opportunity when, in 2004, he used his cut from the $1.5 billion sale of the payment platform to become the first outside investor in a nascent company called Facebook. Thiel remained on the board of the social media giant for years, until he left in 2022 to support Trump-aligned candidates ahead of the U.S. midterms, having become, in 2016, one of the first prominent figures to back Trumpism.
Within the already peculiar universe that is Silicon Valley, Thiel has made himself a particularly controversial figure. He orchestrated a campaign to bring down the news portal Gawker, which in 2007 published reports about his sexual orientation. As Forbes Magazine outlines, although he lost his lawsuit against the media outlet, Thiel then spent years funding the legal action that others brought against it.
One of these suits against Gawker, to which Thiel contributed $10 million in legal fees, was filed by Hulk Hogan over the publication of a sex tape involving the wrestler. Hogan won the case, and Gawker was ordered to pay him $140 million in damages — an amount that drove the website into bankruptcy and forced it to close. In an interview with The New York Times, Thiel described his crusade against Gawker as a bid to help those attacked by a “singularly terrible bully,” declaring it “one of my greater philanthropic things that I’ve done.” Stubborn and persistent, the billionaire has another notable entry on his résumé: he is the co-founder of Palantir.
Created in 2004, Palantir is a company that has grown on the back of contracts with government agencies. Specializing in sensitive big data, its products are designed to help customers better understand their data and to run analysis and predictions on it, according to its website. It’s an explanation that fails to lift the shroud of mystery surrounding the company’s activities. Palantir received funding from the CIA in its early stages, and has regularly worked for the intelligence agency ever since. As a contractor not only for the CIA, but also for the FBI and other government branches such as the Pentagon and Immigration and Customs Enforcement, Palantir has become a hugely important ally of the U.S. administration.
Embracing AI on the battlefield
Earlier this month, the company released its financial results for the first quarter of 2023. That day, its shares rose 23.39%, taking Palantir’s market capitalization above $20 billion. In addition to announcing that the firm had turned a quarterly profit for only the second time — and predicting that it would remain profitable throughout the year — Palantir’s other co-founder, CEO Alex Karp, discussed the Artificial Intelligence Platform (AIP), the software at the center of the company’s efforts to take AI into the arena of warfare.
Conceived as a system that offers decision-making assistance in combat scenarios, AIP is unlike anything previously created, judging by the information the company has released about the software so far. Although artificial intelligence has already been used in warfare — for example, in drones such as those built by the Turkish company Baykar, and in automated turrets manufactured by the South Korean firm Dodaam — there is no known precedent for a tool that harnesses AI to provide all-round management of what happens on the battlefield.
In a promotional video on its website, Palantir explains that the platform, which is still being rolled out, is capable of recognizing the enemy, organizing automatic responses with the relevant authorization, and suggesting courses of action by calculating the variables involved in each one. In a meeting with analysts to present the company’s financial results, Karp said demand for AIP had exceeded all expectations, mentioned the term “AI” or a synonymous phrase more than 50 times, and summed up Palantir’s strategy in this field: “Just to take the whole market.” Since the company announced the platform, its shares have continued to climb; so far this year, the stock has gained 80%.
In a report on Palantir, analysts at Mizuho Securities describe the software as “unique” and “capable of creating significant operational value for its customers.” “We expect ongoing global disruptions can help to further catalyze adoption [of AIP],” Mizuho said. “However, growth across both its government and commercial businesses has slowed significantly, and an uncertain macro environment makes meaningful near-term reacceleration much more difficult.”
In its most recent results, Palantir said that $282 million of its $525.2 million in quarterly revenue came from the U.S. government.
“A new world”
Artificial intelligence has undeniable potential on the battlefield. In a document published by Spain’s Joint Center of Concept Development, a body that belongs to the country’s Ministry of Defense, experts reached the conclusion that the use of AI in the military arena opens up a “new world of possibilities.”
“The amount of information it can have at its disposal is immense, and it can help in decision-making,” the report says. “Among many other possibilities, it offers the potential to increase the security of soldiers, to improve humanitarian aid in terms of the tracking of catastrophic areas [and] the search and rescue of people in danger, and to study the enemy to mitigate its attacks. However, it is a double-edged sword: with such intelligent systems, it is possible to develop weapons with a limitless destructive capacity.” The Joint Center of Concept Development adds: “It will not be long before these new systems are autonomous; in other words, they will be capable of making their own decisions, as long as ethics allow it.”
It is the ethical implications of using artificial intelligence in war that raise the most concern among experts in both fields. The Stockholm International Peace Research Institute has created a section of its website specifically dedicated to AI. It outlines the organization’s commitment to researching the risks and opportunities the technology presents, its impact from a legislative point of view, and the effects it could have on traditional combat scenarios, cyber warfare and the management of nuclear arms (a function that, experts predict, it is destined to take on, much to the unease of anyone who has seen Terminator). However, on top of the inherent risk of introducing any new technology to warfare, there is also the question of who is developing and deploying it.
For example, Lucía Ortiz de Zárate, a researcher in AI ethics and governance at the Autonomous University of Madrid, notes that the Chinese government’s efforts in the field of artificial intelligence are well documented. In the U.S. and elsewhere in the West, it is the private sector that is taking the initiative. The fact that companies are the ones developing the technology has consequences, according to Marta Galcerán, a lead researcher at the Barcelona Center for International Affairs. Galcerán explains that this makes the development process faster, but it comes at the cost of shorter trial periods and reduced transparency.
Are we headed for increased cruelty in warfare?
Ortiz de Zárate warns that the use of artificial intelligence on the battlefield has serious potential repercussions. From an ethical point of view, warfare was already complex; now, AI introduces new moral dilemmas.
The expert lists some of them. “Accountability issues; who is responsible for those systems — whether they work well or not,” she notes. “Privacy problems: remember, for these systems to work precisely, they need an enormous amount of data. Armed conflicts are situations in which human rights are violated to a huge degree, because they are chaotic and unstable contexts. So in these types of scenarios, the collection and use of personal data that violates civilians’ privacy is highly likely. What’s more, nobody can be sure that AI won’t confuse a civilian with a soldier, and there are also major issues when it comes to de-humanization. AI doesn’t feel empathy; it could lead to an increase in violence if there is nobody there to supervise each and every one of its actions. Tying in closely with this last point is the rise in the cruelty and frequency of armed conflicts. The introduction of AI on the battlefield would be the starting gun for a potential arms race. In fact, it may well have already started, because in the Ukraine war there is speculation about China giving Russia weapons that use AI.”
Asked if the world is doomed to witness such an arms race, Ortiz de Zárate replies: “As I say, I think that race is already underway. We know that China and the United States have been developing AI technologies for defense and for warfare for a while now. What isn’t clear is whether or not these types of weapons have already entered into combat. But there’s no disputing that work is underway on them.”
Galcerán offers a more cautious take, explaining that a good way to see whether such a race is taking place is to analyze the flow of investment. “When you look at what governments are investing in AI for military purposes, the amounts aren’t that high,” she says. “We tend to focus our attention on robots and those types of things, but often it’s more general AI-based tools that also have military applications.”
When it comes to analyzing investment in AI for military use, Precedence Research estimates that in 2023 the global market in this field will be worth $8.81 billion. In 2025, $10.86 billion. By 2032, this amount will have grown to $22.62 billion, the market-insights firm predicts. Its calculations are in line with the forecast offered by MarketsandMarkets, which estimates that military AI will be an $11.6 billion-a-year industry by 2025. If Karp’s words are to be taken literally, Palantir aims to capture that entire market.
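For a sense of the pace these forecasts imply, here is a back-of-the-envelope sketch — the dollar figures are only those cited above, and the cagr helper is defined here for illustration, not taken from either research firm. Both Precedence Research projections work out to an implied growth rate of roughly 11% a year:

```python
# Implied compound annual growth rate (CAGR) from the forecasts cited above:
# $8.81 billion (2023), $10.86 billion (2025), $22.62 billion (2032).
# Illustrative arithmetic only; the figures are the article's, the calculation is ours.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1 / years) - 1

print(f"2023 -> 2025: {cagr(8.81, 10.86, 2):.1%} per year")  # ~11.0%
print(f"2023 -> 2032: {cagr(8.81, 22.62, 9):.1%} per year")  # ~11.0%
```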
Speaking at the Spanish International Defense and Security Exhibition in Madrid last week, the Palantir CEO perfectly summed up the atmosphere that surrounds the development of military AI. Although Karp admitted that it is “very dangerous, potentially,” adding that extreme caution must be observed at every step, he warned that China and Russia have established an advantage in the field of AI – and that the West simply cannot afford not to invest in such technology.