
Tech giants pursue ethical and legal AI development

The new European standard will drive the industry to develop bias-free applications that ensure privacy and transparency

Carme Artigas, Spain's Secretary of State for Digitalization and Artificial Intelligence, and European Commissioner Thierry Breton (foreground) after finalizing Europe's artificial intelligence regulation; December 9, 2023. SIERAKOWSKI FREDERIC (European Union)
Raúl Limón

Europe has taken the significant step of approving the world’s first regulation on artificial intelligence. The regulation categorizes AI applications based on their risks and imposes strict sanctions for violations: at the top of the range, penalties of up to €35 million ($38.3 million) or 7% of gross income; at the bottom, €7.5 million ($8.2 million) or 1.5% of gross income for non-compliance. There is also a transition period until 2026 for companies to come into compliance with the law. Tech giants like IBM, Intel and Google that favor AI regulation have already developed platforms and systems to ensure the ethical, transparent and bias-free development of AI.
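As a rough illustration of how these tiered caps scale with company size, here is a minimal Python sketch; the assumption that the applicable cap is the higher of the fixed amount and the revenue percentage follows the usual EU sanctions formula and is mine, not a quote from the regulation’s text:

```python
def max_penalty(annual_gross_income_eur: float, severe: bool) -> float:
    """Upper bound on an AI Act fine, using the figures cited above.

    Assumes the cap is the higher of the fixed amount and the revenue
    percentage (the usual EU formula); the final legal text governs.
    """
    if severe:  # top of the range, e.g. prohibited AI practices
        fixed, pct = 35_000_000, 0.07
    else:       # bottom of the range, e.g. lesser non-compliance
        fixed, pct = 7_500_000, 0.015
    return max(fixed, pct * annual_gross_income_eur)

# For a company with €2 billion in gross income, 7% (€140M) exceeds €35M.
print(f"€{max_penalty(2_000_000_000, severe=True):,.0f}")  # €140,000,000
```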

Technology consultancy Entelgy highlights three important considerations for AI companies to comply with the new law. First, organizations handling personal, medical, recruitment or decision-making data must disclose in a European registry how their algorithms work to generate content. Second, while not mandatory, mechanisms for human supervision should be implemented. And third, security systems should be implemented for large language models (LLMs), and developers must be transparent about the copyrighted material they use.

“We need to ensure responsible and ethical development of technology from the start. It’s a tremendous opportunity, but it also comes with challenges,” said Christina Montgomery, IBM’s Chief Privacy & Trust Officer. Many companies favor unregulated AI development, and 150 European executives have opposed Europe’s new AI law. IBM, on the other hand, believes in intelligent regulation that balances innovation with social protections. Similarly, Intel’s Chief Technology Officer, Greg Lavender, stresses the need for “responsible deployment of artificial intelligence for everyone’s benefit.” Both companies have created platforms to comply with their own standards and with government regulations.

IBM offers Watsonx.governance, a platform that encompasses ethical data management, risk management, and regulatory compliance. “It has been created to assist organizations in responsibly applying AI, adhering to current policies, and preparing for future regulations,” said Montgomery.

Ana Paula Assis, IBM’s General Director for Europe, the Middle East and Africa, says this type of tool is much needed, citing a survey of 1,600 business leaders from Germany, France, Italy, Spain and Sweden. The results show that 82% of the corporate executives surveyed have already adopted or are planning to implement AI within the next year. Additionally, 95% of them believe AI is effective for decision-making, management and commercial strategy development. According to Hazem Nabih, Microsoft’s Technology Director for the Middle East, AI can help companies achieve 30%-50% increases in productivity.

However, despite its immense potential, AI poses several challenges. Its development requires an ethical framework, the acquisition of new skills, and greater investment to ensure that the product is not only effective but also fair, transparent, and devoid of bias. Moreover, it is crucial to guarantee security and privacy.

IBM’s solution is compatible with any company, regardless of the AI model used. This includes open source models as well as models developed in-house or externally. “Our strategy and architecture are open, hybrid and multimodel. This means we offer clients the flexibility to implement our solutions in the environments that suit them best,” said Assis.

Intel’s solution is called Intel Trust Authority, which follows a similar philosophy. It aims to create an open, developer-focused ecosystem that makes artificial intelligence opportunities accessible to all. “These tools simplify the development of secure AI applications and support the investment needed to maintain and scale these solutions, making AI accessible everywhere,” said Lavender. “Limited hardware and software options for developers can reduce global use cases for AI, potentially impacting the social value it provides.”

Intel’s strategy isn’t just for big companies. During its Innovation 2023 event, Intel introduced the AI PC Acceleration Program, designed to speed up the development of artificial intelligence on personal computers. The program aims to connect independent hardware and software vendors with Intel resources, such as AI tools, co-engineering teams, design resources, technical expertise, and marketing opportunities. Intel believes these resources will speed up the development of new use cases and facilitate industry-wide adoption of AI solutions. Program partners include Adobe, Audacity, BlackMagic, BufferZone, CyberLink, DeepRender, MAGIX, Rewind AI, Skylum, Topaz, VideoCom, Webex, Wondershare Filmora, XSplit and Zoom.

Google has developed specific protection systems for Gemini — its latest AI model — to safeguard personal data in compliance with the new European standard. “We offer comprehensive controls to ensure that your data remains exclusively yours when using Vertex for your business. Your data is not shared with anyone, including Google. In addition to these controls, Vertex offers compliance and audit capabilities,” said Google Cloud CEO Thomas Kurian during a presentation of Gemini developer tools.
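As a concrete picture of the kind of control Kurian describes, the sketch below uses the Vertex AI Python SDK to pin Gemini inference to an EU region; the project ID and region are placeholders, and the model name reflects the SDK at the time of writing, so treat this as an illustrative assumption rather than Google’s recommended setup:

```python
# pip install google-cloud-aiplatform
import vertexai
from vertexai.generative_models import GenerativeModel

# Pinning location to an EU region (placeholder value) keeps requests on
# regional endpoints; per Vertex's data-governance terms, prompts are not
# shared with Google or used to train its models.
vertexai.init(project="your-project-id", location="europe-west4")

model = GenerativeModel("gemini-pro")  # model name is an assumption
response = model.generate_content("Summarize the EU AI Act's risk tiers.")
print(response.text)
```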

AI bias

A major challenge in artificial intelligence is bias: flaws in algorithms that can permeate the system and underestimate the complexity of human beings. At the International Conference on Computer Vision (ICCV) in October, researchers from Sony and Meta each presented methods to measure bias in computer vision systems. These methods aim to promote diversity in input data for machine training and in output data for decision-making.

Traditionally, skin-tone bias in computer vision has been measured on a one-dimensional scale that runs from light to dark. The scale has been widely adopted as a tool to determine ethnicity, says William Thong, an AI ethics researcher at Sony. In an article for MIT Technology Review, Thong explained that Sony’s method for evaluating bias in computer vision systems expands the skin-tone scale into two dimensions, measuring both skin color (from light to dark) and skin hue (from red to yellow).
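Sony’s published scale is defined in the CIELAB color space, where lightness captures the light-to-dark axis and the hue angle in the a*/b* plane captures the red-to-yellow axis. The sketch below is a simplified illustration of that two-dimensional measurement; the function name and the patch-averaging step are mine, not Sony’s code:

```python
# pip install numpy scikit-image
import numpy as np
from skimage import color

def skin_color_coords(rgb_patch: np.ndarray) -> tuple[float, float]:
    """Place a patch of skin pixels on a two-dimensional color scale.

    rgb_patch is an (H, W, 3) float array in [0, 1]. Returns
    (lightness, hue_degrees): L* runs light to dark, while the hue
    angle in CIELAB's a*/b* plane runs from red (~0°) to yellow (~90°).
    """
    lab = color.rgb2lab(rgb_patch)
    a, b = lab[..., 1].mean(), lab[..., 2].mean()
    hue_degrees = float(np.degrees(np.arctan2(b, a)))
    return float(lab[..., 0].mean()), hue_degrees
```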

To simplify bias assessments, Meta has created the Fairness in Computer Vision Evaluation (FACET) tool. According to Meta researcher Laura Gustafson, FACET uses 32,000 human-labeled images with 13 key parameters, including age, skin tone, gender, hair color and texture. This data is freely available online to support researchers.
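At its core, an evaluation like FACET compares how well a model serves each annotated group. The generic sketch below (not Meta’s actual evaluation code; the column names are illustrative) computes the gap in detection recall between the best- and worst-served groups:

```python
import pandas as pd

def recall_disparity(df: pd.DataFrame, group_col: str) -> float:
    """Gap in detection recall between best- and worst-served groups.

    df holds one row per labeled person, with a boolean 'detected'
    column marking whether the model found them (names illustrative).
    """
    recall_by_group = df.groupby(group_col)["detected"].mean()
    return float(recall_by_group.max() - recall_by_group.min())

# Example: compare recall across three annotated skin-tone bins.
people = pd.DataFrame({
    "skin_tone_bin": [1, 1, 2, 2, 3, 3],
    "detected": [True, True, True, False, False, True],
})
print(recall_disparity(people, "skin_tone_bin"))  # 0.5
```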

Widespread and uncontrolled use

A recent report by the Kaspersky cybersecurity firm stresses the importance of a cautious approach to AI adoption. Kaspersky surveyed Spanish managers and found that 96% acknowledged regular use of generative artificial intelligence among their employees. Of these companies, 45% lacked measures to mitigate the associated risks. Another Kaspersky study found that 25% of generative AI users are unaware that it can store identifying information such as IP address, browser type, user settings, and data about the most frequently used AI functions.

“The growth of generative artificial intelligence systems poses a clear challenge. If left uncontrolled, safeguarding crucial areas of the business will become increasingly difficult,” said Kaspersky security analyst David Emm.
