© 2024 CoolTechZone - Latest tech news,
product reviews, and analyses.

Over a hundred companies sign EU AI Pact for safe AI development


Adobe, Amazon, Google, Microsoft, Nokia, OpenAI: these are just a few of the companies that have signed the EU AI Pact. All signatories voluntarily pledge to promote trustworthy and safe AI development.

The endorsers include international business conglomerates as well as European small and medium-sized enterprises (SMEs) from a wide range of industries, including IT, telecom, healthcare, banking, automotive, and aeronautics.

By signing the EU AI Pact, signatories commit to applying the principles of the AI Act ahead of its legal deadlines and to strengthening engagement between the EU AI Office and relevant stakeholders.

All parties that have signed the EU AI Pact have committed to working towards future compliance, identifying AI systems that are likely to be classified as high-risk, and promoting AI literacy and awareness among employees.

In addition, over half of the signatories made further pledges, including ensuring human oversight, mitigating risks, and transparently labeling certain types of AI-generated content, such as deepfakes.

To boost AI innovation in the EU, the European Commission introduced AI Factories earlier this month. These will provide a one-stop shop where start-ups, scale-ups, and other businesses can access the data, talent, and computing power they need to innovate and develop AI applications. The Commission will also set up a European AI Research Council to exploit the potential of data, and launch the Apply AI Strategy to boost new industrial uses of AI.

The AI Act entered into force on August 1. Its goal is to regulate the use of AI technology in Europe in order to safeguard fundamental civil rights, such as privacy, and to guard against potential harms, such as discrimination and exclusion.

To achieve this, artificial intelligence is divided into three risk levels:

Low-risk AI applications, such as chatbots;

High-risk AI applications, such as smart cameras equipped with facial recognition software; and

Banned AI technology, such as scraping software and systems that assess and score people’s mood and behavior.

Policymakers agreed that the AI Act will be implemented in stages. Banned AI applications are prohibited as of February 2025. Six months later, all 27 EU member states must have appointed a regulator, such as their data protection authority (DPA), to supervise public AI systems. Lastly, supervision of high-risk AI models will commence a year after that.

