Emerging Technology Takes Center Stage as EU and US Sign AI Agreement

The European Commission and the US administration have signed an administrative agreement to collaborate on AI for the public good, focusing on five priority areas that include climate forecasting, health and medicine, and emergency response management.

The European Commission and the US administration signed an agreement on Artificial Intelligence (AI) for the Public Good on Friday, 27 January. The agreement was signed during a virtual ceremony as part of the EU-US Trade and Technology Council (TTC). Launched in 2021, the TTC is a permanent platform for transatlantic cooperation across several areas, including emerging technologies. At the TTC's last high-level meeting in December, AI was presented as one of the areas where cooperation is most advanced. The two blocs endorsed a joint roadmap to reach a common approach on critical aspects of AI, such as trustworthiness metrics and risk management methods.

Building on the AI roadmap, the US and EU executive branches are increasing their collaboration to develop AI research that addresses global and societal challenges such as climate change and natural disasters. Five priority areas have been identified: extreme weather and climate forecasting, emergency response management, health and medicine improvements, electric grid optimisation, and agriculture optimisation. While the two partners will build joint models, they will not share the training data sets due to the lack of a legal framework for sharing personal data across the Atlantic.

The Commission stressed that the two partners will share the findings and resources with international partners that share their values but lack the capacity to address these issues. Signatories to the Declaration for the Future of the Internet are likely to benefit from the outcome of this research.

The day before the agreement was signed, the US Department of Commerce’s National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework, which sets guidelines for AI developers on mapping, measuring, and managing risks. The framework is voluntary, was developed in consultation with private companies and public administration bodies, and reflects the United States’ non-binding approach to regulating new technologies.

In contrast, the EU is advancing work on the AI Act, a horizontal piece of legislation that regulates AI use cases according to their level of risk and includes a list of high-risk areas such as health, employment, and law enforcement. The AI Act is expected to be highly influential and to set international standards on several regulatory aspects. The US administration has been trying to shape the AI Act, as most of the world’s leading AI companies are American. The publication of the NIST framework comes at a critical time for the AI Act, as EU lawmakers are finalising their position before starting interinstitutional negotiations.
