Risk Management

AI models should ‘self-mitigate’ systemic risks, EU Code of Practice says

July 25, 2025

General-purpose artificial intelligence (GPAI) models should “self-mitigate” systemic risks with potential “future-proof” updates, according to the EU’s GPAI Code of Practice.

The European Commission issued the code on July 10 as a voluntary guideline designed to “help industry” comply with the EU AI Act. Prepared by independent experts and supported by the commission, it covers three main areas: transparency, copyright, and safety and security.

At its core, the code focuses on addressing “systemic risks” in GPAI models by promoting “comprehensive and structured” risk assessments, defined as a multi-step process in which model evaluations are integrated throughout the model life cycle. It also recommends that any “serious incident” or vulnerability be documented and reported immediately to the EU AI Office and relevant national authorities.

Not everyone agrees with the code of practice, however. Tech giant Meta has already declined to sign it, the company’s chief global affairs officer, Joel Kaplan, said in a LinkedIn post on July 17.

“Europe is heading down the wrong path on AI,” he said. “Meta won’t be signing it. This code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

‘Risk-based’ legislation

The code of practice is only a voluntary tool intended to support the transition to the AI Act, which is the EU’s primary legal framework on AI.

While the Act entered into force in August 2024, the requirements around GPAI will apply from August 2, 2025, and the Act will be fully enforced by August 2, 2026.

“The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes,” said a European Commission statement.

The legislative framework adopts a “risk-based” approach, setting obligations for both AI developers and deployers and categorising AI systems into four levels of risk: unacceptable, high, limited and minimal.

Key provisions on high-risk AI systems are already in place, including obligations on datasets, human oversight and compliance that must be met before the systems can be placed on the market.

According to recital 58 of the Act, AI systems used to evaluate credit scores or creditworthiness are classified as high-risk, as they may perpetuate historical discrimination based on economic background. AI systems used for pricing in health and life insurance are also considered high-risk since they have an impact on individuals’ livelihoods.

Traceability pledges

The European Commission previously published the AI Pact in September 2024, calling on firms to work proactively towards compliance with the AI Act. Although not among the pact’s core commitments, it also encouraged AI developers to improve “traceability” in their systems.

For example, systems should clearly label AI-generated content, including deepfake images, audio, or video. Developers should also design generative AI systems to mark and detect AI-generated or manipulated content using technical solutions such as watermarks and metadata.

Around 200 companies signed the pact — including big names such as Amazon, Google and eBay — though Meta and Apple declined to do so.