General-Purpose AI Code of Practice (GPAI CoP)

Jurisdiction: European Union
Status: enforcing
Effective: Aug 2, 2025
Authority: European Commission
Official text verified Mar 26, 2026

Obligations Covered

Transparency & Disclosure, Data Governance, Record-Keeping & Documentation, Risk Assessment

Regulatory Crosswalk

Binding regulations that require the same obligations this standard addresses. Implementing this standard can help satisfy these regulatory requirements.

| Regulation | Jurisdiction | Shared Obligations |
| --- | --- | --- |
| Work Health and Safety Amendment (Digital Work Systems) Act 2026 | New South Wales | 2 |
| Privacy Act 1988 — Automated Decision-Making Reforms | Australia | 3 |
| Brazil AI Bill (PL 2338/2023) | Brazil | 2 |
| California AB 3030 (AI in Health Care Services) | California | 1 |
| California Employment Regulations Regarding Automated-Decision Systems | California | 1 |
| California AI Transparency Act (SB 942) | California | 1 |
| California CCPA ADMT Regulations | California | 2 |
| California SB 53 (Frontier AI Transparency Act) | California | 1 |
| Provisions on the Management of Algorithmic Recommendations | China | 2 |
| Provisions on the Management of Deep Synthesis | China | 2 |
| Interim Measures for Generative AI Services | China | 2 |
| Framework Convention on AI, Human Rights, Democracy and Rule of Law (CETS 225) | Council of Europe | 3 |
| Colorado Privacy Act Rules (4 CCR 904-3) | Colorado | 1 |
| Colorado Protecting Consumers from Unfair Discrimination in Insurance Practices | Colorado | 1 |
| Colorado ADMT (SB 24-205) | Colorado | 1 |
| EU AI Act | European Union | 3 |
| Digital Operational Resilience Act (DORA) | European Union | 2 |
| Digital Personal Data Protection Act 2023 (DPDP) | India | 1 |
| IT (Intermediary Guidelines) Amendment Rules 2026 — Synthetic Media | India | 2 |
| Law on Artificial Intelligence | Italy | 2 |
| AI Promotion Act | Japan | 2 |
| AI Basic Act | South Korea | 2 |
| Law on Artificial Intelligence | Kazakhstan | 3 |
| Artificial Intelligence Regulations 2025 | Malta | 1 |
| Federal Law on the Protection of Personal Data (LFPDPPP) — 2025 AI Provisions | Mexico | 1 |
| New York RAISE Act | New York | 1 |
| NYC Local Law 144 (Automated Employment Decision Tools) | New York | 1 |
| QCB Artificial Intelligence Guideline | Qatar | 2 |
| Law for the Promotion of Artificial Intelligence and Technologies | El Salvador | 2 |
| Artificial Intelligence Basic Act | Taiwan | 1 |
| UK Data Protection Act 2018 — Automated Decision-Making | United Kingdom | 1 |
| UK Online Safety Act 2023 | United Kingdom | 2 |
| EO 14319 — Preventing Woke AI in the Federal Government | United States | 3 |
| Executive Order on AI State Law Preemption | United States | 2 |
| Utah AI Policy Act (SB 149) | Utah | 1 |
| Law on Artificial Intelligence | Vietnam | 2 |

GPAI Transparency and Documentation (Article 53)

Obligation: Transparency
Status: enforcing
Effective: Aug 2, 2025
Risk tier: all
Scope: providers
Tags: high-impact, cross-domain
The GPAI Code mandates a public-facing Model Documentation Form for every GPAI model — a standardized disclosure covering technical specifications, training data, compute, and energy use. It is the first transparency template with binding legal effect for foundation models anywhere, operationalizing an EU obligation that reaches providers worldwide.

Requirements

| Requirement | Details |
| --- | --- |
| Model Documentation Form | Draft and maintain a comprehensive Model Documentation Form covering technical specifications, training data characteristics, computational resources, and energy consumption |
| Downstream disclosure | Proactively provide documentation to downstream providers integrating the GPAI model into AI systems |
| Authority disclosure | Make documentation available on request to the European AI Office and national competent authorities |
| Contact publication | Publicly disclose contact information (e.g., website) for documentation requests |
| GPAI Template | Complete and publicly disclose a mandatory GPAI Template with training data details |
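As a minimal sketch of how a provider might track these disclosure fields internally, the record below models the categories the Model Documentation Form covers (technical specs, training data, compute, energy, and a public contact). All field names and values here are illustrative assumptions, not the official template's schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelDocumentationForm:
    """Illustrative record of the documentation categories; not the official form."""
    model_name: str
    version: str
    architecture: str               # technical specification
    parameter_count: int
    training_data_summary: str      # training data characteristics and sources
    training_compute_flops: float   # computational resources used in training
    energy_consumption_kwh: float   # energy use during training
    contact_url: str                # public contact point for documentation requests

# Hypothetical example values for a fictional model.
doc = ModelDocumentationForm(
    model_name="example-gpai",
    version="1.0",
    architecture="decoder-only transformer",
    parameter_count=7_000_000_000,
    training_data_summary="web crawl plus licensed corpora (illustrative)",
    training_compute_flops=1.2e24,
    energy_consumption_kwh=4.5e5,
    contact_url="https://example.com/gpai-docs",
)
print(asdict(doc)["model_name"])  # serializable for publication or authority requests
```

Keeping the record as structured data makes it easy to serialize the same content for the public form, downstream providers, and authority requests.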

Penalties

| Violation | Fine |
| --- | --- |
| AI Act Article 53 infringement | Up to €15 million or 3% of worldwide annual turnover (whichever is higher) |
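The "whichever is higher" rule means the fine ceiling scales with company size; a quick sketch of the upper bound (the actual fine is set by regulators and may be lower):

```python
def article_53_max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an Article 53 fine: EUR 15 million or 3% of
    worldwide annual turnover, whichever is higher."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# For a provider with EUR 2 billion turnover, 3% = EUR 60 million > EUR 15 million.
print(article_53_max_fine(2_000_000_000))  # 60000000.0
# For a small provider, the EUR 15 million floor dominates.
print(article_53_max_fine(100_000_000))    # 15000000.0
```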

Training Data and Copyright Governance (Article 53)

Obligation: Data Governance
Status: enforcing
Effective: Aug 2, 2025
Risk tier: all
Scope: providers
Tags: high-impact, cross-domain
All GPAI providers must implement copyright-compliant training data policies — including robots.txt compliance, mechanisms to prevent infringing outputs, and public training data disclosure. This directly affects every foundation model provider operating in or serving the EU, making EU copyright law a de facto data governance standard for global AI training pipelines.

Requirements

| Requirement | Details |
| --- | --- |
| Copyright compliance policy | Implement and maintain a policy for compliance with EU copyright law throughout the training data pipeline |
| Robots.txt compliance | Honor robots.txt opt-out protocols when crawling data for training |
| Infringing output prevention | Establish mechanisms to prevent generation of copyright-infringing outputs |
| Complaint mechanism | Create a complaint mechanism for rights holders regarding copyright infringements |
| Training data disclosure | Publicly disclose a summary of training data used, including data sources and characteristics |
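The robots.txt requirement is directly implementable: a training-data crawler checks the site's robots.txt before fetching each URL. A minimal sketch using Python's standard `urllib.robotparser`; the crawler name "ExampleTrainingBot" is an illustrative assumption.

```python
from urllib.robotparser import RobotFileParser

def allowed_to_crawl(url: str, robots_txt: str,
                     agent: str = "ExampleTrainingBot") -> bool:
    """Return True if robots.txt permits this agent to fetch the URL."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())  # parse a robots.txt body directly
    return rp.can_fetch(agent, url)

# A site opting its private section out of crawling.
robots = """User-agent: *
Disallow: /private/
"""
print(allowed_to_crawl("https://example.com/public/page", robots))   # True
print(allowed_to_crawl("https://example.com/private/page", robots))  # False
```

In production the crawler would fetch `/robots.txt` per host (e.g. via `RobotFileParser.set_url` and `read`) and cache the result, but the permission check is the same.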

Penalties

| Violation | Fine |
| --- | --- |
| AI Act Article 53 infringement | Up to €15 million or 3% of worldwide annual turnover (whichever is higher) |

Technical Documentation and Record-Keeping (Article 53)

Obligation: Record Keeping
Status: enforcing
Effective: Aug 2, 2025
Risk tier: all
Scope: providers

Requirements

| Requirement | Details |
| --- | --- |
| Model Documentation Form maintenance | Keep the Model Documentation Form current and updated as the model evolves |
| Training records | Maintain records of training data characteristics, sources, and processing |
| Compute and energy records | Document computational resources and energy consumption used in training |
| Confidential disclosure | Provide documentation to the AI Office under confidentiality protections when requested |

Penalties

| Violation | Fine |
| --- | --- |
| AI Act Article 53 infringement | Up to €15 million or 3% of worldwide annual turnover (whichever is higher) |

Systemic Risk Assessment (Article 55)

Obligation: Risk Assessment
Status: enforcing
Effective: Aug 2, 2025
Risk tier: high
Scope: providers
Tags: high-impact
Applies only to the most powerful GPAI models (above 10²⁵ FLOPs training compute, or Commission-designated). The Safety and Security chapter operationalizes the most demanding tier of EU AI regulation — requiring state-of-the-art adversarial testing, red-teaming, and cybersecurity measures for models that pose systemic risks to the EU.
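Whether a model clears the 10²⁵ FLOP threshold can be roughly estimated before training. The sketch below uses the common ~6 × parameters × training-tokens heuristic for dense transformer training compute; that heuristic and the example model sizes are assumptions for illustration, not the Act's calculation method.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * parameters * training_tokens

def is_presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# 70B parameters on 15T tokens: 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below threshold.
print(is_presumed_systemic_risk(7e10, 1.5e13))  # False
# 400B parameters on 15T tokens: 6 * 4e11 * 1.5e13 = 3.6e25 FLOPs, above threshold.
print(is_presumed_systemic_risk(4e11, 1.5e13))  # True
```

The Commission can also designate a model as systemic-risk regardless of compute, so this estimate is a screening check, not a compliance determination.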

Requirements

| Requirement | Details |
| --- | --- |
| Systemic risk assessment | Assess and mitigate systemic risks arising from the GPAI model, including risks to health, safety, fundamental rights, society, and democracy |
| Adversarial testing | Conduct adversarial testing and red-teaming to identify dangerous capabilities |
| Cybersecurity measures | Implement cybersecurity controls appropriate to the model's risk level |
| Safety practices | Apply state-of-the-art safety practices for high-capability model development and deployment |
| Ongoing monitoring | Continuously monitor for emerging systemic risks post-deployment |

Penalties

| Violation | Fine |
| --- | --- |
| AI Act Article 55 infringement | Up to €15 million or 3% of worldwide annual turnover (whichever is higher) |