Insights

Provisions worth knowing about — sleeper regulations, upcoming deadlines, and high-impact requirements that compliance teams often miss.

Sleeper (15 provisions)

Regulations that are not branded as AI-specific but still catch AI use: privacy laws, financial rules, and sector regulations with provisions that apply to automated decision-making.

Work Health and Safety Amendment (Digital Work Systems) Act 2026 · New South Wales · Risk Assessment · enforcing
NSW's Digital Work Systems Act catches every employer using algorithmic scheduling, AI-driven monitoring, automated performance management, or platform-based work allocation. The definition of "digital work system" explicitly includes algorithms, AI, automation, and online platforms — making this one of the broadest workplace AI laws globally. Any company operating in NSW with AI-assisted HR, logistics, or workforce management tools must assess and manage WHS risks from those systems.
WHS entry permit holders (typically union officials) gain rights to access and inspect digital work systems — including AI algorithms and monitoring tools. Employers must provide "reasonable assistance" on 48 hours' notice. This creates a transparency obligation where the AI/algorithmic logic behind workplace decisions becomes inspectable by worker representatives, not just regulators.
Australia's Privacy Act reforms make AI transparency mandatory through privacy law — not AI-specific legislation. Any organization using personal information in automated decisions must disclose the types of data used, the logic applied, and the most influential factors. Even "human in the loop" doesn't exempt you if the algorithm plays a substantial role. The OAIC has stated that "the algorithm decided" is not an acceptable explanation.
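One way to operationalise this duty is to keep a structured disclosure record per automated decision and generate the privacy-notice text from it. A minimal sketch, assuming a hypothetical ADMDisclosure schema (the reform mandates the substance of the disclosure, not any particular format):

```python
from dataclasses import dataclass

@dataclass
class ADMDisclosure:
    """One automated-decision entry for a privacy notice.

    Hypothetical schema: the reform requires disclosing data types,
    logic, and influential factors, but prescribes no format.
    """
    decision: str                    # the decision the algorithm substantially shapes
    personal_info_types: list[str]   # types of personal information used
    logic_summary: str               # plain-language description of the logic applied
    top_factors: list[str]           # most influential factors, ranked
    human_in_loop: bool              # a substantial algorithmic role triggers disclosure anyway

loan_screening = ADMDisclosure(
    decision="initial loan application screening",
    personal_info_types=["income history", "repayment history", "employment status"],
    logic_summary="gradient-boosted risk score compared against a fixed approval threshold",
    top_factors=["repayment history", "income stability", "existing debt"],
    human_in_loop=True,  # does not exempt the decision from disclosure
)
```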
Privacy Act 1988 — Automated Decision-Making Reforms · Australia · Data Governance · enacted
The reformed Privacy Act explicitly prohibits collecting broad datasets "in case they might be useful" for AI training. Each data input to an AI system must be demonstrably necessary for the specific purpose. This directly impacts how organizations build training datasets and deploy AI models using personal information.
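In practice this pushes teams toward a per-field necessity audit before any dataset reaches training. A sketch of that idea, with a hypothetical purpose registry (field names and purposes are illustrative):

```python
# Every input feeding an AI system must map to a documented purpose;
# anything collected "in case it might be useful" fails the check.
PURPOSE_REGISTRY = {
    "postcode": "credit risk model: geographic default-rate feature",
    "repayment_history": "credit risk model: primary predictor",
}

def unjustified_fields(training_fields: list[str]) -> list[str]:
    """Return fields with no documented necessity; drop these before training."""
    return [f for f in training_fields if f not in PURPOSE_REGISTRY]

assert unjustified_fields(
    ["postcode", "repayment_history", "browsing_history"]
) == ["browsing_history"]  # no documented purpose: cannot be collected for training
```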
Colorado Privacy Act Rules (4 CCR 904-3) · Colorado · Human Oversight · enforcing
These privacy-law definitions directly govern AI-driven profiling in hiring, lending, and insurance — even though the rules predate and never mention AI. The three-tier automation framework determines consent and opt-out requirements, making this one of the most consequential provisions for organizations using automated decision-making in Colorado.
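The tiers turn on how much a human can actually change the outcome, which is what drives the opt-out analysis. A rough sketch of the framework as a lookup (tier definitions paraphrased from the rules; the opt-out mapping is a simplification of the rule text, not legal advice):

```python
from enum import Enum

class AutomationTier(Enum):
    """The three tiers, paraphrased from the Colorado rules."""
    HUMAN_INVOLVED = "a human meaningfully considers the data and can change the outcome"
    HUMAN_REVIEWED = "a human reviews the output without meaningful consideration"
    SOLELY_AUTOMATED = "no human involvement in the decision"

# Simplified mapping of tier to the profiling opt-out right for decisions
# with legal or similarly significant effects.
OPT_OUT_APPLIES = {
    AutomationTier.SOLELY_AUTOMATED: True,
    AutomationTier.HUMAN_REVIEWED: True,   # cursory review does not escape the opt-out
    AutomationTier.HUMAN_INVOLVED: False,  # meaningful human involvement changes the analysis
}
```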
Colorado Privacy Act Rules (4 CCR 904-3) · Colorado · Risk Assessment · enforcing
Any organization using AI for profiling in Colorado — credit scoring, insurance underwriting, employment screening — must conduct a Data Protection Assessment under this rule, regardless of whether the AI system was the target of the regulation. This is the provision a lawyer friend called a "real sleeper", one that many compliance teams miss.
Predates most US AI laws; sector-specific, but it establishes an early template for algorithmic discrimination regulation. The insurance industry must proactively demonstrate non-discrimination.
Commissioner rules require governance and testing frameworks for algorithmic models used in insurance underwriting and claims.
Digital Personal Data Protection Act 2023 (DPDP) · India · Data Governance · enforcing
India's foundational data protection law applies to all automated processing of personal data — including AI inference, profiling, and recommendation systems. There is no explicit ADM opt-out right (unlike GDPR Article 22), but data accuracy and consent obligations bind AI deployers handling Indian user data. Penalties reach ₹250 crore (about US$30M) per breach.
Law on Artificial Intelligence · Italy · Data Governance · enforcing
Italy's secondary-use pathway for health data is a sleeper provision with global reach: any organisation conducting AI research using Italian patient data — including non-Italian researchers accessing Italian health datasets — must satisfy both the GDPR and a 30-day Garante notification before processing. This covers clinical AI model training, drug discovery AI, and public health AI research.
Mexico's revised data protection law requires controllers to disclose in privacy notices the use of AI, automated decision-making systems, or algorithms — including the algorithmic logic, significance of processing, and potential consequences. This catches any AI system processing personal data of Mexican residents, even if the deployer is not Mexico-based.
The revised LFPDPPP mandates human-in-the-loop processes for automated decision-making, particularly in high-risk scenarios. Combined with the right to object to ADM, this creates a dual obligation: deploy human oversight AND honor opt-out requests. Secondary regulations (pending) may further define high-risk thresholds.
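The dual obligation suggests a routing pattern where the opt-out check runs before any model is consulted, and high-risk paths always gate on a human. A minimal sketch, with the flow and the high-risk flag as assumptions (pending secondary regulations may define the actual thresholds):

```python
def route_adm_decision(opted_out: bool, high_risk: bool, ai_recommendation: str) -> str:
    """Illustrative routing for the LFPDPPP's dual obligation."""
    if opted_out:
        return "escalate to a fully human decision"  # honor the right to object to ADM
    if high_risk:
        return f"hold for human approval: {ai_recommendation}"  # human-in-the-loop gate
    return ai_recommendation  # lower-risk path: the AI output may stand

print(route_adm_decision(opted_out=False, high_risk=True, ai_recommendation="deny claim"))
```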
EO 14319 — Preventing Woke AI in the Federal Government · United States · Transparency · enforcing
Creates de facto compliance obligations for any AI vendor selling LLMs to the US federal government. Agencies must require vendor documentation including model cards, data cards, acceptable use policies, and risk disclosures. Agencies must reject non-compliant models. Not branded as AI regulation, but effectively mandates transparency for a significant market segment.
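For vendors, the practical move is to treat the documentation set as a release gate. A sketch of that check, with artifact names mirroring the EO's list and the flat-file bundle layout as an assumption:

```python
from pathlib import Path

# Documentation the EO tells agencies to require from LLM vendors.
REQUIRED_ARTIFACTS = {
    "model_card.md",
    "data_card.md",
    "acceptable_use_policy.md",
    "risk_disclosures.md",
}

def missing_artifacts(bundle_dir: str) -> set[str]:
    """Return required artifacts absent from a vendor's submission bundle."""
    present = {p.name for p in Path(bundle_dir).iterdir() if p.is_file()}
    return REQUIRED_ARTIFACTS - present

# Agencies must reject non-compliant models, so an empty set is the bar:
# missing_artifacts("submission/") == set()
```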
EO 14319 — Preventing Woke AI in the Federal Government · United States · Data Governance · enforcing
Vendors must disclose training data provenance, limitations, and risk mitigations as a condition of federal procurement. While framed as ensuring "unbiased AI," the practical effect is a data governance disclosure requirement for the federal AI supply chain.

Upcoming (10 provisions)

Provisions approaching their enforcement date. Worth tracking now to prepare for compliance.

Australia's Privacy Act reforms make AI transparency mandatory through privacy law — not AI-specific legislation. Any organization using personal information in automated decisions must disclose the types of data used, the logic applied, and the most influential factors. Even "human in the loop" doesn't exempt you if the algorithm plays a substantial role. The OAIC has stated that "the algorithm decided" is not an acceptable explanation.
The first binding international treaty to require notification when a person is interacting with an AI system rather than a human. It applies across all sectors in every ratifying state, and with the US, UK, and EU among its signatories it sets a genuinely transatlantic baseline for AI disclosure.
Article 16 goes further than most voluntary frameworks by requiring States to assess whether specific AI uses should be subject to moratoria or outright bans — a tool available under binding international law that has no equivalent in current national AI regulations.
CETS 225 is the first international treaty to establish a right to contest AI decisions. Articles 14–15 create binding remedies and procedural safeguards — including appeal rights and notification — that States must embed in domestic law, surpassing any existing voluntary framework on human oversight.
EU AI Act · European Union · Record Keeping · enacted

High-impact (19 provisions)

Provisions with significant penalties, broad scope, or sweeping requirements that affect many organizations.

Work Health and Safety Amendment (Digital Work Systems) Act 2026 · New South Wales · Risk Assessment · enforcing
NSW's Digital Work Systems Act catches every employer using algorithmic scheduling, AI-driven monitoring, automated performance management, or platform-based work allocation. The definition of "digital work system" explicitly includes algorithms, AI, automation, and online platforms — making this one of the broadest workplace AI laws globally. Any company operating in NSW with AI-assisted HR, logistics, or workforce management tools must assess and manage WHS risks from those systems.
The first binding international treaty to require notification when a person is interacting with an AI system rather than a human. It applies across all sectors in every ratifying state, and with the US, UK, and EU among its signatories it sets a genuinely transatlantic baseline for AI disclosure.
Article 16 goes further than most voluntary frameworks by requiring States to assess whether specific AI uses should be subject to moratoria or outright bans — a tool available under binding international law that has no equivalent in current national AI regulations.
CETS 225 is the first international treaty to establish a right to contest AI decisions. Articles 14–15 create binding remedies and procedural safeguards — including appeal rights and notification — that States must embed in domestic law, surpassing any existing voluntary framework on human oversight.
EU AI Act · European Union · Record Keeping · enacted
India's first binding synthetic media obligations: intermediaries enabling AI-generated content (deepfakes, audio/video synthesis) must embed permanent provenance metadata and prominent labels — and prevent their removal. Non-compliance forfeits safe harbor under the IT Act 2000.
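The shape of the obligation is embed-and-protect: a prominent label, a provenance record bound to the content, and tamper evidence. A toy sketch of that shape (format and names are assumptions; the rules mandate outcomes, not this encoding, and real deployments would reach for a standard such as C2PA rather than ad-hoc JSON):

```python
import hashlib
import json

def attach_provenance(content: bytes, generator_id: str) -> dict:
    """Wrap AI-generated content in a provenance record (illustrative)."""
    record = {
        "label": "AI-generated content",  # the prominent, user-facing label
        "generator": generator_id,        # which system produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds record to payload
    }
    # Cryptographically signing the record (omitted here) is what would make
    # stripping or altering the metadata detectable downstream.
    return {"provenance": record, "payload_hex": content.hex()}

bundle = attach_provenance(b"<synthetic video bytes>", generator_id="example-model-v2")
print(json.dumps(bundle["provenance"], indent=2))
```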
Law on Artificial Intelligence · Italy · Human Oversight · enforcing
Italy is the first EU member state to legislate sector-specific AI rules beyond the EU AI Act. For healthcare AI, the law establishes a hard prohibition on AI making autonomous clinical decisions — physicians retain ultimate authority regardless of AI recommendation quality. Any healthcare organisation deploying diagnostic or treatment AI in Italy must build physician-override workflows into every clinical AI deployment.
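Architecturally, the prohibition means the model's output can never be the terminal step in the workflow. A minimal sketch of that gate, with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class ClinicalRecommendation:
    patient_id: str
    ai_suggestion: str
    confidence: float

def finalize_decision(rec: ClinicalRecommendation, physician_decision: str | None) -> str:
    """Gate pattern for the autonomy prohibition (illustrative).

    The AI output is advisory only: no code path lets the suggestion become
    the clinical decision without an explicit physician action, regardless
    of model confidence.
    """
    if physician_decision is None:
        raise RuntimeError("clinical decision requires physician sign-off or override")
    return physician_decision  # the physician's call is always authoritative
```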
Law on Artificial Intelligence · Italy · Transparency · enforcing
Italy extends AI transparency duties into employment and child contexts that sit beyond the EU AI Act's direct scope. Employers using AI in recruitment or performance evaluation must disclose AI involvement to workers — creating a specific notification duty for HR technology deployments. The parental consent requirement for under-14s applies to any AI-powered product or service used by children, including education platforms, apps, and consumer AI.
Law on Artificial Intelligence · Kazakhstan · Risk Assessment · enforcing
Kazakhstan's AI law is the first in Central Asia, establishing a three-tier risk framework (minimum/medium/high) that directly mirrors the EU AI Act's approach. High-risk AI systems must use the state National AI Platform for development and testing — a unique state-platform requirement not seen in Western AI laws.
Law on Artificial Intelligence · Kazakhstan · Transparency · enforcing
Kazakhstan mandates machine-readable markings on all distributed synthetic AI outputs (images, text, video) — a technically specific requirement that affects any AI system generating content for Kazakh users. Combined with advance user notification of AI involvement, this creates dual transparency obligations covering both the content itself and the service interaction.
Law on Artificial Intelligence · Kazakhstan · Bias Prevention · enforcing
Kazakhstan's prohibition list covers social scoring and biometric discrimination — two categories that directly constrain AI systems used in hiring, lending, and public services. The ban on subconscious manipulation techniques is broadly worded and could catch persuasion AI, recommender systems, and targeted advertising tools.
Artificial Intelligence Regulations 2025 · Malta · Risk Assessment · enforcing
Malta's dual-authority model (MDIA for market surveillance, IDPC for fundamental rights) creates a practical enforcement structure that other small EU member states may follow. The early classification requirement means deployers must proactively assess whether their AI systems fall under Annex III high-risk categories before placing them on the Maltese market — not wait for a regulator to classify them.
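The early-classification duty amounts to a self-triage step before market placement. A sketch of a first-pass check (the Annex III area list here is abridged and paraphrased; the Annex itself is the authority):

```python
# High-risk areas from the EU AI Act's Annex III, paraphrased.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_asylum_border",
    "justice_democratic_processes",
}

def preliminary_classification(use_cases: set[str]) -> str:
    """First-pass triage of a deployer's use cases against Annex III."""
    hits = use_cases & ANNEX_III_AREAS
    if hits:
        return f"potentially high-risk (Annex III: {sorted(hits)}); full assessment required"
    return "no Annex III area matched; document the negative assessment"

print(preliminary_classification({"employment", "marketing_analytics"}))
```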
QCB Artificial Intelligence Guideline · Qatar · Risk Assessment · enforcing
First binding AI-specific regulation in the GCC. All QCB-licensed financial entities must establish AI governance frameworks, risk management systems, and obtain QCB pre-approval before deploying any AI system. High-risk AI systems face additional scrutiny and may require sandbox testing.
Law for the Promotion of Artificial Intelligence and Technologies · El Salvador · Conformity Assessment · enforcing
El Salvador's AI law is the first in Latin America to require registration with a national AI authority. Registration with ANIA unlocks liability protections (immunity for third-party misuse if reasonable safety measures were taken) and access to tax incentives — creating a meaningful incentive structure for compliance. All natural and legal persons engaged in AI research, development, or deployment must register.
Mandatory impact assessments apply to AI systems handling confidential, personal, or restricted data, or deployed in critical sectors (healthcare, finance, public administration). ANIA establishes the risk-assessment framework. This is a risk-based approach analogous to the EU AI Act but with a pro-innovation tilt: lighter obligations for lower-risk systems.
EO 14319 — Preventing Woke AI in the Federal Government · United States · Transparency · enforcing
Creates de facto compliance obligations for any AI vendor selling LLMs to the US federal government. Agencies must require vendor documentation including model cards, data cards, acceptable use policies, and risk disclosures. Agencies must reject non-compliant models. Not branded as AI regulation, but effectively mandates transparency for a significant market segment.
EO 14319 — Preventing Woke AI in the Federal Government · United States · Risk Assessment · enforcing
Establishes two Unbiased AI Principles — truth-seeking and ideological neutrality — that federal LLM procurements must comply with. Agencies must adopt procedures to enforce compliance and hold vendors accountable. Effectively creates a content-level compliance standard for the federal market.
Executive Order on AI State Law Preemption · United States · Risk Assessment · enforcing
Does not create compliance obligations for AI companies. Instead, directs DOJ to form a task force to challenge state AI laws on preemption, interstate commerce, and First Amendment grounds. Directly threatens enforceability of state laws tracked in this reference (Colorado SB 24-205, Illinois HB 3773, California ADS regs, NYC LL144, and others). Carveouts preserve state authority on child safety, AI infrastructure, and government procurement.
Executive Order on AI State Law Preemption · United States · Transparency · enforcing
Directs the Secretary of Commerce to evaluate existing state AI laws within 90 days and identify those that conflict with federal objectives. The evaluation includes a BEAD Program policy notice making states with conflicting AI laws ineligible for broadband funding. Also directs FTC to issue a policy statement on how the FTC Act preempts state laws mandating alterations to truthful AI outputs. No direct obligations for AI developers — but the evaluation results will shape which state laws survive federal challenge.

Cross-domain (21 provisions)

Provisions that span multiple industries: for example, a privacy rule that affects AI in hiring, lending, and insurance simultaneously.

Work Health and Safety Amendment (Digital Work Systems) Act 2026 · New South Wales · Risk Assessment · enforcing
NSW's Digital Work Systems Act catches every employer using algorithmic scheduling, AI-driven monitoring, automated performance management, or platform-based work allocation. The definition of "digital work system" explicitly includes algorithms, AI, automation, and online platforms — making this one of the broadest workplace AI laws globally. Any company operating in NSW with AI-assisted HR, logistics, or workforce management tools must assess and manage WHS risks from those systems.
WHS entry permit holders (typically union officials) gain rights to access and inspect digital work systems — including AI algorithms and monitoring tools. Employers must provide "reasonable assistance" on 48 hours' notice. This creates a transparency obligation where the AI/algorithmic logic behind workplace decisions becomes inspectable by worker representatives, not just regulators.
Australia's Privacy Act reforms make AI transparency mandatory through privacy law — not AI-specific legislation. Any organization using personal information in automated decisions must disclose the types of data used, the logic applied, and the most influential factors. Even "human in the loop" doesn't exempt you if the algorithm plays a substantial role. The OAIC has stated that "the algorithm decided" is not an acceptable explanation.
Privacy Act 1988 — Automated Decision-Making Reforms · Australia · Data Governance · enacted
The reformed Privacy Act explicitly prohibits collecting broad datasets "in case they might be useful" for AI training. Each data input to an AI system must be demonstrably necessary for the specific purpose. This directly impacts how organizations build training datasets and deploy AI models using personal information.
The first binding international treaty to require notification when a person is interacting with an AI system rather than a human. It applies across all sectors in every ratifying state, and with the US, UK, and EU among its signatories it sets a genuinely transatlantic baseline for AI disclosure.
Article 16 goes further than most voluntary frameworks by requiring States to assess whether specific AI uses should be subject to moratoria or outright bans — a tool available under binding international law that has no equivalent in current national AI regulations.
CETS 225 is the first international treaty to establish a right to contest AI decisions. Articles 14–15 create binding remedies and procedural safeguards — including appeal rights and notification — that States must embed in domestic law, surpassing any existing voluntary framework on human oversight.
The Convention's foundational human rights framework (Article 3) explicitly incorporates non-discrimination as a core principle, and Article 16's risk management mandate covers impacts on equality rights. As a treaty built on the European Convention on Human Rights, it binds AI use to existing ECtHR jurisprudence on discrimination.
Colorado Privacy Act Rules (4 CCR 904-3) · Colorado · Human Oversight · enforcing
These privacy-law definitions directly govern AI-driven profiling in hiring, lending, and insurance — even though the rules predate and never mention AI. The three-tier automation framework determines consent and opt-out requirements, making this one of the most consequential provisions for organizations using automated decision-making in Colorado.
Colorado Privacy Act Rules (4 CCR 904-3) · Colorado · Risk Assessment · enforcing
Any organization using AI for profiling in Colorado — credit scoring, insurance underwriting, employment screening — must conduct a Data Protection Assessment under this rule, regardless of whether the AI system was the target of the regulation. This is the provision a lawyer friend called a "real sleeper", one that many compliance teams miss.
Digital Personal Data Protection Act 2023 (DPDP) · India · Data Governance · enforcing
India's foundational data protection law applies to all automated processing of personal data — including AI inference, profiling, and recommendation systems. There is no explicit ADM opt-out right (unlike GDPR Article 22), but data accuracy and consent obligations bind AI deployers handling Indian user data. Penalties reach ₹250 crore (about US$30M) per breach.
Law on Artificial Intelligence · Italy · Human Oversight · enforcing
Italy is the first EU member state to legislate sector-specific AI rules beyond the EU AI Act. For healthcare AI, the law establishes a hard prohibition on AI making autonomous clinical decisions — physicians retain ultimate authority regardless of AI recommendation quality. Any healthcare organisation deploying diagnostic or treatment AI in Italy must build physician-override workflows into every clinical AI deployment.
Law on Artificial Intelligence · Italy · Transparency · enforcing
Italy extends AI transparency duties into employment and child contexts that sit beyond the EU AI Act's direct scope. Employers using AI in recruitment or performance evaluation must disclose AI involvement to workers — creating a specific notification duty for HR technology deployments. The parental consent requirement for under-14s applies to any AI-powered product or service used by children, including education platforms, apps, and consumer AI.
Law on Artificial Intelligence · Italy · Data Governance · enforcing
Italy's secondary-use pathway for health data is a sleeper provision with global reach: any organisation conducting AI research using Italian patient data — including non-Italian researchers accessing Italian health datasets — must satisfy both the GDPR and a 30-day Garante notification before processing. This covers clinical AI model training, drug discovery AI, and public health AI research.
Mexico's revised data protection law requires controllers to disclose in privacy notices the use of AI, automated decision-making systems, or algorithms — including the algorithmic logic, significance of processing, and potential consequences. This catches any AI system processing personal data of Mexican residents, even if the deployer is not Mexico-based.
The revised LFPDPPP mandates human-in-the-loop processes for automated decision-making, particularly in high-risk scenarios. Combined with the right to object to ADM, this creates a dual obligation: deploy human oversight AND honor opt-out requests. Secondary regulations (pending) may further define high-risk thresholds.
EO 14319 — Preventing Woke AI in the Federal Government · United States · Transparency · enforcing
Creates de facto compliance obligations for any AI vendor selling LLMs to the US federal government. Agencies must require vendor documentation including model cards, data cards, acceptable use policies, and risk disclosures. Agencies must reject non-compliant models. Not branded as AI regulation, but effectively mandates transparency for a significant market segment.
EO 14319 — Preventing Woke AI in the Federal Government · United States · Data Governance · enforcing
Vendors must disclose training data provenance, limitations, and risk mitigations as a condition of federal procurement. While framed as ensuring "unbiased AI," the practical effect is a data governance disclosure requirement for the federal AI supply chain.
EO 14319 — Preventing Woke AI in the Federal Government · United States · Risk Assessment · enforcing
Establishes two Unbiased AI Principles — truth-seeking and ideological neutrality — that federal LLM procurements must comply with. Agencies must adopt procedures to enforce compliance and hold vendors accountable. Effectively creates a content-level compliance standard for the federal market.
Executive Order on AI State Law Preemption · United States · Risk Assessment · enforcing
Does not create compliance obligations for AI companies. Instead, directs DOJ to form a task force to challenge state AI laws on preemption, interstate commerce, and First Amendment grounds. Directly threatens enforceability of state laws tracked in this reference (Colorado SB 24-205, Illinois HB 3773, California ADS regs, NYC LL144, and others). Carveouts preserve state authority on child safety, AI infrastructure, and government procurement.
Executive Order on AI State Law Preemption · United States · Transparency · enforcing
Directs the Secretary of Commerce to evaluate existing state AI laws within 90 days and identify those that conflict with federal objectives. The evaluation includes a BEAD Program policy notice making states with conflicting AI laws ineligible for broadband funding. Also directs FTC to issue a policy statement on how the FTC Act preempts state laws mandating alterations to truthful AI outputs. No direct obligations for AI developers — but the evaluation results will shape which state laws survive federal challenge.