About

Why This Exists

AI regulation is changing fast. New laws are introduced, amended, and replaced across dozens of jurisdictions simultaneously. Keeping up is a full-time job, and most organizations don't have someone dedicated to it.

This project started as a practical need. While advising clients on AI governance at Snap Synapse, we needed to know which regulations applied, when they took effect, and what they required. No single resource gave us a clear, structured answer. Existing trackers offered editorial commentary or jurisdiction-by-jurisdiction narratives, but nothing that could answer the basic question: what changed this week that affects what I need to do?

We automated the research, structured the data, and realized the result was useful beyond our own work. EveryAILaw.com is that result: a free, open, structured reference for anyone trying to make sense of AI regulation and comply with it.

Previously known as the AI Regulation Reference at aireg.snapsynapse.com. Same data, same team, better name.

How It's Different

Most regulation trackers organize by jurisdiction or by law. We organize by obligation: the thing you actually have to do. Transparency, human oversight, risk assessment. These requirements are stable even as the specific laws implementing them change. A regulation can be amended or replaced overnight (as Colorado demonstrated in March 2025, replacing its entire AI regulatory framework mid-session), but the underlying compliance obligations persist.

This obligation-first approach means you can ask "Which jurisdictions require explainability?" rather than reading through a dozen different laws to piece it together yourself.

Authority → Regulation → Provision → Obligation
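The hierarchy above can be sketched as nested record types. This is a minimal Python sketch, not the site's actual schema; the field names (`citation`, `obligation_ids`, etc.) are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Provision:
    # A specific article or section of a regulation, mapped to the
    # stable obligations it implements. Field names are hypothetical.
    citation: str
    obligation_ids: list = field(default_factory=list)

@dataclass
class Regulation:
    # A single legal instrument, composed of provisions.
    name: str
    jurisdiction: str
    provisions: list = field(default_factory=list)

@dataclass
class Authority:
    # The body that issues or enforces regulations.
    name: str
    regulations: list = field(default_factory=list)

def jurisdictions_requiring(authorities, obligation_id):
    """The obligation-first query: which jurisdictions require X?"""
    return {
        reg.jurisdiction
        for auth in authorities
        for reg in auth.regulations
        for prov in reg.provisions
        if obligation_id in prov.obligation_ids
    }
```

With this shape, "Which jurisdictions require transparency?" is a single traversal rather than a read through every law, which is the point of the obligation-first model.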

Who It's For

Anyone responsible for understanding or complying with AI regulations: compliance teams, GRC practitioners, product managers building AI features, legal counsel advising on AI risk, policy researchers tracking the regulatory landscape, and executives making go/no-go decisions about AI deployment.

Whether you're a startup deploying your first AI feature or an enterprise operating across multiple jurisdictions, the goal is the same: make it easy to see what applies to you and what's changed.

Coverage

This reference currently tracks 42 regulations across 31 jurisdictions spanning the EU (plus national-level laws in Italy, Malta, and Hungary), United Kingdom, United States (federal, 8 states, and NYC), China, India, South Korea, Vietnam, Japan, Australia (including NSW), Mexico, Qatar, El Salvador, Kazakhstan, and Taiwan. Globally, 257 jurisdictions have been assessed, including all 193 UN member states.

Coverage includes binding legislation (EU AI Act, South Korea AI Basic Act, Vietnam AI Law), executive orders (US AI state law preemption, federal AI procurement requirements), sector-specific rules (DORA for financial services, China's generative AI measures), voluntary frameworks (Singapore's AI Governance Framework, NIST AI RMF), and "sleeper" provisions in privacy and sector laws that catch AI use even though they weren't written as AI regulation (India DPDP Act, Colorado CPA Rules, Australia Privacy Act reforms). See the Insights page for tagged provisions worth knowing about.

Scope & Exclusions

A law or regulation is in scope when it creates ongoing compliance obligations for AI developers or deployers that map to at least one tracked obligation (transparency, bias prevention, risk assessment, human oversight, etc.). Before adding a regulation, it must pass six tests:

  1. New obligation? Does it create a new compliance process, or merely extend an existing prohibition to AI content?
  2. Ongoing compliance? Does it require sustained processes (audits, assessments, disclosures), or just impose a one-time penalty?
  3. Broad enough? Is the obligation general enough to affect AI governance, or limited to a single content type or election window?
  4. Right audience? Does it impose obligations on private-sector developers or deployers, not just government agencies?
  5. Real obligation? Does it create enforceable requirements, not just definitions or declarations?
  6. Not already covered? Does it add a new compliance dimension, or duplicate a pattern already tracked?

Laws that fail any test are catalogued in the exclusions list with the specific principle applied. This serves as a decision cache: when new laws are proposed or feedback comes in, we check the exclusions list first rather than re-evaluating from scratch. The full exclusions list with all 156 evaluated laws is available via the exclusions.json API endpoint.
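The decision-cache lookup described above might look like the following sketch. The JSON shape here is an assumption for illustration, not the actual exclusions.json schema, and the sample entry is invented:

```python
import json

# Hypothetical entry shape; the real exclusions.json schema may differ,
# and this sample law is invented for illustration.
SAMPLE_EXCLUSIONS = json.loads("""
[
  {"law": "Example Deepfake Election Act",
   "failed_test": "Broad enough?",
   "principle": "Limited to a single content type and election window"}
]
""")

def check_exclusions(law_name, exclusions):
    """Return the cached exclusion decision for a law, if one exists."""
    for entry in exclusions:
        if entry["law"].lower() == law_name.lower():
            return entry
    # Not cached: the law must be evaluated against the six tests.
    return None
```

A cache hit returns the prior decision and the principle applied; a miss means the six tests run from scratch.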

Watch List

Some jurisdictions are developing AI regulatory frameworks that aren't yet actionable for compliance but are worth monitoring. We maintain a watch list for these:

A watch list item is promoted to a full regulation entry when binding legislation is enacted, enforceable rules are published, or voluntary frameworks gain sufficient adoption to be considered de facto standards.

Data Model

Instrument Types

Every instrument in this reference is classified by type. Understanding these distinctions helps you assess how much compliance weight to give each instrument:

Law
Enacted by a legislative body (parliament, congress, diet). Primary legislation with binding force. Examples: EU AI Act, South Korea AI Basic Act, Brazil AI Bill.
Regulation
Issued by an executive or administrative agency under authority delegated by a law. Sometimes called "rules" or "secondary legislation." Examples: Colorado CPA Rules, CMS Medicare Advantage Rule, China's Generative AI Interim Measures.
Decree / Executive Order
Directive issued by a head of state or government. Binding on the executive branch and, through procurement or enforcement directives, can create de facto compliance obligations for the private sector. Examples: EO 14319 (Preventing Woke AI — federal LLM procurement requirements), EO on AI State Law Preemption.
Standard
A defined set of criteria or specifications by a recognized body that organizations measure against or certify to. Voluntary unless referenced by a binding regulation. Examples: ISO 42001 (AI Management System), OECD AI Principles.
Framework
A structured approach or methodology for organizing thinking and action, without specific pass/fail criteria. Provides a process, not a checklist. Examples: NIST AI RMF (Govern, Map, Measure, Manage), Singapore Model AI Governance Framework.

The key distinction: a standard says "meet these criteria," a framework says "follow this process." Both are voluntary unless a binding regulation references them, which happens frequently. The EU AI Act references ISO 42001 for conformity assessment, and several US state laws offer safe harbor for NIST AI RMF alignment. The standards crosswalk maps each standard to the regulatory obligations it addresses, so you can see which ones help you comply.
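A crosswalk of this kind is essentially a many-to-many mapping from standards to obligations. A minimal sketch, where the entries are illustrative examples only and not the site's actual crosswalk:

```python
# Illustrative crosswalk: standard -> obligations it addresses.
# These entries are examples for demonstration, not the real mapping.
CROSSWALK = {
    "ISO 42001": {"risk assessment", "human oversight"},
    "NIST AI RMF": {"risk assessment", "transparency"},
}

def standards_covering(obligation):
    """List standards that address a given obligation, alphabetically."""
    return sorted(std for std, obs in CROSSWALK.items() if obligation in obs)
```

The reverse lookup is what makes a crosswalk useful: starting from an obligation you must meet, you get the standards that can evidence compliance with it.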

Verification & Update Schedule

This site currently tracks 42 regulations, 10 obligations, 120 provisions, 42 authorities, and 9 standards. Data freshness matters for regulatory compliance. Here's how we keep it current:

Built for Humans and Agents

This site is designed to be consumed by AI agents as easily as by humans. If you use AI assistants, compliance copilots, or agentic workflows, point them here:

JSON API

All data is available as structured JSON. No authentication, no rate limits:
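As a sketch of how a script or agent might consume the API: the base domain is the site's, but the endpoint path and the `jurisdiction` field name below are assumptions, not the documented API:

```python
import json
from urllib.request import urlopen

BASE_URL = "https://everyailaw.com"

def fetch_json(path):
    """Fetch a JSON resource; no auth token or API key is needed.
    The path is whatever endpoint the site documents, e.g. a
    hypothetical 'regulations.json'."""
    with urlopen(f"{BASE_URL}/{path}") as resp:
        return json.load(resp)

def regulations_in(regulations, jurisdiction):
    """Filter a regulations payload by jurisdiction.
    The 'jurisdiction' field name is an assumed schema detail."""
    return [r for r in regulations if r.get("jurisdiction") == jurisdiction]
```

Because the responses are plain JSON with no authentication, the same two functions work equally well inside an agentic workflow or a one-off compliance script.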

Sister Sites

EveryAILaw.com is part of a family of structured references from Snap Synapse:

Other Resources

This site focuses on structured, machine-readable obligation tracking. For complementary perspectives (editorial analysis, policy commentary, and interactive exploration), these resources are excellent:

Contributing

Data corrections and feedback are welcome. Contact PAICE to report errors or suggest additions.

Disclaimer

Nothing on this site constitutes legal advice. This is a reference tool designed to help you track what's changing and understand when you may need to seek qualified legal counsel. Always consult the actual regulatory text and a qualified attorney for compliance decisions.