About
Why This Exists
AI regulation is changing fast. New laws are introduced, amended, and replaced across dozens of jurisdictions simultaneously. Keeping up is a full-time job, and most organizations don't have someone dedicated to it.
This project started as a practical need. While advising clients on AI governance at Snap Synapse, we needed to know which regulations applied, when they took effect, and what they required. No single resource gave us a clear, structured answer. Existing trackers offered editorial commentary or jurisdiction-by-jurisdiction narratives, but nothing that could answer the basic question: what changed this week that affects what I need to do?
We automated the research, structured the data, and realized the result was useful beyond our own work. EveryAILaw.com is that result: a free, open, structured reference for anyone who needs to comply with AI regulations and is having trouble making sense of them.
Previously known as the AI Regulation Reference at aireg.snapsynapse.com. Same data, same team, better name.
How It's Different
Most regulation trackers organize by jurisdiction or by law. We organize by obligation: the thing you actually have to do. Transparency, human oversight, risk assessment. These requirements are stable even as the specific laws implementing them change. A regulation can be amended or replaced overnight (as Colorado demonstrated in March 2025, replacing its entire AI regulatory framework mid-session), but the underlying compliance obligations persist.
This obligation-first approach means you can ask "Which jurisdictions require explainability?" rather than reading through a dozen different laws to piece it together yourself.
Authority → Regulation → Provision → Obligation
Who It's For
Anyone responsible for understanding or complying with AI regulations: compliance teams, GRC practitioners, product managers building AI features, legal counsel advising on AI risk, policy researchers tracking the regulatory landscape, and executives making go/no-go decisions about AI deployment.
Whether you're a startup deploying your first AI feature or an enterprise operating across multiple jurisdictions, the goal is the same: make it easy to see what applies to you and what's changed.
Coverage
This reference currently tracks 42 regulations across 31 jurisdictions spanning the EU (plus Italy, Malta, Hungary), United Kingdom, United States (federal, 8 states, and NYC), China, India, South Korea, Vietnam, Japan, Australia (including NSW), Mexico, Qatar, El Salvador, Kazakhstan, and Taiwan — with 257 jurisdictions assessed globally, including all 193 UN member states.
Coverage includes binding legislation (EU AI Act, South Korea AI Basic Act, Vietnam AI Law), executive orders (US AI state law preemption, federal AI procurement requirements), sector-specific rules (DORA for financial services, China's generative AI measures), voluntary frameworks (Singapore's AI Governance Framework, NIST AI RMF), and "sleeper" provisions in privacy and sector laws that catch AI use even though they weren't written as AI regulation (India DPDP Act, Colorado CPA Rules, Australia Privacy Act reforms). See the Insights page for tagged provisions worth knowing about.
Scope & Exclusions
A law or regulation is in scope when it creates ongoing compliance obligations for AI developers or deployers that map to at least one tracked obligation (transparency, bias prevention, risk assessment, human oversight, etc.). Before adding a regulation, it must pass six tests:
- New obligation? Does it create a new compliance process, or merely extend an existing prohibition to AI content?
- Ongoing compliance? Does it require sustained processes (audits, assessments, disclosures), or just impose a one-time penalty?
- Broad enough? Is the obligation general enough to affect AI governance, or limited to a single content type or election window?
- Right audience? Does it impose obligations on private-sector developers or deployers, not just government agencies?
- Real obligation? Does it create enforceable requirements, not just definitions or declarations?
- Not already covered? Does it add a new compliance dimension, or duplicate a pattern already tracked?
Laws that fail any test are catalogued in the exclusions list with the specific principle applied. This serves as a decision cache — when new laws are proposed or feedback comes in, the exclusions list is checked first rather than re-evaluating from scratch. The full exclusions list with all 156 evaluated laws is available via the exclusions.json API endpoint.
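The six-test gate above is a simple all-or-nothing check. As a minimal sketch (test names are paraphrased from the list, and the function is illustrative, not the project's actual tooling), the scope decision and the exclusions-cache record it produces might look like:

```python
# Sketch of the six-test scope gate: a law is in scope only if every test
# passes; any failure excludes it, along with the principle(s) that failed.
# Test identifiers here are paraphrases of the list above, not real IDs.

SCOPE_TESTS = [
    "new_obligation", "ongoing_compliance", "broad_enough",
    "right_audience", "real_obligation", "not_already_covered",
]

def in_scope(results: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (in_scope, failed_tests); a missing answer counts as a fail."""
    failed = [t for t in SCOPE_TESTS if not results.get(t, False)]
    return (not failed, failed)

# A law passing all six tests is tracked:
verdict, failed = in_scope({t: True for t in SCOPE_TESTS})
print(verdict, failed)  # True []
```

Caching the `failed` list per law is what makes the exclusions list usable as a decision cache: a proposed law is first matched against previously recorded failures before any fresh evaluation.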
Watch List
Some jurisdictions are developing AI regulatory frameworks that aren't yet actionable for compliance but are worth monitoring. We maintain a watch list for these:
- Malaysia: AI Governance Bill nearly complete (Cabinet presentation planned June 2026). Moving toward prescriptive high-risk controls.
- Indonesia, Philippines, Thailand: Piloting sectoral AI rules and voluntary frameworks. Early stage.
- Latin America: Colombia and Chile have emerging frameworks.
- Middle East & Africa: UAE is building AI governance infrastructure rapidly — claims of a 2026 federal AI Act are circulating but unverified against official sources. Saudi Arabia has a national AI strategy but no binding legislation. South Africa and Nigeria have data protection frameworks that may expand.
- ISO/IEC JTC 1/SC 42 standards portfolio: Several SC 42 standards beyond the four currently tracked (ISO/IEC 42001, 23894, 38507, 42005) are cited alongside AI laws: ISO/IEC 5259 series (data quality), TR 24027 (bias), TS 8200 (controllability), 5338 (AI lifecycle), 12792 (transparency taxonomy), 25059 (quality model), 24029 (robustness), and 42006 (conformity assessment certification, in development). See the watch list for the full portfolio table.
A watch list item is promoted to a full regulation entry when binding legislation is enacted, enforceable rules are published, or voluntary frameworks gain sufficient adoption to be considered de facto standards.
Data Model
- Obligation: Vendor/jurisdiction-neutral compliance requirement (the stable anchor)
- Provision: Specific regulation article implementing obligation(s)
- Regulation: Binding legislative, administrative, or executive instrument
- Standard / Framework: Voluntary instrument (see definitions below)
- Authority: Regulatory body responsible for enforcement
- Jurisdiction: Five-level hierarchy: supranational, national, subnational, regional, municipal
- Evidence: Source citations linking every claim to official text
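The hierarchy above can be sketched as simple record types. This is an illustrative sketch only; the field names and example values are assumptions, not the site's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Authority → Regulation → Provision → Obligation
# hierarchy. Field names are hypothetical, not the site's real JSON schema.

@dataclass(frozen=True)
class Obligation:
    id: str        # vendor/jurisdiction-neutral anchor, e.g. "transparency"
    name: str

@dataclass
class Provision:
    article: str                # e.g. "Art. 13"
    obligation_ids: list[str]   # obligations this provision implements
    evidence_url: str           # citation linking the claim to official text

@dataclass
class Regulation:
    id: str
    title: str
    jurisdiction: str           # one of the five hierarchy levels
    provisions: list[Provision] = field(default_factory=list)

@dataclass
class Authority:
    name: str
    regulations: list[Regulation] = field(default_factory=list)

# Provisions link a concrete regulation down to the stable obligation anchors:
art13 = Provision("Art. 13", ["transparency"], "https://example.org/official-text")
ai_act = Regulation("eu-ai-act", "EU AI Act", "supranational", [art13])
authority = Authority("European Commission", [ai_act])
```

The design point is that obligations sit at the bottom as the stable anchor: regulations and provisions can be amended or replaced without invalidating the obligation they map to.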
Instrument Types
Every instrument in this reference is classified by type. Understanding these distinctions helps you assess how much compliance weight to give each:
- Law
- Enacted by a legislative body (parliament, congress, diet). Primary legislation with binding force. Examples: EU AI Act, South Korea AI Basic Act, Brazil AI Bill.
- Regulation
- Issued by an executive or administrative agency under authority delegated by a law. Sometimes called "rules" or "secondary legislation." Examples: Colorado CPA Rules, CMS Medicare Advantage Rule, China's Generative AI Interim Measures.
- Decree / Executive Order
- Directive issued by a head of state or government. Binding on the executive branch and, through procurement or enforcement directives, can create de facto compliance obligations for the private sector. Examples: EO 14319 (Preventing Woke AI — federal LLM procurement requirements), EO on AI State Law Preemption.
- Standard
- A defined set of criteria or specifications issued by a recognized body that organizations measure against or certify to. Voluntary unless referenced by a binding regulation. Examples: ISO 42001 (AI Management System), OECD AI Principles.
- Framework
- A structured approach or methodology for organizing thinking and action, without specific pass/fail criteria. Provides a process, not a checklist. Examples: NIST AI RMF (Govern, Map, Measure, Manage), Singapore Model AI Governance Framework.
The key distinction: a standard says "meet these criteria," a framework says "follow this process." Both are voluntary unless a binding regulation references them, which happens frequently. The EU AI Act references ISO 42001 for conformity assessment, and several US state laws offer safe harbor for NIST AI RMF alignment. The standards crosswalk maps which standards address which regulatory obligations, so you can see which standards help you comply.
Verification & Update Schedule
This site currently tracks 42 regulations, 10 obligations, 120 provisions, 42 authorities, and 9 standards. Data freshness matters for regulatory compliance. Here's how we keep it current:
- Weekly automated verification (Saturdays): A multi-model AI consensus cascade checks all provisions against official sources. Three independent models must agree before flagging a change.
- Human review: All flagged changes are reviewed by a human before merging. No auto-merge of verification results.
- 30-day staleness threshold: Any provision not verified within 30 days is automatically flagged for re-verification.
- Feedback: Data corrections and regulation suggestions are accepted via PAICE and reviewed before inclusion.
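The consensus step can be approximated like this. The verdict format and the requirement of exactly three checks are illustrative assumptions; only the rule itself (all independent models must agree before a change is flagged, and flags go to a human, never auto-merge) comes from the process above:

```python
# Sketch of a multi-model consensus check: a provision change is flagged
# for human review only when all independent model verdicts agree.
# Model names and the verdict shape are illustrative assumptions.

def consensus_flag(verdicts: dict[str, bool], required: int = 3) -> bool:
    """Flag a change only if at least `required` models all report it."""
    if len(verdicts) < required:
        return False  # not enough independent checks to trust the result
    return all(verdicts.values())

# Unanimous agreement → flagged (then a human reviews before merge):
print(consensus_flag({"model_a": True, "model_b": True, "model_c": True}))
# Any disagreement → not flagged:
print(consensus_flag({"model_a": True, "model_b": False, "model_c": True}))
```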
Built for Humans and Agents
This site is designed to be consumed by AI agents as easily as by humans. If you use AI assistants, compliance copilots, or agentic workflows, point them here:
- llms.txt: Structured context so LLMs understand the site and its data
- agents.json: Agent discovery, capabilities, API endpoints, available actions
- index.xml: RSS feed for regulation updates
JSON API
All data is available as structured JSON. No authentication, no rate limits:
- api/v1/index.json: API manifest
- api/v1/obligations.json: All obligations
- api/v1/regulations.json: All regulations
- api/v1/provisions.json: All provisions
- api/v1/obligation-matrix.json: Coverage matrix
- api/v1/exclusions.json: Evaluated laws excluded from tracking (with exclusion principles and categories)
- api/v1/crosswalk.json: Standards-to-obligations crosswalk (which standards help satisfy which regulatory obligations)
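As a sketch of how an agent might use the obligation matrix to answer "which jurisdictions require explainability?", here is a filter over a hypothetical payload. The sample structure is an assumption for illustration; consult the live `api/v1/obligation-matrix.json` for the actual schema:

```python
import json

# Hypothetical snippet in a shape the obligation matrix *might* take;
# the real api/v1/obligation-matrix.json schema may differ.
sample = json.loads("""
{
  "obligations": {
    "explainability":  {"jurisdictions": ["EU", "South Korea", "Colorado"]},
    "human-oversight": {"jurisdictions": ["EU", "China"]}
  }
}
""")

def jurisdictions_requiring(matrix: dict, obligation: str) -> list[str]:
    """List jurisdictions whose tracked regulations map to an obligation."""
    return matrix["obligations"].get(obligation, {}).get("jurisdictions", [])

print(jurisdictions_requiring(sample, "explainability"))
# ['EU', 'South Korea', 'Colorado']
```

Because the endpoints require no authentication, the same lookup works against the live data with a single unauthenticated GET in place of the inline sample.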
Sister Sites
EveryAILaw.com is part of a family of structured references from Snap Synapse:
- AI Capability Reference: Structured tracking of AI model capabilities, benchmarks, and performance across providers. The same approach applied to making sense of the fast-moving AI capability landscape.
- Siteline: For teams building the agentic web. Standards, protocols, and implementation guidance for AI-native web experiences.
Other Resources
This site focuses on structured, machine-readable obligation tracking. For complementary perspectives (editorial analysis, policy commentary, and interactive exploration), these resources are excellent:
- IAPP Global AI Law & Policy Tracker: Broad horizon scanning across jurisdictions with strong contextual commentary.
- White & Case AI Watch: In-depth legal analysis of regulatory approaches across core markets, with nuance on cross-domain overlaps.
- OECD AI Policy Observatory: Authoritative macro-level policy landscape, strategies, and cross-country benchmarking.
- Witness.ai AI Regulation Tracker: Interactive map-based explorer with business-impact framing and compliance timelines.
- FairNow Global AI Regulation Tracker: Practitioner-focused with applicability logic and operational governance workflows.
- Orrick US AI Law Tracker: Comprehensive US state-level AI law database with detailed bill tracking and citation links.
- TechTarget AI Legislation Tracker: Accessible overview for IT leaders and practitioners new to the regulatory landscape.
Contributing
Data corrections and feedback are welcome. Contact PAICE to report errors or suggest additions.
Disclaimer
Nothing on this site constitutes legal advice. This is a reference tool designed to help you track what's changing and understand when you may need to seek qualified legal counsel. Always consult the actual regulatory text and a qualified attorney for compliance decisions.