AI Regulation: EU AI Act, GDPR & Global Rules
If you remember one regulation from this course, make it the EU AI Act. It is the world's first comprehensive AI law, it applies to anyone selling AI to European users, and it is shaping policy globally. Combined with GDPR, NYC's bias audit law, and a wave of new state-level rules in the U.S., the regulatory landscape in 2026 is finally real — and it shows up in job descriptions across responsible-AI roles.
What You'll Learn
- The structure of the EU AI Act and its risk-based categories
- How GDPR intersects with AI (and what changes with the AI Act)
- A short tour of major AI laws in the U.S., U.K., and Asia-Pacific
- How to read any AI regulation in 10 minutes using a simple framework
The EU AI Act in One Diagram
The EU AI Act takes a risk-based approach: the higher the risk an AI system poses, the stricter the obligations.
| Risk Tier | Examples | Obligations |
|---|---|---|
| Unacceptable risk (prohibited) | Social scoring, manipulative AI, real-time biometric ID in public (with narrow exceptions) | Banned |
| High risk | Hiring AI, credit scoring, medical AI, education grading, critical infrastructure | Risk management, documentation, transparency, human oversight, registration |
| Limited risk (transparency) | Chatbots, deepfakes, emotion recognition, generative AI content | Must disclose AI involvement to users |
| Minimal risk | Spam filters, AI in video games | Voluntary codes of conduct |
If you ever build, deploy, or even use an AI system in any of those categories with EU users, the AI Act applies.
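The four tiers in the table above can be sketched as a simple lookup structure. This is illustrative only — the tier names and obligations come from the table, the examples are abbreviated, and nothing here is legal advice:

```python
# Illustrative sketch of the EU AI Act's four risk tiers.
# Tier names and obligations follow the table above; not legal advice.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "manipulative AI"],
        "obligation": "banned",
    },
    "high": {
        "examples": ["hiring AI", "credit scoring", "medical AI"],
        "obligation": "risk management, documentation, human oversight, registration",
    },
    "limited": {
        "examples": ["chatbots", "deepfakes"],
        "obligation": "disclose AI involvement to users",
    },
    "minimal": {
        "examples": ["spam filters", "AI in video games"],
        "obligation": "voluntary codes of conduct",
    },
}

def obligation_for(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("high"))
```

Representing the tiers as data, rather than prose, is also how many governance teams start an internal AI inventory: tag each system with a tier, then attach the obligations that follow from it.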
Who the AI Act Applies To
A common misconception is that the AI Act only applies to EU companies. It doesn't.
It applies to:
- Anyone placing AI systems on the EU market
- Anyone using AI systems whose output is used in the EU
- Providers, deployers, importers, distributors, and (in some cases) end users
So a U.S. startup selling AI hiring tools to a German company is squarely within the AI Act's reach. So is a U.K. consultancy whose AI-generated output reaches an EU customer.
Key Obligations You Should Know
For high-risk systems, the AI Act requires (broadly):
- Risk management system — ongoing identification and mitigation of risks
- Data governance — quality and bias assessment of training data
- Technical documentation — detailed records on how the system was built
- Record-keeping — automatic logs of system operation
- Transparency — users informed they are interacting with AI
- Human oversight — designed-in ability for humans to intervene
- Accuracy, robustness, cybersecurity — must meet specified standards
- Conformity assessment — third-party or self-assessment depending on category
- Registration — many high-risk systems must be entered in an EU database
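Internally, teams often track these obligations as a plain checklist. Here is a minimal sketch of that idea — the obligation names come from the list above, but the tracker itself is hypothetical, not an official compliance tool:

```python
# Hypothetical checklist for the high-risk obligations listed above.
# Status values are purely illustrative.
HIGH_RISK_OBLIGATIONS = [
    "risk management system",
    "data governance",
    "technical documentation",
    "record-keeping",
    "transparency",
    "human oversight",
    "accuracy, robustness, cybersecurity",
    "conformity assessment",
    "registration",
]

def outstanding(status: dict) -> list:
    """Return obligations not yet marked complete."""
    return [ob for ob in HIGH_RISK_OBLIGATIONS if not status.get(ob, False)]

# Example: only two obligations done so far.
status = {"transparency": True, "human oversight": True}
print(outstanding(status))
```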
If you ever see a job ad asking for someone who can "support technical documentation under the EU AI Act," this is what they are talking about.
GDPR + AI Act: How They Stack
GDPR (the EU's data privacy regulation, effective 2018) governs how personal data is collected, used, and stored. The AI Act sits on top — it governs the AI systems that often process that data.
Key GDPR concepts that hit AI hard:
- Article 22: People have the right not to be subject to solely automated decisions that significantly affect them.
- Right to explanation: Often interpreted to require some explanation of automated decisions.
- Data minimization: Only collect data you actually need.
- Right to erasure: Users can request data deletion — including from AI training datasets, in some interpretations.
A high-risk AI system in the EU must satisfy both GDPR and the AI Act. The same data point — a job applicant's CV — is subject to GDPR rules about processing and AI Act rules about how the AI scores it.
The U.S. Patchwork
The U.S. has no comprehensive AI law (as of mid-2026), but there is a fast-growing patchwork:
- NYC Local Law 144 — automated employment decision tools must undergo annual bias audits.
- Illinois AI Video Interview Act — disclose AI in video job interviews and get consent.
- Colorado AI Act (passed 2024, effective 2026) — the first comprehensive U.S. state AI consumer protection law, with EU-AI-Act-style obligations.
- California bills — SB 942 (AI transparency and content labeling) plus a stream of safety and watermarking proposals; SB 1047, the frontier-model safety bill, was vetoed in 2024 but still shapes the debate.
- FTC enforcement — using existing consumer protection authority for "AI deception" cases.
- Federal sector rules — HIPAA (health), GLBA (finance), ECOA (credit), Title VII (employment) all apply when AI is used in regulated decisions.
If you work in a U.S. AI role, expect to learn the two or three of these laws most relevant to your industry.
U.K., Canada, China, and Beyond
Brief tour:
- U.K. — pro-innovation, sector-by-sector approach; the AI Safety Institute (renamed the AI Security Institute in 2025) focuses on frontier model evaluation.
- Canada — AIDA (the Artificial Intelligence and Data Act) stalled when Bill C-27 died in Parliament; successor legislation is expected.
- China — binding rules on recommendation algorithms (2022), deep synthesis and content labeling (2023), and generative AI services (2023).
- Brazil, India, Japan, South Korea, Australia — all have AI strategies, several with binding rules in pilot stages.
Many of these laws borrow structure from the EU AI Act, which is why learning the AI Act first is so leveraged.
How to Read Any AI Regulation in 10 Minutes
Use this five-question framework:
- Who does it apply to? Builders, deployers, users, regulators?
- What does it cover? Specific industries? All AI? Only generative AI?
- What's banned vs regulated vs allowed?
- What's the enforcement mechanism? Fines? Bans? Audits?
- When does it take effect? Many AI laws have phased timelines.
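The five questions above can be captured as a reusable summary template. A minimal sketch — the class and field names are my own, not from any regulation:

```python
from dataclasses import dataclass

@dataclass
class RegulationSummary:
    """One-page skeleton for summarizing any AI regulation,
    following the five-question framework above."""
    who_it_applies_to: str
    what_it_covers: str
    banned_vs_regulated_vs_allowed: str
    enforcement_mechanism: str
    effective_dates: str

# Example, filled in for the EU AI Act using details from this lesson:
eu_ai_act = RegulationSummary(
    who_it_applies_to="providers, deployers, importers, distributors with EU-market reach",
    what_it_covers="all AI systems, tiered by risk",
    banned_vs_regulated_vs_allowed="four tiers: prohibited, high, limited, minimal",
    enforcement_mechanism="fines, bans, conformity assessments",
    effective_dates="phased timeline",
)
print(eu_ai_act.who_it_applies_to)
```

Filling in one of these per regulation gives you a comparable, side-by-side view of laws from different jurisdictions.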
You can drop any AI law's text into Claude or ChatGPT and ask:
"Summarize this regulation along these five dimensions: who it applies to, what it covers, what is banned vs regulated, the enforcement mechanism, and when it takes effect. Be specific. [paste the text]"
You will have a useful one-page summary in about ten minutes. Verify the key claims against the official text.
Why This Matters for Your Career
Companies are now hiring "Responsible AI" or "AI governance" specialists specifically to keep them compliant. Salaries are strong because supply is short. A junior candidate who can confidently say "I understand the EU AI Act's risk categorization and where my company's products would land" stands out enormously.
Add a line to your resume like:
"Familiar with EU AI Act risk tiers, GDPR Article 22, and U.S. state-level AI laws (NYC LL 144, Colorado AI Act). Completed structured analysis of [tool] under these frameworks."
That is much more concrete than "AI literate."
Hands-on: AI Act Risk Classification
Pick five AI tools or use cases you know — examples below — and classify each into AI Act risk tiers. Justify each classification in one sentence.
- A school's AI tool that grades essays
- A music streaming service's recommendation algorithm
- A bank's AI fraud-detection system that triggers account holds
- A children's storytelling app
- A real-time facial recognition system in a train station
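If you want a starting point, here is a scaffold with my own tentative tiers for the five use cases. These are suggestions for discussion, not authoritative legal classifications — note the uncertainty flagged on fraud detection, where the Act carves some fraud-detection uses out of the high-risk categories:

```python
# Tentative tier suggestions for the five exercise use cases.
# Guesses for discussion, not legal conclusions.
exercise = [
    ("school AI essay grading", "high",
     "education grading appears in the high-risk tier"),
    ("music recommendation algorithm", "minimal",
     "media recommendations pose little risk to rights or safety"),
    ("bank fraud detection triggering account holds", "uncertain",
     "fraud detection is carved out of some high-risk categories, "
     "but account holds affect access to services"),
    ("children's storytelling app", "limited",
     "generative AI content triggers transparency obligations"),
    ("real-time facial recognition in a train station", "unacceptable",
     "real-time public biometric ID is prohibited, with narrow exceptions"),
]

for use_case, tier, reason in exercise:
    print(f"{use_case}: {tier} ({reason})")
```

Replace my tiers and reasons with your own, then compare against the model's suggestions.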
Use Claude or ChatGPT as a sounding board:
"I'm classifying these five AI use cases under the EU AI Act risk tiers (unacceptable, high, limited, minimal). For each, suggest the most likely tier, the key reason, and any uncertainty. [paste the list]"
You now have a concrete portfolio piece showing regulatory fluency.
Key Takeaways
- The EU AI Act uses four risk tiers; "high risk" carries the heaviest obligations.
- It applies extraterritorially — EU users matter, not where you are based.
- GDPR sits underneath the AI Act; both apply together for personal-data-processing AI.
- The U.S. has a patchwork of state, city, and sector-specific AI rules.
- Use the five-question framework to read any AI regulation quickly.

