Applied Intelligence
Module 7: Data Privacy and Compliance

EU AI Act and Regulatory Landscape

The EU AI Act: first comprehensive AI regulation

The European Union's AI Act, which entered into force on August 1, 2024, represents the world's first comprehensive legal framework for artificial intelligence. Unlike sector-specific regulations, the AI Act applies horizontally across industries and establishes obligations based on risk levels rather than use cases. For enterprise development teams using AI coding tools, understanding this regulation is essential even for organizations outside the EU.

The Act's extraterritorial reach means it applies to any organization that:

  • Places AI systems on the EU market
  • Deploys AI systems in the EU
  • Uses AI systems whose outputs are used in the EU

A US-based development team using Claude Code or Codex to build software deployed to European customers may fall within scope, regardless of where the developers are located.

Implementation timeline

The AI Act implementation follows a phased approach:

Date                Milestone
August 1, 2024      AI Act enters into force
February 2, 2025    Prohibited AI practices banned; AI literacy obligations begin
August 2, 2025      General-Purpose AI (GPAI) model obligations apply
August 2, 2026      High-risk AI system rules apply; full enforcement begins
August 2, 2027      Existing GPAI models must achieve compliance

As of January 2026, organizations are past the first two enforcement dates. The prohibited practices ban and AI literacy requirements are now in effect. GPAI obligations are active. High-risk system rules take effect in August 2026.

Risk categories and AI coding tools

The AI Act establishes four risk tiers:

Unacceptable risk (prohibited): AI systems that manipulate human behavior, exploit vulnerabilities, enable social scoring, or provide real-time remote biometric identification in public spaces. These practices are banned outright.

High risk (Annex III): AI systems in specified domains including biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. These require conformity assessments, registration, and ongoing compliance obligations.

Limited risk: AI systems subject to transparency requirements, primarily chatbots and deepfake generators. Users must be informed they are interacting with AI.

Minimal risk: All other AI systems. No mandatory obligations beyond existing laws.

Where do AI coding tools fall? Claude Code, Codex, and GitHub Copilot are not listed in Annex III's high-risk categories. They would typically classify as limited risk (due to their conversational interfaces) or minimal risk.

Article 6(3) provides additional clarity. An AI system listed in Annex III is not considered high-risk if it:

  1. Performs a narrow procedural task
  2. Improves results of previously completed human activity
  3. Detects decision-making patterns without replacing human judgment
  4. Performs a preparatory task for human assessment

AI coding tools fit these exemptions. They perform narrow procedural tasks (code generation), improve human coding work, detect patterns without replacing decisions, and prepare code for human review. The critical qualifier: these exemptions never apply if the AI performs profiling of natural persons. Code generation does not constitute profiling.
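Read as a rule, the exemption applies when at least one of the conditions above holds and the system performs no profiling of natural persons. The sketch below encodes that reading purely for illustration; it is a simplification of the legal test, not legal advice.

```python
# Illustrative simplification of the Article 6(3) reading described above:
# an Annex III system escapes the high-risk classification if at least one
# exemption condition applies and it performs no profiling of natural persons.
# This is a sketch of the logic in the text, not legal advice.

def is_exempt_from_high_risk(
    narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    detects_patterns_without_replacing_judgment: bool,
    preparatory_task_for_human_assessment: bool,
    performs_profiling: bool,
) -> bool:
    if performs_profiling:
        return False  # the exemptions never apply when profiling is involved
    return any([
        narrow_procedural_task,
        improves_prior_human_activity,
        detects_patterns_without_replacing_judgment,
        preparatory_task_for_human_assessment,
    ])

# An AI coding assistant as characterized in this section:
print(is_exempt_from_high_risk(True, True, True, True, performs_profiling=False))  # True
```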

General-Purpose AI model obligations

The AI Act introduces specific requirements for General-Purpose AI (GPAI) models: foundation models capable of general competency across diverse tasks. Claude, GPT-4, and similar models underlying coding tools qualify as GPAI.

Since August 2025, all GPAI providers must:

Requirement                Details
Technical documentation    Maintain and provide to downstream deployers on request
Training data summary      Publicly disclose data sources using the mandatory AI Office template
Copyright compliance       Publish a copyright policy; respect robots.txt; avoid circumventing paywalls
Output safeguards          Mitigate copyright-infringing outputs

GPAI models trained with computational resources exceeding 10^25 FLOPs are presumed to have "systemic risk" and face additional requirements:

  • State-of-the-art safety and security frameworks
  • Model evaluations
  • Risk identification and mitigation throughout lifecycle
  • Incident reporting processes
  • Cybersecurity protections
  • Commission notification within two weeks of reaching the threshold
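As a rough illustration of the compute threshold above, training compute is often approximated with the rule-of-thumb 6 × parameters × training tokens. The sketch below uses that heuristic with hypothetical figures; neither the model size nor the token count refers to any real disclosed system.

```python
# Illustrative sketch: estimating training compute against the EU AI Act's
# 10^25 FLOP systemic-risk presumption. The 6 * N * D rule of thumb and the
# example figures are assumptions for illustration, not disclosed values.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold for GPAI systemic risk

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute using the common 6 * N * D heuristic."""
    return 6 * parameters * training_tokens

# Hypothetical model: 400B parameters trained on 10T tokens.
flops = estimated_training_flops(parameters=4e11, training_tokens=1e13)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)
```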

These obligations fall on GPAI providers (Anthropic, OpenAI, Google), not on developers using their tools. However, enterprises deploying AI coding tools should understand their providers' compliance status. Procurement due diligence should verify that the underlying GPAI provider meets these requirements.

Penalties

The AI Act establishes a three-tier penalty structure:

Violation                          Maximum Fine
Prohibited AI practices            €35 million or 7% of worldwide annual turnover
Non-compliance with obligations    €15 million or 3% of worldwide annual turnover
Supplying incorrect information    €7.5 million or 1% of worldwide annual turnover

For large enterprises, the percentage thresholds typically exceed fixed amounts. A company with €10 billion revenue faces up to €700 million for prohibited practice violations.

GPAI-specific penalties (€15 million or 3% of turnover) become enforceable in August 2026. SMEs and startups receive more favorable treatment: the lower of the fixed amount or the percentage applies.
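As a worked illustration of the tier structure described above, the sketch below computes the applicable cap for each violation type. It follows the description in this section (the percentage figure governs for large enterprises, the lower figure for SMEs and startups); the turnover value is hypothetical.

```python
# Illustrative sketch of the AI Act penalty caps described above.
# Each tier pairs a fixed amount in euros with a percentage of worldwide
# annual turnover. For large enterprises the percentage-based figure
# typically governs; for SMEs and startups the lower of the two applies,
# as described in the text.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "obligation_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    fixed, pct = FINE_TIERS[violation]
    percentage_based = pct * annual_turnover_eur
    return min(fixed, percentage_based) if is_sme else max(fixed, percentage_based)

# Hypothetical company with EUR 10 billion turnover (the example above).
print(max_fine("prohibited_practices", 10_000_000_000))               # 700,000,000.0
print(max_fine("prohibited_practices", 10_000_000_000, is_sme=True))  # 35,000,000.0
```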

GDPR intersection

The AI Act does not replace the General Data Protection Regulation. Organizations processing personal data with AI systems must comply with both frameworks.

The European Data Protection Board's Opinion 28/2024, adopted in December 2024, clarifies how GDPR applies to AI:

AI models are not automatically anonymous. For an AI model to be considered anonymous under GDPR, both conditions must be met:

  1. The likelihood of extracting personal data from training data must be insignificant
  2. The likelihood of obtaining personal data through queries must be insignificant

Large language models rarely meet this threshold.

Legitimate interest can serve as legal basis for AI development and deployment, but requires a three-step assessment:

  1. Pursuit of a legitimate, lawful, specific, and real interest
  2. Necessity: can the purpose be achieved without personal data?
  3. Balancing: do the data subject's rights outweigh the controller's interests?

Unlawful training has consequences. If an AI model was trained unlawfully on personal data, supervisory authorities may order corrective measures including fines, processing limitations, data erasure, or model retraining.

Enforcement is active. In December 2024, the Italian Garante fined OpenAI €15 million for processing personal data without adequate legal basis during ChatGPT's training. Clearview AI has accumulated over €90 million in GDPR fines across European jurisdictions.

For development teams, the practical implication is clear: avoid sending personal data to AI coding tools. GDPR applies to any personal data processed by the AI, regardless of the AI Act's risk classification.
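One practical safeguard is to scrub obvious personal data from prompts before they reach an AI coding tool. The sketch below is a minimal pre-submission filter; the regex patterns are simplistic assumptions and are not a substitute for proper data loss prevention tooling.

```python
import re

# Minimal illustrative pre-submission filter: redact obvious personal data
# before a prompt is sent to an AI coding tool. The patterns below are
# simplistic assumptions and will miss many forms of personal data; a real
# deployment would pair this with DLP tooling and human review.

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact_personal_data(prompt: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Fix the signup bug: user jane.doe@example.com, phone +49 30 1234567, cannot log in."
    print(redact_personal_data(raw))
```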

US regulatory landscape

The United States lacks comprehensive federal AI legislation equivalent to the EU AI Act. Instead, a patchwork of executive orders, agency guidance, and state laws creates the regulatory environment.

NIST AI Risk Management Framework

The National Institute of Standards and Technology's AI Risk Management Framework (AI RMF), released January 2023 and updated to version 2.0 in February 2024, provides voluntary guidance for AI governance. While not mandatory, the framework increasingly serves as a compliance benchmark.

The AI RMF's four core functions (Govern, Map, Measure, Manage) provide a structure for AI risk management. NIST also released the Generative AI Profile (NIST-AI-600-1) in July 2024, addressing risks specific to generative AI systems.

Federal contractors and organizations working with government agencies face increasing pressure to demonstrate NIST AI RMF alignment. FedRAMP's AI prioritization initiative, launched August 2025, fast-tracks authorization for AI tools that meet federal requirements. OpenAI, Google, and Perplexity are pursuing FedRAMP authorization for their enterprise AI offerings.

State-level regulations

Several US states have enacted AI-specific legislation:

Colorado AI Act (effective June 30, 2026): Requires risk management programs for high-risk AI systems, annual impact assessments, algorithmic discrimination safeguards, and disclosure requirements. Enforcement by the Colorado Attorney General.

California (multiple laws effective January 1, 2026):

  • SB 53: Large frontier AI developers (>$500M revenue) must publish frameworks for managing catastrophic risks
  • AB 2013: Public generative AI developers must publish training data information
  • AB 316: Defendants cannot claim AI acted autonomously to avoid liability
  • AB 489: Prohibits AI chatbots from presenting as licensed healthcare professionals

Texas TRAIGA (effective January 1, 2026): Healthcare providers must disclose AI use to patients. Government entities must provide notice when AI interacts with consumers. Prohibits AI systems designed to harm, discriminate, or infringe constitutional rights. Safe harbors exist for NIST AI RMF compliance.

Illinois HB 3773 (effective January 1, 2026): Prohibits discriminatory AI use in employment decisions. Requires notice to employees when AI is used for hiring, promotions, or discipline.

Federal preemption remains contested. Executive Order 14365, signed December 2025, directs federal agencies to identify "onerous" state AI laws and conditions federal funding on alignment with federal AI policy. However, an executive order cannot directly override existing state law; that requires Congressional action or court decisions. Organizations should continue complying with state AI laws pending legal resolution.

Sector-specific requirements

HIPAA: The proposed HIPAA Security Rule changes (January 2025) establish that AI tools accessing electronic protected health information (ePHI) must comply with HIPAA Privacy and Security Rules. ePHI used in AI training data, prediction models, and algorithms is protected. The final rule is expected May 2026. Civil penalties reach $50,000 per violation; criminal penalties for knowing violations include one to ten years imprisonment.

PCI-DSS: AI systems handling payment card data must comply with PCI DSS 4.0. The PCI Security Standards Council's AI guidance establishes that AI systems must not be trusted with high-impact secrets, cannot hold formal security roles, and must follow least-privilege access principles. AI handling cardholder data requires tokenization or single-use PANs rather than full PANs.
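To illustrate the point about keeping full PANs away from AI systems, the sketch below masks a card number to its first six and last four digits before it can appear in any AI-bound prompt or log. This is an illustrative transformation that mirrors common display-masking practice, not PCI-certified tokenization.

```python
# Illustrative sketch: mask a primary account number (PAN) before it can
# appear in any AI-bound prompt or log. Keeping at most the first six and
# last four digits mirrors common PCI DSS display-masking practice; real
# tokenization should be performed by a certified tokenization service.

def mask_pan(pan: str) -> str:
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 13 or len(digits) > 19:
        raise ValueError("Value does not look like a PAN")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```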

FedRAMP: Government agencies increasingly require FedRAMP authorization for AI tools. The FedRAMP 20x program provides expedited authorization paths for AI services meeting federal security requirements.

Building a compliance strategy

For enterprise development teams, navigating this regulatory landscape requires a structured approach:

Know your jurisdictional exposure. If your software serves EU customers, EU AI Act obligations likely apply. If your code processes personal data, GDPR applies. US state laws apply based on where you operate or serve customers.

Understand the supply chain. AI coding tools involve multiple parties: GPAI providers (Anthropic, OpenAI), tool developers, and deployers. Each has distinct obligations. Your procurement due diligence should verify compliance throughout the chain.

Classify your AI usage. Map how AI coding tools are used against regulatory risk categories. Code generation for internal development differs from AI embedded in customer-facing products.

Document everything. Regulatory enforcement increasingly requires audit trails. Document which tools are approved, how they are configured, what data can be processed, and how outputs are reviewed.
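A lightweight way to make this auditable is to keep a machine-readable registry of approved tools and their configuration. The sketch below shows one possible record format; the field names, tool name, and values are illustrative assumptions, not a standard schema.

```python
import json
from datetime import date

# Illustrative sketch of a machine-readable registry entry for an approved
# AI coding tool. Field names and values are assumptions chosen for this
# example, not a standard schema; adapt them to your own governance process.

approved_tool = {
    "tool": "example-ai-coding-assistant",   # hypothetical tool name
    "approved_by": "security-review-board",
    "approval_date": date(2026, 1, 15).isoformat(),
    "configuration": {
        "training_on_prompts_disabled": True,
        "data_residency": "EU",
    },
    "permitted_data": ["source code without personal data", "public documentation"],
    "prohibited_data": ["personal data", "ePHI", "cardholder data", "credentials"],
    "output_review": "mandatory human code review before merge",
    "next_review_due": date(2026, 7, 15).isoformat(),
}

print(json.dumps(approved_tool, indent=2))
```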

Stay current. The regulatory landscape continues evolving. EU AI Act implementing guidance arrives throughout 2026. US state laws are proliferating. International frameworks are emerging.

The final section of this module turns to building organizational policies that address these compliance requirements systematically.
