Applied Intelligence
Module 7: Data Privacy and Compliance

SOC 2 and ISO 42001 Compliance

SOC 2 as the enterprise baseline

When enterprise procurement evaluates AI coding tools, the first compliance question is usually simple: do you have a SOC 2 Type II report?

Service Organization Control 2 (SOC 2) is an attestation framework developed by the American Institute of Certified Public Accountants (AICPA). It examines whether a service organization's controls meet one or more Trust Services Criteria: Security (mandatory), plus optional criteria for Availability, Processing Integrity, Confidentiality, and Privacy. A Type II report covers controls over a period of time (typically six to twelve months), rather than the point-in-time snapshot of a Type I report.

Major AI coding tools have achieved SOC 2 Type II certification:

| Platform | SOC 2 Type II | Report Access |
| --- | --- | --- |
| Anthropic Claude | Certified | Trust portal (NDA required) |
| OpenAI API/ChatGPT Enterprise | Certified | Trust portal |
| GitHub Copilot Business/Enterprise | Certified | Enterprise Trust Center |
| Amazon Q Developer | Certified | AWS Artifact |
| Augment Code | Certified | Direct request |

Consumer tiers generally fall outside SOC 2 scope. OpenAI's certification covers the API Platform, ChatGPT Enterprise, ChatGPT Edu, and ChatGPT Team, not ChatGPT Free or Plus. GitHub Copilot's certification covers the Business and Enterprise tiers, not Individual. Procurement teams evaluating AI tools should verify that their specific deployment model falls within the vendor's certified scope.

What SOC 2 covers and what it does not

SOC 2 certification for an AI vendor provides assurance about the vendor's operational controls. Auditors examine access controls, change management, incident response, encryption, monitoring, and related security practices. For AI coding tools specifically, SOC 2 audits typically cover:

  • How customer prompts and code are transmitted and stored
  • Access controls preventing unauthorized access to customer data
  • Data retention and deletion processes
  • Encryption at rest and in transit
  • Audit logging and monitoring
  • Incident response procedures

SOC 2 does not cover:

  • The correctness or security of code the AI generates
  • Bias, fairness, or explainability of AI outputs
  • Whether generated code infringes intellectual property
  • AI-specific risks like hallucination or confabulation
  • Model behavior, training data provenance, or output validation

This distinction matters. A SOC 2 report confirms that the vendor handles your data securely. It says nothing about whether the code the AI produces is secure, correct, or legally unencumbered. The vendor's operational security is one concern; the quality and safety of AI output is a separate concern requiring different evaluation.

AI-specific audit focus areas

SOC 2 was not designed for AI systems. The framework predates modern AI capabilities and contains no controls specifically addressing machine learning or large language models. Auditors apply existing Trust Services Criteria to AI systems, but must adapt their approach.

Security criterion adaptations

For AI coding tools, the Security criterion (required for all SOC 2 reports) examines:

Access controls for models and inference endpoints. Who can invoke the AI? What authentication and rate limiting protect API endpoints? Are customer workspaces isolated from one another?

Training data protection. For tools that fine-tune on customer data, how is that data secured? What controls prevent training data from leaking into responses for other customers?

API and transmission security. Are prompts encrypted in transit? What encryption protects data at rest? How are API keys managed?
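Two of these transmission-security controls can also be enforced on the client side. The sketch below, under illustrative assumptions (the endpoint URL and environment-variable name are hypothetical, not from any vendor's documentation), refuses non-TLS endpoints and sources API keys from the environment rather than source code:

```python
import os
from urllib.parse import urlparse

def prepare_auth(endpoint: str, env_var: str = "AI_API_KEY") -> dict:
    """Check two transmission-security controls before calling an AI endpoint:
    TLS-only transport, and API keys sourced from the environment, never code.
    Illustrative sketch; endpoint and env-var names are assumptions."""
    if urlparse(endpoint).scheme != "https":
        # Prompts and code must be encrypted in transit
        raise ValueError("refusing non-TLS endpoint")
    key = os.environ.get(env_var)
    if not key:
        # Keys belong in a secret store or environment, never in source control
        raise RuntimeError(f"{env_var} not set")
    return {"Authorization": f"Bearer {key}"}
```

A wrapper like this does not replace the vendor's controls, but it gives auditors evidence that the customer side of the transmission path is also governed.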

Processing Integrity considerations

The Processing Integrity criterion, often included for AI tools, examines whether the system processes data completely, accurately, and in a timely manner. For AI systems, this raises distinctive questions:

  • How does the vendor validate that AI outputs are accurate?
  • What mechanisms detect and mitigate confabulation?
  • Are outputs logged and auditable?

Processing Integrity does not require AI outputs to be correct. It requires controls to exist that address accuracy concerns. A vendor might satisfy this criterion by documenting their output validation processes, even if those processes cannot guarantee correctness.
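One shape such a control can take is a tamper-evident audit log of AI interactions. This is a minimal sketch, not any vendor's actual implementation: each entry chains the hash of the previous entry so later edits are detectable, and only digests are stored, not raw prompts or code.

```python
import hashlib
import json
import time

def append_audit_entry(log: list, prompt: str, response: str) -> dict:
    """Append a tamper-evident record of one AI interaction (illustrative).

    Chaining prev_hash into each entry means altering or deleting an
    earlier record breaks every hash that follows it.
    """
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {
        "timestamp": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "prev_hash": prev_hash,
    }
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body
```

Storing digests rather than content keeps the log auditable without itself becoming a second copy of sensitive customer code.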

Confidentiality and Privacy criteria

AI coding tools that process potentially sensitive code typically include Confidentiality in their SOC 2 scope. Auditors examine:

  • Customer code protection from unauthorized disclosure
  • Separation between customer environments
  • Controls preventing code from appearing in other customers' responses
  • Training data isolation (if the tool fine-tunes on customer code)

The Privacy criterion applies when the AI processes personal information. For coding tools, this may arise when code contains PII: database schemas with customer data, configuration files with email addresses, or similar scenarios.

ISO 42001: the first AI management standard

ISO/IEC 42001, published in December 2023, is the world's first certifiable AI management system standard. Unlike SOC 2, ISO 42001 was designed specifically for AI systems and addresses risks that SOC 2 cannot cover.

The standard establishes requirements for an AI Management System (AIMS), a systematic approach to governing AI development and deployment within an organization. It follows the familiar ISO management system structure (Plan-Do-Check-Act) used in ISO 27001, ISO 9001, and other ISO standards.

What ISO 42001 requires

ISO 42001 contains ten clauses covering management system requirements:

Clauses 4-6 establish organizational context, leadership commitment, and planning. Organizations must define their AIMS scope, identify AI-related risks, and establish AI objectives.

Clause 6 also introduces the AI Impact Assessment: organizations must assess the potential consequences of their AI systems on individuals, groups, and society, including intended use, foreseeable misuse, and broader societal impacts. No equivalent requirement exists in SOC 2.

Clauses 7-8 cover support and operations: resources, competence, documentation, and operational procedures.

Clauses 9-10 address performance evaluation and continual improvement: monitoring, internal audits, management review, and corrective actions.

Annex A controls

ISO 42001 includes Annex A with 38 controls across nine domains. Organizations must document a Statement of Applicability explaining which controls apply and how they are implemented.

| Domain | Focus Areas |
| --- | --- |
| A.2 Policies | AI policy establishment, alignment with organizational policies |
| A.3 Internal Organization | AI roles and responsibilities, concern reporting |
| A.4 Resources | AI system documentation, data management including provenance |
| A.5 Lifecycle | Development, deployment, maintenance processes |
| A.6 Data | Quality, integrity, privacy considerations |
| A.7 Information | Transparency and communication with stakeholders |
| A.8 Responsible Use | Ethical use, fairness, accountability |
| A.9 Third Parties | Supply chain and customer considerations |

Several controls directly address gaps in SOC 2:

  • Data provenance: Organizations must document where training data came from
  • Fairness and bias: Controls address discrimination and bias in AI systems
  • Explainability: Organizations must be able to explain how AI outputs are generated
  • Impact assessment: Formal assessment of AI effects on individuals and groups
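The Statement of Applicability itself can be tracked as simple structured data. This sketch is illustrative: the control IDs, justifications, and field names are assumptions for the example, not drawn from the standard's text.

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    """One row of an ISO 42001 Statement of Applicability (illustrative)."""
    control_id: str      # an Annex A control identifier (hypothetical here)
    applicable: bool     # whether the control is in scope for the AIMS
    justification: str   # why the control is included or excluded
    implementation: str  # how the control is met, if applicable

def excluded(entries: list[SoAEntry]) -> list[str]:
    """Controls scoped out, which auditors will expect to see justified."""
    return [e.control_id for e in entries if not e.applicable]
```

Whatever the storage format, the key audit expectation is the same: every applicable control maps to an implementation, and every exclusion carries a written justification.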

Which AI coding tools are certified

ISO 42001 certification adoption is growing rapidly. As of early 2026, certified AI coding platforms include:

| Platform | ISO 42001 | Certification Date | Certification Body |
| --- | --- | --- | --- |
| Anthropic Claude | Certified | January 2025 | Schellman Compliance |
| Microsoft 365 Copilot | Certified | March 2025 | Mastermind |
| Azure OpenAI Service | Certified | 2025 | Mastermind |
| Google Gemini/Vertex AI | Certified | December 2024 | Not disclosed |
| Augment Code | Certified | August 2025 | Not disclosed |

Notable gaps exist. GitHub Copilot has SOC 2 and ISO 27001 certification but no ISO 42001 certification. OpenAI's direct API has extensive security certifications (SOC 2, ISO 27001, ISO 27017, ISO 27018, ISO 27701) but no ISO 42001 certification. Enterprises requiring AI-specific governance assurance should verify ISO 42001 status when evaluating tools.

SOC 2 versus ISO 42001

These frameworks complement rather than replace each other.

| Aspect | SOC 2 | ISO 42001 |
| --- | --- | --- |
| Focus | Operational security controls | AI governance and ethics |
| AI-specific controls | None | 38 in Annex A |
| Bias/fairness | Not addressed | Required controls |
| Explainability | Not addressed | Required capability |
| Impact assessment | Not required | Mandatory |
| Geographic recognition | Primarily North America | International |
| Regulatory alignment | General compliance | Aligns with EU AI Act |
| Certification type | Attestation report | ISO certification (3-year validity) |

For enterprises evaluating AI coding tools:

SOC 2 answers: Is the vendor handling our data securely?

ISO 42001 answers: Is the vendor managing AI responsibly?

Both questions matter. SOC 2 remains the baseline for enterprise procurement; without it, most security teams will not approve a tool. ISO 42001 provides additional assurance about AI-specific governance that SOC 2 cannot address.

The certification process

Understanding how these certifications work helps evaluate their meaning.

SOC 2 process

An independent CPA firm examines the organization's controls against Trust Services Criteria. Type I reports assess control design at a point in time. Type II reports assess control design and operating effectiveness over a period (typically six to twelve months).

The auditor issues an opinion on whether controls are suitably designed (Type I) or suitably designed and operating effectively (Type II). Reports typically include detailed control descriptions and test results.

Timeline: Initial certification takes three to six months of preparation, plus the audit period itself. Annual reports are standard practice.

ISO 42001 process

An accredited certification body audits the organization's AI Management System. Stage 1 audits examine documentation and readiness. Stage 2 audits examine implementation and effectiveness.

Successful organizations receive a certificate valid for three years. Surveillance audits occur annually. Recertification audits occur every three years.

Timeline: Six to twelve months for initial certification. Organizations with existing ISO 27001 certifications may achieve faster timelines due to overlapping requirements.

Combining certifications

Some auditors offer combined approaches. Schellman, for example, offers "SOC 2+ with ISO 42001 Annex A," stacking ISO 42001's AI-specific controls onto a SOC 2 examination. This provides security attestation (SOC 2) plus AI governance controls (ISO 42001 Annex A) in a single report.

For enterprises, this combination addresses both questions: secure data handling and responsible AI governance. When evaluating vendors, asking for both certifications or a combined report provides the most comprehensive assurance available.

Practical guidance

Certification status matters for tool selection. Procurement checklists should include:

  1. Verify SOC 2 Type II coverage for your deployment model (enterprise tier, specific features, regional availability)
  2. Request ISO 42001 certification for AI-specific governance assurance
  3. Review report scope to ensure it covers the services you will actually use
  4. Understand what certifications do not cover: code quality, IP risk, and output correctness remain your responsibility
  5. Document compliance posture for audit trails: which certifications were verified, when, and what was in scope
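The verification and documentation steps above can be captured as a simple audit-trail record. This is a sketch under assumptions: the vendor names, field names, and helper function are illustrative, not part of any framework's requirements.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class CertVerification:
    """Audit-trail record for one vendor certification check (illustrative)."""
    vendor: str
    framework: str             # e.g. "SOC 2 Type II" or "ISO/IEC 42001"
    scope_reviewed: str        # services/tiers the report actually covers
    covers_our_deployment: bool
    verified_on: date
    notes: str = ""

def needs_followup(records: list[CertVerification]) -> list[str]:
    """Vendors whose reports did not cover our actual deployment model."""
    return [r.vendor for r in records if not r.covers_our_deployment]
```

Keeping records like these answers the auditor's inevitable question, which certifications were checked, when, and against what scope, without relying on memory or scattered emails.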

Certifications provide assurance about vendor practices. They do not transfer responsibility for how you use AI-generated code. The next sections examine regulatory requirements that apply regardless of vendor certification status.
