Applied Intelligence
Module 7: Data Privacy and Compliance

Vendor Indemnification Policies

What indemnification means

Previous sections established that AI-generated code creates legal uncertainty. Who owns it? Could it inherit licensing obligations? These questions remain legally unresolved.

Vendor indemnification offers enterprises a practical hedge against this uncertainty. When vendors indemnify customers, they agree to defend against third-party intellectual property claims and pay any resulting judgments or settlements. The risk shifts from the customer to the vendor, within specified limits.

Understanding those limits matters more than understanding the coverage. Indemnification is not a blanket protection. Every major vendor carves out exclusions. Failing to understand these exclusions creates false confidence, which is worse than no confidence at all.

Anthropic's enterprise indemnification

Anthropic announced IP indemnification for commercial customers in January 2024. The protection extends to Claude API, Claude for Work (Team plans), Claude Enterprise, Amazon Bedrock access, Google Cloud Vertex AI access, Claude Gov, Claude for Education, and Claude Code used under commercial agreements.

What Anthropic covers

Anthropic defends customers from third-party claims alleging that authorized, paid use of Claude services violates intellectual property rights, including copyright, patent, trade secret, and trademark.

The indemnification explicitly covers training data. If a claim arises from "data Anthropic has used to train a model that is part of the Services," Anthropic's indemnification applies. Given the Bartz v. Anthropic settlement discussed in the previous section ($1.5 billion for training on pirated books), this coverage matters.

Anthropic commits to paying any approved settlements or judgments. The general liability caps in their terms do not apply to indemnification obligations.

What Anthropic excludes

The exclusions narrow the protection substantially:

  • Customer modifications: If you modify Claude's output, the modification is not covered.
  • Combinations with non-Anthropic technology: Integrating Claude output with other tools or services voids coverage for the combined result.
  • Customer-provided prompts, inputs, or data: Claims arising from your prompts are your responsibility.
  • Known infringement: Using output you know (or should know) infringes others' rights is not covered.
  • Patented inventions in output: If Claude generates code that happens to implement a patented algorithm, that is not covered.
  • Trademark claims: Using output in trade or commerce creates trademark exposure Anthropic does not cover.
  • Acceptable Use Policy violations: Any use violating Anthropic's AUP or use restrictions is excluded.

The customer modification exclusion deserves attention. Standard development practice involves modifying AI-generated code (refactoring, extending, integrating it). Any such modification potentially moves the code outside indemnification coverage. In other words: the moment you touch the code, you may have voided the warranty.

The exclusion for customer-provided prompts matters too. If your prompt triggers infringement (directing Claude to reproduce copyrighted content, for instance), Anthropic does not cover the resulting claim.

Procedural requirements

Indemnification has procedural conditions. The indemnified party must promptly notify Anthropic of any claim. Reasonable cooperation in the defense is required. Anthropic controls defense strategy, including selection of counsel and settlement negotiations. Failure to provide prompt notice or cooperate may affect Anthropic's obligations.

Consumer terms excluded

Free tier users, Pro users, and Max users operating under consumer terms do not receive indemnification. Only commercial and enterprise agreements include IP protection.

Microsoft's Copilot Copyright Commitment

Microsoft announced the Copilot Copyright Commitment in September 2023, extending existing IP indemnification to cover AI-generated outputs. The commitment became effective October 2023 for paid commercial Copilot services.

Covered products

Microsoft's commitment covers paid commercial versions of:

  • GitHub Copilot (Business and Enterprise tiers)
  • Microsoft 365 Copilot
  • Azure OpenAI Service
  • Microsoft Security Copilot
  • Dynamics 365 Copilot
  • Power Platform Copilot
  • Microsoft Copilot Studio (with conditions)

Free products and consumer versions are explicitly excluded.

Required mitigations

Unlike Anthropic's relatively straightforward terms, Microsoft's commitment requires customers to implement specific safety measures. Coverage is contingent on these mitigations.

Universal requirements (all Azure OpenAI customers):

  1. Metaprompt: Include a metaprompt directing the model to prevent copyright infringement
  2. Testing and evaluation: Conduct evaluations to detect third-party content output; retain results for potential claims
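A metaprompt is simply a system-level instruction prepended to every request. The sketch below shows the mechanic, assuming the OpenAI-style chat message format used by Azure OpenAI; the metaprompt wording here is illustrative, not Microsoft's mandated text, so consult the Required Mitigations page for the actual requirement.

```python
# Illustrative metaprompt directing the model away from copyright
# infringement. The wording is an example, NOT Microsoft's required
# text -- check the Required Mitigations page for specifics.
COPYRIGHT_METAPROMPT = (
    "To avoid copyright infringement, do not reproduce copyrighted "
    "content such as song lyrics, book excerpts, or licensed source "
    "code. Summarize or generate original material instead."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the metaprompt as a system message to every request."""
    return [
        {"role": "system", "content": COPYRIGHT_METAPROMPT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Write a sorting function in Python.")
```

The point is that the mitigation lives in application code: every request path that reaches the model must include the directive, which is why it belongs in a shared helper rather than individual call sites.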

GitHub Copilot specific:

Duplicate Detection filtering must be set to "Block" mode. When enabled, Copilot checks each suggestion (matching on windows of roughly 150 characters) against code indexed from public GitHub repositories. Matching suggestions are blocked.

Important: Microsoft's defense obligations do not apply if Duplicate Detection filtering is not set to "Block" mode. This is an active configuration choice, not a default.
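The filtering concept itself is straightforward: compare a suggestion's character windows against an index of public code and suppress matches. The toy sketch below illustrates the idea only; it is not GitHub's implementation, which runs server-side when the setting is "Block".

```python
# Toy illustration of duplicate-detection-style filtering: block any
# suggestion whose ~150-character window appears in an index of public
# code. NOT GitHub's actual implementation.
WINDOW = 150

def is_blocked(suggestion: str, public_code_index: set[str]) -> bool:
    """Return True if any WINDOW-length slice of the suggestion
    matches indexed public code."""
    text = suggestion.strip()
    if len(text) < WINDOW:
        return text in public_code_index
    return any(
        text[i:i + WINDOW] in public_code_index
        for i in range(len(text) - WINDOW + 1)
    )

index = {"def quicksort(arr): ..."}
print(is_blocked("def quicksort(arr): ...", index))  # True: exact match
print(is_blocked("def merge(a, b): ...", index))     # False
```

Sliding-window matching explains why trivially reformatted copies can still be caught, and why the filter is a mitigation rather than a guarantee: semantically equivalent but rewritten code passes through.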

Azure OpenAI code generation:

  • Protected material code model: filter or annotate mode required
  • Prompt Shield (jailbreak model): filter mode required
  • Content flagged by asynchronous filters after generation is not covered unless license-compliant
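In annotate mode, flagged content arrives with filter metadata attached rather than being removed, so the application must inspect it. The sketch below assumes a response shape like Azure OpenAI's documented `content_filter_results` field; treat the exact field names as an assumption to verify against the current API reference.

```python
# Sketch: inspect a chat completion choice for a protected-material
# flag. Field names ("content_filter_results",
# "protected_material_code") follow Azure OpenAI's documented response
# shape, but verify them against the current API reference.
def protected_code_detected(choice: dict) -> bool:
    """Return True if the response annotates the output as matching
    protected (public, possibly licensed) source code."""
    results = choice.get("content_filter_results", {})
    flag = results.get("protected_material_code", {})
    return bool(flag.get("detected"))

choice = {
    "content_filter_results": {
        "protected_material_code": {"filtered": False, "detected": True}
    }
}
print(protected_code_detected(choice))  # True
```

A flagged-but-unfiltered output like this is exactly the case the coverage exclusion targets: shipping it without checking the annotation (and the license of the matched source) can fall outside the commitment.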

What Microsoft excludes

Microsoft excludes claims arising from:

  • Customer input data (only outputs are covered)
  • Customer modifications to output
  • Uses the customer knows or should know will infringe
  • Trademark claims related to use in trade or commerce
  • Defamation or false light claims
  • Content flagged by asynchronous filters (unless license-compliant)

Critically, Microsoft excludes coverage when customers:

  • Disable, evade, disrupt, or interfere with content filters or safety systems
  • Provide input they do not have appropriate rights to use
  • Intentionally attempt to generate infringing materials

The hidden requirements

Microsoft publishes a detailed "Required Mitigations" page that evolves over time. For new services or features, new requirements are posted and take effect at or following launch. For existing services, customers have six months from publication to implement new mitigations.

Enterprises relying on Microsoft's commitment should review this page quarterly. A policy compliant in 2024 may not satisfy 2026 requirements.

OpenAI's Copyright Shield

OpenAI announced Copyright Shield at DevDay in November 2023. The program provides indemnification for ChatGPT Enterprise customers and API customers.

What OpenAI covers

OpenAI agrees to "indemnify, defend, and hold Customer harmless against any liabilities, damages and costs (including reasonable attorneys' fees) payable to a third party arising out of a Claim alleging that the Services infringe any third-party IP Right."

The indemnity is not subject to liability caps. Coverage includes both the services and output generated by customers.

What OpenAI excludes

The service-level indemnity does not apply to claims arising from:

  • Combination of services with products not provided by OpenAI
  • Modification of services by parties other than OpenAI
  • Customer content
  • Customer applications

The output-specific indemnity adds further exclusions:

  • Knowledge of infringement: The customer or end users knew or should have known the output was infringing.
  • Disabled safety features: The customer disabled, ignored, or did not use relevant filtering or safety features.
  • Modified or combined output: Output was modified or combined with non-OpenAI products.
  • Unlawful inputs: The customer did not have the right to use input or fine-tuning files.
  • Trademark claims: Use of output in trade or commerce is excluded.
  • Third-party offerings: Allegedly infringing output derived from third-party content is excluded.
  • Beta services: Offered "as-is" and explicitly excluded.

Zero Data Retention customers

OpenAI offers Zero Data Retention (ZDR) for qualifying enterprise customers. ZDR does not affect indemnification coverage; the same terms apply. However, ZDR customers may have stronger positions in defending against claims that their specific usage contributed to infringement, since no data persists for training or monitoring.

What indemnification does not cover

All three major vendors share common exclusions. Understanding what is not covered matters as much as understanding what is.

Patent claims on output

None of the major vendors cover patent infringement in generated output. If Claude generates code that happens to implement a patented algorithm, Anthropic does not indemnify against patent claims. The same applies to Microsoft and OpenAI.

This gap is worth dwelling on. Code functionality can infringe patents regardless of how the code was written. The generation method (human or AI) does not affect patent exposure. Enterprises face the same patent risk with AI-generated code as with human-written code, but with less visibility into why a particular implementation was chosen.

Trademark claims

Vendor indemnification universally excludes trademark claims related to commercial use of output. If AI-generated code incorporates trademarks, or if outputs are used in ways that create trademark confusion, vendors do not cover the exposure.

Customer modifications

All vendors exclude customer modifications from coverage. The practical implication: the moment a developer edits AI-generated code, indemnification becomes uncertain. The more substantial the modification, the weaker the protection argument.

Combinations with other technology

Integrating AI output with non-vendor technology voids coverage. For enterprises using multiple AI tools, or integrating AI output into larger systems, this exclusion is nearly always triggered.

Intentional misuse

Vendors exclude protection when customers deliberately attempt to generate infringing content. Prompting an agent to "write code in the style of [specific project]" or "reproduce the implementation from [specific library]" could constitute intentional misuse.

Practical guidance

Vendor indemnification shifts risk. It does not eliminate it. Treat indemnification as one layer of defense, not a complete solution.

Maintain required mitigations. For GitHub Copilot, enable Duplicate Detection filtering in Block mode. For Azure OpenAI, implement required metaprompts and content filters. Audit configurations quarterly against vendor requirements.

Document usage patterns. If a claim arises, demonstrating compliance with vendor terms is the first hurdle. Log AI tool configurations, prompt patterns, and output handling practices.
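One lightweight way to act on the documentation point is a structured audit log of each AI-assisted change. A minimal sketch follows; the field names are illustrative choices, not a vendor requirement.

```python
# Minimal structured audit-log entry for AI tool usage. All field
# names here are illustrative, not mandated by any vendor.
import json
from datetime import datetime, timezone

def log_ai_usage(tool: str, config: dict, prompt_summary: str,
                 output_handling: str) -> str:
    """Build one JSON log line recording how an AI tool was
    configured and how its output was handled."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "config": config,                    # e.g. filter modes in effect
        "prompt_summary": prompt_summary,
        "output_handling": output_handling,  # reviewed, modified, etc.
    }
    return json.dumps(entry)

line = log_ai_usage(
    tool="github-copilot",
    config={"duplicate_detection": "block"},
    prompt_summary="generate pagination helper",
    output_handling="human-reviewed and modified",
)
```

Recording the filter configuration alongside each usage matters because coverage is conditional on those settings: a log showing Duplicate Detection was in "Block" mode at the time of generation directly supports a later claim of compliance.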

Understand exclusion boundaries. Before assuming coverage, map your actual usage against vendor exclusions. If you modify AI output (you probably do), combine it with other technology (you probably do), or use it commercially (you certainly do), coverage may be narrower than you expect.
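That mapping can be made explicit as a checklist. The simplified sketch below paraphrases the common exclusions discussed in this section; it is a planning aid, not any vendor's contract language or a legal determination.

```python
# Simplified checklist of the common exclusions discussed in this
# section. Paraphrased summary, NOT contract language -- a planning
# aid, not a legal test.
COMMON_EXCLUSIONS = {
    "modified_output": "Customer modifications to AI output",
    "combined_with_other_tech": "Combination with non-vendor technology",
    "known_infringement": "Use the customer knew or should know infringes",
    "trademark_use": "Trademark exposure from commercial use",
    "patent_in_output": "Patented inventions implemented in output",
}

def likely_excluded(usage: dict) -> list[str]:
    """Return descriptions of exclusions this usage likely triggers."""
    return [desc for key, desc in COMMON_EXCLUSIONS.items()
            if usage.get(key)]

usage = {"modified_output": True, "combined_with_other_tech": True}
for reason in likely_excluded(usage):
    print(reason)
```

Running this against honest answers for a typical engineering workflow usually flags the first two items immediately, which is the section's point: assume narrower coverage than the headline commitment suggests.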

Negotiate where possible. Enterprise agreements may allow negotiation of indemnification terms. Larger customers may secure expanded coverage, though vendors resist broadening exclusions.

Layer protections. Indemnification works best alongside other practices: IP scanning, duplication detection, documentation of human contribution, and code review. No single protection is sufficient.

Terms change

Vendor indemnification terms are not static. As litigation outcomes emerge and legal clarity develops, vendors adjust their terms. Microsoft's Required Mitigations page updates regularly. Anthropic's commercial terms have been revised multiple times since launch. OpenAI's services agreement evolves.

Review vendor terms at least annually, ideally quarterly. Terms acceptable at agreement signing may change before renewal. Understanding the current state of coverage, not the terms as remembered from initial review, is necessary for accurate risk assessment.

The vendor indemnification landscape reflects the broader legal uncertainty around AI-generated content. Vendors absorb some risk, but they carve out substantial exclusions to limit exposure. The protection is real but narrow. Know where the boundaries are.
