Applied Intelligence
Module 7: Data Privacy and Compliance

Building Organizational AI Policies

The policy gap

AI adoption in software development has outpaced governance. According to ISACA's 2025 AI Pulse Poll, only 28% of organizations have formal AI policies, up from 15% in 2024 but still low given that 81% of employees report using AI at work.

The consequences show up in breach statistics. Shadow AI, meaning employees using unapproved tools, accounts for 20% of all data breaches. Each shadow AI breach costs an average of $670,000 more than traditional incidents. Organizations that cannot see their AI data flows (86% by some estimates) cannot enforce policies around them.

For development teams, this gap creates daily uncertainty. Which tools are approved? What data can be shared? How should AI-generated code be reviewed? Without clear answers, developers either avoid useful tools entirely or use them without safeguards.

Policy scope and structure

A workable AI policy for software development covers six areas:

Permitted uses and approved tools. Which AI coding tools are authorized, under what conditions, and for what purposes. This section answers the question developers actually ask: "Can I use Claude Code for this task?"

Data handling requirements. Which data tiers can be processed by which tools. This connects to the data classification framework from earlier sections.

Code review and quality standards. How AI-generated code enters the codebase: review requirements, attribution practices, quality gates.

Intellectual property protections. Ownership, licensing compliance, and indemnification coverage. This translates the IP considerations from sections 7.6-7.9 into operational requirements.

Compliance alignment. How policy requirements map to regulatory obligations: EU AI Act, GDPR, HIPAA, SOC 2, and applicable state laws.

Exception handling. How deviations are requested, approved, documented, and audited. No policy survives contact with every edge case.

Key policy elements

Approved tool catalog

Maintain a registry of vetted AI coding tools, specifying for each:

| Element | Purpose |
| --- | --- |
| Tool name and version | Identify exactly what is approved |
| Approved use cases | Clarify what tasks the tool may perform |
| Data tier restrictions | Specify maximum data sensitivity permitted |
| Configuration requirements | Define required settings (enterprise tier, privacy modes) |
| Procurement status | Confirm licensing, indemnification, BAA where required |
| Review date | Ensure periodic reassessment |

A tool in this catalog has passed procurement due diligence, security review, and legal assessment. Tools not in the catalog require explicit approval before use.
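One way to keep the catalog machine-readable is a small registry record per tool. The sketch below is illustrative only: the field names, example values, and review date are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedTool:
    """One entry in the approved AI tool catalog (illustrative fields only)."""
    name: str
    version: str
    approved_use_cases: list[str]
    max_data_tier: str               # e.g. "public", "internal", "confidential"
    required_config: dict[str, str]  # enterprise tier, privacy modes, etc.
    procurement_status: str          # licensing, indemnification, BAA noted here
    next_review: date                # ensures periodic reassessment

# Placeholder entry; values would come from the organization's own assessments.
catalog = [
    ApprovedTool(
        name="Claude Code",
        version="enterprise",
        approved_use_cases=["code generation", "refactoring", "code review"],
        max_data_tier="internal",
        required_config={"tier": "enterprise", "training_opt_out": "verified"},
        procurement_status="licensed, indemnification confirmed",
        next_review=date(2026, 6, 1),
    ),
]
```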

One complication: AI capabilities are now embedded in many development tools. IDE features, code search, documentation generators, and testing frameworks all incorporate AI to varying degrees. The policy needs to address embedded AI, not just standalone agents.

Data handling rules

Translate the data classification framework into specific AI tool guidance:

Public data: Unrestricted AI tool usage permitted.

Internal data: Enterprise-tier AI tools only, with appropriate access controls. No public or consumer-tier services.

Confidential data: Limited to specifically approved tools with verified data handling practices. Exclude from AI training. Require audit logging.

Restricted data: AI tool usage prohibited unless specifically approved by exception process with security and legal sign-off.

These rules must be technically enforceable, not just policy statements. Configure permissions.deny patterns in Claude Code settings. Set up content exclusion in GitHub Copilot (noting its limitations as documented in section 7.4). Implement pre-commit hooks that block sensitive file transmission.
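A minimal sketch of the pre-commit idea follows, assuming a hypothetical list of restricted path patterns; real enforcement would be tuned to the organization's classification scheme and the specific tool's configuration hooks.

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: block commits that stage files matching restricted
path patterns. The patterns below are placeholders; substitute the
organization's own data classification mapping."""
import fnmatch
import subprocess
import sys

RESTRICTED_PATTERNS = ["secrets/*", "*.env", "customer_data/*", "*.pem"]

def staged_files() -> list[str]:
    # List files staged for the current commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    blocked = [
        path for path in staged_files()
        if any(fnmatch.fnmatch(path, pat) for pat in RESTRICTED_PATTERNS)
    ]
    if blocked:
        print("Commit blocked: restricted files staged:")
        for path in blocked:
            print(f"  {path}")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```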

Review requirements

How AI-generated code enters the codebase:

Mandatory human review. All AI-generated code requires review by a qualified developer before merging. The reviewer attests that the code:

  • Meets functional requirements
  • Follows project conventions
  • Contains no obvious security vulnerabilities
  • Does not introduce unwanted dependencies
  • Is appropriately licensed

Attribution standards. How is AI assistance documented in commit messages, pull requests, or code comments? Some projects require explicit disclosure; others treat AI as equivalent to any other tool. The policy should state which approach the organization uses.

Security scanning. AI-generated code passes through the same security scanning pipeline as human-written code. Consider additional scrutiny for patterns AI tools commonly miss: authentication edge cases, injection vulnerabilities, cryptographic implementation errors.

Test coverage. AI-generated code requires tests, whether AI-generated or human-written. The policy may specify minimum coverage thresholds or require tests for any new functionality.
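If the organization opts for explicit disclosure under the attribution standards above, a CI step could require every commit to declare whether AI assistance was used via an agreed commit-message trailer. The `AI-Assisted:` trailer name below is a hypothetical convention, not an established standard.

```python
"""CI sketch: require an attribution trailer on every commit message.
The trailer name and policy are assumptions for illustration."""
import subprocess

TRAILER = "AI-Assisted:"

def commit_message(ref: str = "HEAD") -> str:
    # Fetch the full message of the commit under review.
    out = subprocess.run(
        ["git", "log", "-1", "--format=%B", ref],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def has_attribution(message: str) -> bool:
    return any(line.strip().startswith(TRAILER) for line in message.splitlines())

if __name__ == "__main__":
    if not has_attribution(commit_message()):
        raise SystemExit("Missing attribution trailer (e.g. 'AI-Assisted: yes' or 'AI-Assisted: no').")
```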

Exception processes

Policies that cannot bend will break. Build exception handling into the framework:

Request process. Define who can request exceptions, what information must be provided, and how requests are submitted. A lightweight form captures the use case, risk assessment, proposed safeguards, and time limitation.

Approval authority. Specify who can approve different categories of exceptions. Low-risk deviations might require manager approval. High-risk exceptions (processing restricted data with AI tools) might require security team and legal review.

Documentation requirements. Every exception is recorded with rationale, approver, expiration date, and conditions. Exceptions without expiration become permanent policy gaps.

Audit trail. Exception records are subject to periodic review. If the same exception is repeatedly granted, consider updating the policy.
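One way to keep exception records auditable is to store them as structured data with a mandatory expiration date; the field names below are illustrative, not a required schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PolicyException:
    """A single recorded policy exception (illustrative fields only)."""
    requester: str
    use_case: str
    risk_assessment: str
    safeguards: list[str]
    approver: str
    approved_on: date
    expires_on: date      # no open-ended exceptions
    conditions: str = ""

    def is_active(self, today: date | None = None) -> bool:
        today = today or date.today()
        return today <= self.expires_on

def expired(exceptions: list[PolicyException]) -> list[PolicyException]:
    """Surface lapsed exceptions for the periodic audit review."""
    return [e for e in exceptions if not e.is_active()]
```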

Governance structure

A policy document without governance behind it is just a PDF nobody reads. Making AI governance real requires organizational structure.

AI governance committee

Form a cross-functional body responsible for AI policy.

Composition:

  • Engineering leadership (technical implementation authority)
  • Information security (risk assessment, technical controls)
  • Legal and compliance (regulatory requirements, contractual obligations)
  • Privacy (data protection, personal data handling)
  • Procurement (vendor assessment, contract negotiation)
  • Business unit representatives (use case expertise)

Responsibilities:

  • Approve and maintain AI policies
  • Review and decide exception requests above defined thresholds
  • Assess new AI tools for organizational use
  • Monitor regulatory developments and update policies accordingly
  • Review AI-related incidents and near-misses
  • Report to executive leadership and board (where applicable)

Cadence: Monthly meetings address routine matters such as tool approvals, exception reviews, and policy clarifications. Quarterly reviews assess policy effectiveness and regulatory alignment. Emergency sessions convene for incidents or urgent regulatory changes.

Ownership and accountability

Every policy element requires an owner:

| Policy Element | Typical Owner |
| --- | --- |
| Approved tool catalog | Engineering leadership + Security |
| Data handling rules | Information security + Privacy |
| Review requirements | Engineering leadership |
| IP protections | Legal |
| Compliance alignment | Compliance + Legal |
| Exception handling | Governance committee |

Ownership means responsibility for keeping the element current, responding to questions, and proposing updates when circumstances change.

Executive oversight

AI governance now attracts board-level attention. Nearly half of Fortune 100 companies disclosed AI risks as part of board oversight in 2025, triple the year before. McKinsey research found that CEO engagement correlates with business value from AI initiatives.

For development teams, this means AI policies may face executive scrutiny. Document the policy rationale, risk assessments, and compliance alignment in terms executives can understand. Be prepared to explain both what AI coding tools enable and what controls make their usage acceptable.

Implementation approach

Phased rollout

Avoid the "publish and pray" approach where policies appear with no operational support.

Phase 1: Monitoring. Deploy AI governance tooling in observation mode. Log AI tool usage patterns without blocking. Understand baseline behavior before enforcing restrictions.

Phase 2: Soft enforcement. Enable blocking for critical policies (restricted data, unapproved tools in production code) while continuing to monitor lower-risk categories. Notify users of policy violations without blocking routine work.

Phase 3: Full enforcement. Activate all policies with automated blocking and response workflows. By this phase, users understand the requirements and tooling supports compliance.
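One way to stage this in governance tooling is a single enforcement-mode switch consulted on every policy decision. The mode names below mirror the three phases; the function and its behavior are a sketch, not a specific product's API.

```python
from enum import Enum

class EnforcementMode(Enum):
    MONITOR = "monitor"  # Phase 1: log only, never block
    SOFT = "soft"        # Phase 2: block critical policies, warn on the rest
    FULL = "full"        # Phase 3: block all violations

def handle_violation(mode: EnforcementMode, critical: bool) -> str:
    """Decide what to do with a detected policy violation."""
    if mode is EnforcementMode.MONITOR:
        return "log"
    if mode is EnforcementMode.SOFT:
        return "block" if critical else "notify"
    return "block"
```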

Training and awareness

Policies are ineffective if developers do not know they exist. ISACA found that only 22% of organizations train all employees on AI. Development teams require targeted training.

Policy awareness. What tools are approved? What data restrictions apply? How do I request an exception?

Practical guidance. How do I configure Claude Code to respect organizational policies? What does proper AI-assisted code review look like?

Regulatory context. Why do these policies exist? What are the compliance implications?

Training is not a one-time event. As tools and regulations evolve, training follows. Consider requiring annual acknowledgment of AI acceptable use policies.

Technical controls

Policy statements mean nothing without technical enforcement.

Network controls. Block access to unapproved AI services at the network layer. This prevents shadow AI more effectively than policy documents.

Endpoint monitoring. Detect AI tool installations and browser extensions. Alert security teams to unapproved tool usage.

Data loss prevention. Configure DLP tools to identify sensitive data in AI tool traffic. Block or alert on attempts to transmit confidential or restricted data.

Audit logging. Log AI tool usage for compliance and incident investigation. Retain logs according to organizational retention policies and regulatory requirements.

These controls are not optional. Only 17% of organizations have technical controls capable of preventing employees from uploading confidential data to public AI tools. The remaining 83% rely on training and policy acknowledgment alone.
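As a rough sketch of the DLP and audit-logging idea, a gateway in front of AI tool traffic might screen outbound payloads for obvious sensitive patterns and log every decision. The patterns and log format below are placeholders; production DLP products are considerably more sophisticated.

```python
import json
import logging
import re
from datetime import datetime, timezone

# Placeholder patterns; a production DLP policy would be far broader.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

audit_log = logging.getLogger("ai_tool_audit")

def screen_outbound(payload: str, tool: str, user: str) -> bool:
    """Return True if the payload may be sent to the AI tool; log the decision."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(payload)]
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "user": user,
        "blocked": bool(hits),
        "matched_patterns": hits,
    }
    audit_log.info(json.dumps(record))
    return not hits
```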

Review cadence

AI policy is not a one-time effort. Build regular review cycles into the governance framework.

Quarterly portfolio review. Assess approved tools against current capabilities and organizational needs. Remove deprecated tools. Evaluate new tools for addition.

Annual comprehensive review. Assess policy effectiveness against metrics. Update for regulatory changes. Incorporate lessons from incidents and near-misses.

Triggered reviews. Significant events prompt off-cycle assessment:

  • New regulations or enforcement actions
  • Major AI tool updates or new capabilities
  • Security incidents involving AI tools
  • Organizational changes affecting AI usage

Track policy effectiveness through measurable metrics:

| Metric | Purpose |
| --- | --- |
| Shadow AI detection rate | Are unapproved tools being used? |
| Policy violation frequency | Are data handling rules being followed? |
| Exception request volume | Is the policy too restrictive? |
| Incident count | Are AI-related security events occurring? |
| Audit findings | Are compliance gaps identified? |

Metrics without action are vanity measures. Each metric needs an owner, a threshold for concern, and a response process when thresholds are exceeded.
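A sketch of that pairing follows; the owners and numeric thresholds are hypothetical and would be tuned to the organization's risk appetite.

```python
# Hypothetical owners and thresholds for illustration only.
METRIC_THRESHOLDS = {
    "shadow_ai_detections_per_month": {"owner": "security", "max": 5},
    "policy_violations_per_month": {"owner": "security", "max": 10},
    "exception_requests_per_quarter": {"owner": "governance_committee", "max": 20},
    "ai_related_incidents_per_quarter": {"owner": "security", "max": 0},
    "open_audit_findings": {"owner": "compliance", "max": 0},
}

def breaches(observed: dict[str, int]) -> list[str]:
    """Return the metrics whose observed values exceed their concern threshold."""
    return [
        name for name, rule in METRIC_THRESHOLDS.items()
        if observed.get(name, 0) > rule["max"]
    ]
```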

Starting from zero

For organizations without existing AI policies, a minimal viable policy addresses immediate risks while comprehensive governance develops.

Day one requirements:

  1. Declare which AI coding tools are permitted (even if the list is "enterprise-tier Claude Code and Codex only")
  2. Prohibit processing restricted or confidential data with any AI tool pending further assessment
  3. Require human review for all AI-generated code entering production
  4. Designate a temporary owner for AI tool questions

30-day goals:

  1. Complete data classification mapping for AI tool usage
  2. Configure technical controls for approved tools
  3. Document exception request process
  4. Brief development teams on interim policy

90-day goals:

  1. Establish governance committee
  2. Complete full policy documentation
  3. Deploy monitoring and enforcement tooling
  4. Conduct training for all development staff

This timeline is aggressive but achievable. Organizations that delay AI governance face increasing risk as AI tool usage expands without guardrails.

Policy as enablement

The purpose of AI policy is not to prevent AI usage. It is to make responsible usage possible. Policies that make AI tools impossible to use drive developers toward shadow AI, creating far more risk than controlled adoption.

Good policies:

  • Clarify what is permitted, not just what is prohibited
  • Reduce friction for approved use cases
  • Provide guidance rather than just restrictions
  • Enable developers to use AI tools confidently, knowing they are operating within organizational bounds

The goal is not compliance theater. The goal is letting development teams benefit from AI coding tools while protecting the organization from unacceptable risks.

Module 7 has covered the technical, legal, and regulatory landscape of AI tool security and compliance. The exercise that follows applies these concepts to a practical compliance audit scenario.
