Attribution Practices and Conventions
Who gets credit?
AI coding tools disagree on whether to credit themselves.
Claude Code adds a Co-Authored-By trailer by default.
Aider appends "(aider)" to the author name and includes its own trailer.
GitHub Copilot adds nothing; the code appears as if the developer wrote every line.
Cursor, Windsurf, Tabnine, and most other tools stay silent.
Teams trying to track AI-assisted commits cannot rely on git history alone. Codebases mix attributed and unattributed AI contributions. Compliance audits find incomplete records. Without a standard, each organization defines and enforces its own policy.
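What that tracking looks like in practice, as a minimal shell sketch: grep the log's trailers for known bot addresses. The two addresses below come from the tools named above; the count necessarily misses anything unattributed.

```sh
# Rough audit: count commits whose Co-authored-by trailer names a known
# AI bot address. Catches only the attributed subset.
git log --format='%(trailers:key=Co-authored-by,valueonly)' |
  grep -c -i -e 'noreply@anthropic.com' -e 'noreply@aider.chat'
```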
The disclosure gap
Research published in late 2025 analyzed 14,300 GitHub commits across 7,393 repositories where developers mentioned AI tools.
| Tool | Explicit Attribution Rate |
|---|---|
| Claude | 80.5% |
| ChatGPT | 12.2% |
| Copilot | 9.0% |
| Cursor | 8.6% |
| CodeWhisperer | 1.4% |
| Tabnine | 1.3% |
Claude's high rate comes from default-on attribution: the tool actively adds a trailer unless the developer disables it. Copilot's 9% represents deliberate developer action: someone manually adding a co-author line.
That gap affects real decisions. Code reviewers assess PRs differently when they know AI generated portions. Maintainers of open source projects evaluate contributions with context about their origin. Legal teams determining IP ownership need accurate records of who or what produced each component.
In early 2024, explicit attribution hovered near zero. By September 2025, it peaked at 58.9% before stabilizing around 40%. Growing awareness helps, but voluntary attribution remains spotty.
How tools handle attribution today
Claude Code adds two elements by default. The commit message includes "Generated with Claude Code" and a trailer:
```
feat: add user authentication

Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
```

GitHub parses this trailer and displays Claude in the commit's co-authors list. Configure attribution through settings:
```json
{
  "includeCoAuthoredBy": false,
  "gitAttribution": false
}
```

Setting includeCoAuthoredBy to false removes the trailer. Setting gitAttribution to false removes all attribution, including the "Generated with Claude Code" line.
Aider modifies the git author metadata, appending "(aider)" to your name:
```
Author: Jane Developer (aider)
```

It also adds a detailed trailer:

```
Co-authored-by: aider (openrouter/anthropic/claude-sonnet-4) <noreply@aider.chat>
```

This captures the specific model used, which helps for debugging or audits.
Configure the behavior through command-line flags: --no-attribute-author, --no-attribute-committer, and --no-attribute-co-authored-by.
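Applied together, the three flags suppress all of aider's attribution mechanisms. A minimal invocation, assuming aider is installed:

```sh
# Disable author suffix, committer suffix, and co-author trailer
aider --no-attribute-author --no-attribute-committer --no-attribute-co-authored-by
```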
GitHub Copilot Coding Agent inverts the typical pattern.
When the agent creates commits autonomously, it sets itself as the primary author and adds the supervising developer via a Co-authored-by trailer.
On squash-and-merge, this becomes awkward: the agent becomes the sole author of record, and the human appears only in the squashed commit message body.
Workarounds include manually amending commit authors or adding explicit co-author lines before merging.
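The amend workaround is a one-liner, sketched here with the developer identity from the earlier examples:

```sh
# Reassign authorship of the squash commit to the supervising developer
git commit --amend --author="Jane Developer <jane@company.com>" --no-edit
```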
Most other tools (Cursor, Windsurf, Tabnine, CodeWhisperer) generate commit messages but add no attribution. Developers wanting records must add trailers manually.
The Co-authored-by problem
Using Co-authored-by for AI has drawn criticism, and the objections have merit.
The trailer originated to credit human collaborators: pair programmers, reviewers who provided substantial feedback, contributors who shared code. The format expects a name and email identifying a person who could, in theory, sign off on the work.
When Claude uses <noreply@anthropic.com>, it adopts the syntax but violates the semantic intent.
The email doesn't identify anyone who can be contacted, credited, or held accountable.
It cannot execute a Developer Certificate of Origin.
Tools parsing commit trailers to identify contributors now need special handling for AI entries.
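Git's own trailer parser illustrates the problem: it returns AI entries indistinguishable from human co-authors, so any consumer needs its own filtering rules.

```sh
# Parse the trailers of the latest commit into key/value lines;
# AI entries come back mixed in with human co-authors.
git show -s --format='%B' HEAD | git interpret-trailers --parse
```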
One developer proposal suggests a dedicated trailer:
```
AI-assistant: Claude Code v1.0 (Claude Sonnet 4.5)
```

This captures tool name, version, and model without misusing human-centric metadata. The format also accommodates multi-model workflows:

```
AI-assistant: OpenCode v1.0.203 (plan: Claude Opus 4.5, edit: Claude Sonnet 4.5)
```

The Linux kernel community, evaluating AI-assisted contributions, proposed using Co-developed-by instead of Co-authored-by.
Their explicit restriction: AI cannot use Signed-off-by because that trailer legally certifies the commit under the Developer Certificate of Origin.
No standard has emerged. Teams choose an approach and apply it consistently.
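Nothing in git needs to change to support a custom trailer: Git 2.32 and later can attach arbitrary trailers at commit time, so a team adopting the proposal could standardize on something like:

```sh
# Attach the proposed trailer at commit time (requires Git 2.32+)
git commit -m "feat: add user authentication" \
  --trailer "AI-assistant: Claude Code v1.0 (Claude Sonnet 4.5)"
```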
Tiered attribution
Not all AI assistance warrants the same disclosure.
Tab completion that fills in a variable name differs from an agent that writes an entire module. A tiered system matches attribution to contribution level:
| Trailer | AI Contribution | When to Use |
|---|---|---|
| Assisted-by: | 0-33% | Developer wrote the code; AI provided suggestions or completions |
| Co-authored-by: | 34-66% | Collaborative work; substantial contributions from both |
| Generated-by: | 67-100% | AI produced the majority of the code with minimal human intervention |
The percentages are guidelines, not measurements. The judgment call belongs to the developer. Did you direct the work, review each change, and make substantive edits? That's assisted. Did you describe a feature and accept the generated implementation with minor tweaks? That's generated.
A fourth tier covers commit messages:
```
Commit-generated-by: GitHub Copilot
```

This indicates the AI wrote the commit message but made no code changes, which is useful for tracking even when code attribution isn't warranted.
Pair each tier with Signed-off-by or Reviewed-by to establish human oversight:
```
Generated-by: Claude Code (Claude Sonnet 4.5)
Reviewed-by: developer@company.com
Signed-off-by: Jane Developer <jane@company.com>
```

This documents both the AI's contribution and the human's validation.
Documenting human contribution for IP
Copyright law in most jurisdictions requires human authorship. The U.S. Copyright Office has stated that works generated entirely by AI without human creative input are not copyrightable. Prompts alone do not establish authorship; you cannot copyright output simply because you typed the instruction that produced it.
Human contributions that can be protected include:
- Original expression perceptible in the output
- Creative selection, coordination, or arrangement of AI-generated material
- Substantive modifications to AI output
For ASD, documentation matters. When disputes arise over IP ownership, the developer who can demonstrate creative contribution has stronger standing than one who cannot.
What to document:
Keep records of your creative process. Preserve prompts, iterations, and refinements. Note where you rejected AI suggestions and wrote code yourself. Document architectural decisions that shaped what the AI produced. If you heavily edited generated code, the diff between initial output and final commit shows your contribution.
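One way to make that diff easy to produce, sketched here as a hypothetical workflow, is to commit the raw AI output before touching it:

```sh
# Snapshot the unedited AI output first...
git commit -m "chore: unedited AI-generated draft of auth module"
git tag ai-draft
# ...then edit, test, and commit the human revision...
git commit -am "feat: rework generated auth flow"
# ...so the diff itself documents the human contribution
git diff ai-draft HEAD
```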
Version control as evidence:
Commit history serves as a contemporaneous record. Frequent, well-described commits establish a timeline of human involvement. Squashing everything into a single commit obscures the development process. When IP ownership matters (open source projects with contributor agreements, work-for-hire disputes, patent filings), preserved history provides evidence.
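A short log query, using an illustrative file path, shows the kind of timeline this produces:

```sh
# Dated, per-author record for one file (src/auth.py is hypothetical);
# --follow keeps the history intact across renames
git log --follow --date=short --format='%ad %an %h %s' -- src/auth.py
```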
Registration requirements:
The U.S. Copyright Office requires disclosure of AI-generated content in registration applications. Applicants must explain the human author's contributions and exclude AI-generated portions that are more than minimal. Do not list AI tools as authors or co-authors. Similar disclosure requirements are emerging in other jurisdictions.
Enterprise policies:
Organizations handling sensitive IP should establish clear policies:
- What AI tools are approved for use
- What code may be submitted to AI services
- What attribution practices are required
- How to document human contribution
- When to seek legal review
Clear records of what came from where protect everyone.
What to do
For individuals:
- Use Assisted-by or Generated-by trailers for substantial AI contributions
- Pair with Signed-off-by to indicate human review
- Skip attribution for trivial autocomplete: variable names, import statements, syntax completion
- Keep prompts and iterations if working on IP-sensitive code
For teams:
- Establish a policy before widespread adoption
- Choose a consistent attribution format and configure tools to enforce it
- Set thresholds for required attribution (many teams use the 50% guideline)
- Include attribution checks in code review (a hook sketch that enforces trailer pairing follows this list)
- Document the policy in CLAUDE.md or project contributing guidelines
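A minimal commit-msg hook sketch; the trailer names and tool list are illustrative, not a standard. It enforces the pairing described under tiered attribution: any commit carrying an AI trailer must also carry a human sign-off.

```sh
#!/bin/sh
# Hypothetical commit-msg hook: reject AI-attributed commits that lack
# a human Signed-off-by. Tool names in the pattern are illustrative.
msg_file="$1"
if grep -qiE '^(Assisted-by|Generated-by|Co-authored-by):.*(claude|aider|copilot|cursor)' "$msg_file"; then
  grep -q '^Signed-off-by:' "$msg_file" || {
    echo "AI-attributed commit is missing a Signed-off-by trailer" >&2
    exit 1
  }
fi
```

A hook cannot detect undisclosed AI use; it can only keep disclosed attribution consistent with policy, which is why review discipline still matters.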
For open source maintainers:
- Add AI disclosure requirements to CONTRIBUTING.md
- Consider PR template checkboxes asking contributors to declare AI usage
- Decide organizational stance on AI-generated contributions
- Document what level of human involvement is expected
Without a universal standard, these decisions fall to each team. Consistency within a project matters more than which specific format is chosen. A documented policy, configured tools, and review discipline produce cleaner history than hoping developers remember to attribute.