Earlier today I ran a review of AI attribution adoption on the KubeVirt release-1.8 branch against the KubeVirt AI Contribution Policy. The full report is available here, but I wanted to share some of the highlights and takeaways in this post.
## Background
KubeVirt adopted an AI Contribution Policy asking contributors to disclose AI-assisted work using git trailers such as `Assisted-by:`, `Co-authored-by:`, or `Generated-by:`. The policy emphasises transparency, human oversight, and community review rather than mandating specific workflows. This review covers the commit range `release-1.7..release-1.8` (the v1.8.0 development cycle) to see how well the policy is being adopted and what the data tells us about AI-assisted contributions.
## Overview
| Metric | Count |
|---|---|
| Total commits | 1,089 |
| Merge commits | 351 |
| Non-merge commits | 738 |
| Commits with AI attribution | 76 |
| AI-attributed share (of non-merge) | 10.3% |
| Distinct contributors using AI | 15 |
| DCO (Signed-off-by) compliance | 76/76 (100%) |
10.3% of non-merge commits now carry AI attribution, with 15 distinct
contributors using AI tooling. Every single AI-attributed commit has a valid
Signed-off-by, so DCO compliance is perfect.
## AI Tools Used
| Tool | Trailer Instances | Authors |
|---|---|---|
| Claude (all variants) | 70 | 11 |
| Cursor | 14 | 7 |
Claude dominates at ~83% of all AI attribution instances, with the majority of commits using either Claude Code or the Claude web interface. Cursor accounts for the remainder.
98.7% of AI-attributed commits come from Red Hat engineers, likely reflecting both Red Hat’s large contribution share and potentially earlier internal adoption of the policy.
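Aggregating 17 raw trailer variants into per-tool counts like the table above requires normalizing the values first. A sketch of the kind of mapping involved (the heuristics here are my own, not the report's exact rules):

```python
from collections import Counter

def normalize_tool(trailer_value: str) -> str:
    """Map a raw trailer value to a tool family for aggregation."""
    v = trailer_value.lower()
    if "claude" in v:   # 'Claude', 'Claude Opus 4.5', 'claude-4.5-opus', ...
        return "Claude"
    if "cursor" in v:   # 'Cursor <cursoragent@cursor.com>'
        return "Cursor"
    return "Other"

values = [
    "Claude <noreply@anthropic.com>",
    "Claude Opus 4.5 <noreply@anthropic.com>",
    "claude-4.5-opus",
    "Cursor <cursoragent@cursor.com>",
]
print(Counter(normalize_tool(v) for v in values))
# Counter({'Claude': 3, 'Cursor': 1})
```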
## PR Quality: AI vs Non-AI
This is the most interesting part of the review. I pulled metrics for 35 AI-attributed PRs and 315 non-AI PRs merged during the cycle via the GitHub API.
### Time to Merge
| Metric | AI PRs | Non-AI PRs |
|---|---|---|
| Median | 7.5 days | 8.9 days |
| Mean | 14.3 days | 22.9 days |
AI PRs merge slightly faster on median. When controlled for PR size, medium PRs (50-200 lines) with AI attribution merge notably faster (8.7 vs 13.0 days median), while small and large PRs show no significant difference.
### PR Size
| Metric | AI PRs | Non-AI PRs |
|---|---|---|
| Median additions | 88 | 24 |
| Median deletions | 17 | 11 |
| Median changed files | 5 | 3 |
AI-attributed PRs skew larger, with 65% at size L or above. This suggests AI tooling is being used for substantial work rather than trivial changes. 7 of the 35 AI PRs implement approved VEPs, including features like Containerpath Volumes and Passt as a beta core networking backend.
### Changes Requested
| Rounds | AI PRs | Non-AI PRs |
|---|---|---|
| 0 | 32 (91%) | 288 (91%) |
| 1+ | 3 (9%) | 27 (9%) |
The rate of PRs receiving formal “changes requested” reviews is identical at 9% for both AI and non-AI PRs. AI-attributed contributions are not generating more reviewer pushback than human-only contributions.
### Review Intensity
One metric that warrants monitoring is review depth on large PRs. When looking at comments per 100 lines changed:
| Size Bucket | AI PRs | Non-AI PRs |
|---|---|---|
| Small (<50 lines) | 35.4 | 100.0 |
| Medium (50-200 lines) | 15.4 | 15.0 |
| Large (200+ lines) | 2.5 | 6.2 |
For medium PRs the review intensity is essentially identical. For large PRs, AI-attributed changes receive fewer comments per line (2.5 vs 6.2). This could indicate cleaner code or it could suggest reviewers spend less time per line on large AI-generated diffs. This is worth tracking over future releases.
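For concreteness, the metric and the size buckets used in the table can be sketched as follows (whether "lines changed" means additions plus deletions is my assumption, as are the exact bucket boundaries beyond what the table labels show):

```python
def review_intensity(comments: int, additions: int, deletions: int) -> float:
    """Review comments per 100 lines changed (additions + deletions)."""
    lines = additions + deletions
    return round(100 * comments / lines, 1) if lines else 0.0

def size_bucket(additions: int, deletions: int) -> str:
    """Bucket a PR by total lines changed, mirroring the table above."""
    lines = additions + deletions
    if lines < 50:
        return "Small"
    if lines <= 200:
        return "Medium"
    return "Large"

# A 300-line PR with 8 review comments:
print(size_bucket(250, 50), review_intensity(8, 250, 50))  # Large 2.7
```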
## Trailer Format: The Main Issue
While the policy is being adopted, the format of the attribution trailers is all over the place. There are 17 distinct trailer value formats across only 76 commits:
| Format | Count | Notes |
|---|---|---|
| `Assisted-By: Claude <noreply@anthropic.com>` | 21 | Matches policy example |
| `Assisted-by: Claude <noreply@anthropic.com>` | 19 | Matches policy (case variant) |
| `Co-authored-by: Cursor <cursoragent@cursor.com>` | 7 | Auto-added by Cursor |
| `Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>` | 5 | Includes model name |
| `Assited-by: Claude Sonnet 4.5 <noreply@anthropic.com>` | 4 | Typo in trailer name |
| `Assisted-by: claude-4.5-opus` | 4 | Missing email, uses model slug |
| … | … | 12 more variants |
Variants differ in whether they include the model name, use a model ID slug, or omit the email address, and in one case the trailer name is misspelled as `Assited-by`. The `Generated-by:` trailer defined in the policy was never used at all.
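Classifying those deviations is mechanical once a canonical form is chosen. A sketch, assuming one possible canonical format based on the policy example in the table above (the real policy may define it differently):

```python
import re

# One possible canonical form, matching the policy example above:
#   Assisted-by: <Tool> <email>
CANONICAL = re.compile(r"^Assisted-by: [A-Z][\w .-]* <[^<>@\s]+@[^<>@\s]+>$")

def check_trailer(line: str) -> str:
    """Classify a trailer line against the canonical format."""
    if CANONICAL.match(line):
        return "ok"
    if re.match(r"^Assited-by:", line):
        return "typo in trailer name"
    if "<" not in line:
        return "missing email"
    return "non-canonical"

print(check_trailer("Assisted-by: Claude <noreply@anthropic.com>"))          # ok
print(check_trailer("Assited-by: Claude Sonnet 4.5 <noreply@anthropic.com>"))  # typo in trailer name
print(check_trailer("Assisted-by: claude-4.5-opus"))                         # missing email
```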
## Recommendations
Based on this review I’d suggest the project consider:

- **Standardize trailer formats** - Define 1-2 canonical formats and provide a `git interpret-trailers` alias or commit hook to enforce them.
- **Clarify model name inclusion** - Currently ~30% of Claude attributions include the model variant, ~70% don’t. The policy should take a position.
- **Add CI validation** - A prow check that validates AI attribution trailer format would catch typos and non-standard formats before merge.
- **Monitor review depth on large AI PRs** - The lower comment-per-line rate on large AI PRs isn’t necessarily a problem today but should be tracked over time.
- **Continue tracking these metrics across releases** - This release-1.8 baseline can be compared against future releases to identify trends.
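The first recommendation could also be enforced locally. A hypothetical `commit-msg` hook sketch (the accepted pattern and the decision to only check trailer-like lines are my own choices, not anything the policy specifies):

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/commit-msg sketch: reject malformed AI trailers.
import re
import sys

# Accept the policy's trailer keys with a name and an email address.
VALID = re.compile(
    r"^(Assisted|Generated|Co-authored)-[Bb]y: .+ <[^<>@\s]+@[^<>@\s]+>$"
)
# Lines that look like an AI-attribution attempt (including the known typo).
SUSPECT = re.compile(r"^(assisted|assited|generated)-by:", re.IGNORECASE)

def bad_lines(message: str) -> list[str]:
    """Return attribution-like lines that don't match the accepted format."""
    return [
        line for line in message.splitlines()
        if SUSPECT.match(line) and not VALID.match(line)
    ]

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        bad = bad_lines(f.read())
    for line in bad:
        print(f"malformed AI attribution trailer: {line!r}", file=sys.stderr)
    sys.exit(1 if bad else 0)
```

A prow-based CI check implementing the third recommendation could reuse the same `bad_lines` logic against each PR commit.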
## Conclusion
The overall picture is positive. Contributors are voluntarily adopting KubeVirt’s AI Contribution Policy, DCO compliance across AI-attributed commits is perfect, and the data shows no measurable quality difference between AI-attributed and non-AI PRs. AI tooling is being used for substantial work across core subsystems, not just boilerplate. The main area for improvement is standardizing the trailer format to make attribution data more consistent and machine-parseable.
The full report with all the raw data is available here.