I’ve been using Claude Code heavily over the past few months and one thing that kept bugging me was how hard it is to find and resume old conversations. The built-in --resume picker works but it’s minimal — no filtering, no metadata at a glance, and no way to search across projects.
So I built cctv — a terminal UI for browsing and resuming Claude Code conversations from the local filesystem.
Why bother resuming?
Claude Code conversations accumulate context as you work — files discussed, decisions made, bugs investigated. Starting fresh throws all of that away. Resuming a session means:
- No re-explaining — Claude already knows your codebase, what you tried, and what’s left
- Cheaper — resumed sessions carry their prompt cache forward, so Claude doesn’t re-read everything from scratch
- Better coherence — pick up hours or days later and the conversation still knows what was agreed
What it does
cctv reads Claude Code’s local storage (~/.claude/) and presents all your sessions in a searchable, filterable list. Each session shows its summary, project, git branch, linked PRs, message count, and when it was last active. Running sessions are highlighted.
Pressing Enter on a session launches claude --resume — cctv suspends, Claude takes over the terminal, and when you’re done cctv comes back. You can also pop open a stats view showing token usage and cache hit rates, or drill into the detail view for prompt history and model info.
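To make the “reads Claude Code’s local storage” part concrete, here’s a hedged sketch of what scanning session files could look like. The layout assumed here (JSONL files under a `projects/` directory, one file per session) is an assumption for illustration; the real on-disk format used by Claude Code may differ, and cctv itself is not written in Python.

```python
import json
from pathlib import Path

# Hypothetical sketch only: assumes sessions are stored as JSONL files laid
# out as <root>/projects/<project>/<session-id>.jsonl. The actual storage
# format may differ.
def scan_sessions(root: Path) -> list[dict]:
    sessions = []
    for path in sorted(root.glob("projects/*/*.jsonl")):
        messages = [json.loads(line) for line in path.read_text().splitlines() if line.strip()]
        sessions.append({
            "project": path.parent.name,       # directory name as the project
            "session_id": path.stem,           # file name as the session id
            "message_count": len(messages),
        })
    return sessions
```

A real scanner would also pull out summaries, branches, and timestamps from the message payloads; this only shows the shape of the idea.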
Filtering
The filter bar supports regex and field-specific prefixes:
```
project:kubevirt$             # exact project name
branch:^feature/              # branches starting with feature/
pr:enhancements#242           # by PR repo or number
project:backend branch:main   # multiple terms ANDed
```
There’s also a non-interactive cctv list command with --json output for scripting, and all the same filter flags (--project, --branch, --pr, --cwd, --pwd).
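To illustrate the matching semantics described above, here’s a hedged Python sketch. This is not cctv’s actual implementation (and cctv is not written in Python); it just shows how field-prefixed regex terms can be ANDed together.

```python
import re

# Illustrative sketch of the filter semantics: each whitespace-separated term
# is either "field:regex" or a bare regex matched against the session summary,
# and all terms must match (AND).
def matches(session: dict, query: str) -> bool:
    for term in query.split():
        field, sep, pattern = term.partition(":")
        if sep and field in session:
            value = session[field]
        elif sep:
            return False  # field-specific term, but this session lacks the field
        else:
            value, pattern = session.get("summary", ""), term
        if not re.search(pattern, value):
            return False
    return True
```

A real implementation would also need the `pr:repo#number` form and case-folding; both are omitted here for brevity.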
This was built around my own workflow so I’ve almost certainly missed use cases. If you have ideas for new filters, views, or features — or if you spot something that doesn’t work with your Claude Code setup — issues and PRs are very welcome.
Earlier today I ran a review of AI attribution adoption on the KubeVirt
release-1.8 branch against the KubeVirt AI Contribution
Policy.
The full report is available
here,
but I wanted to share some of the highlights and takeaways in this post.
Background
KubeVirt adopted an AI Contribution
Policy
asking contributors to disclose AI-assisted work using git trailers such as
Assisted-by:, Co-authored-by:, or Generated-by:. The policy emphasises
transparency, human oversight, and community review rather than mandating
specific workflows. This review covers the commit range
release-1.7..release-1.8 (the v1.8.0 development cycle) to see how well the
policy is being adopted and what the data tells us about AI-assisted
contributions.
Overview
| Metric | Count |
|---|---|
| Total commits | 1,089 |
| Merge commits | 351 |
| Non-merge commits | 738 |
| Commits with AI attribution | 76 |
| AI-attributed share (of non-merge) | 10.3% |
| Distinct contributors using AI | 15 |
| DCO (Signed-off-by) compliance | 76/76 (100%) |
10.3% of non-merge commits now carry AI attribution, with 15 distinct
contributors using AI tooling. Every single AI-attributed commit has a valid
Signed-off-by, so DCO compliance is perfect.
AI Tools Used
| Tool | Trailer Instances | Authors |
|---|---|---|
| Claude (all variants) | 70 | 11 |
| Cursor | 14 | 7 |
Claude dominates at ~83% of all AI attribution instances, with the majority of
commits using either Claude Code or
the Claude web interface. Cursor accounts for the remainder.
98.7% of AI-attributed commits come from Red Hat engineers, likely reflecting Red Hat’s large overall contribution share and perhaps earlier internal adoption of the policy.
PR Quality: AI vs Non-AI
This is the most interesting part of the review. I pulled metrics for 35
AI-attributed PRs and 315 non-AI PRs merged during the cycle via the GitHub API.
Time to Merge
| Metric | AI PRs | Non-AI PRs |
|---|---|---|
| Median | 7.5 days | 8.9 days |
| Mean | 14.3 days | 22.9 days |
AI PRs merge slightly faster on median. When controlled for PR size, medium PRs
(50-200 lines) with AI attribution merge notably faster (8.7 vs 13.0 days
median), while small and large PRs show no significant difference.
PR Size
| Metric | AI PRs | Non-AI PRs |
|---|---|---|
| Median additions | 88 | 24 |
| Median deletions | 17 | 11 |
| Median changed files | 5 | 3 |
AI-attributed PRs skew larger, with 65% at size L or above. This suggests AI
tooling is being used for substantial work rather than trivial changes. 7 of the
35 AI PRs implement approved
VEPs, including features like
Containerpath Volumes and Passt as a beta core networking backend.
Changes Requested
| Rounds | AI PRs | Non-AI PRs |
|---|---|---|
| 0 | 32 (91%) | 288 (91%) |
| 1+ | 3 (9%) | 27 (9%) |
The rate of PRs receiving formal “changes requested” reviews is identical at 9%
for both AI and non-AI PRs. AI-attributed contributions are not generating more
reviewer pushback than human-only contributions.
Review Intensity
One metric that warrants monitoring is review depth on large PRs. When looking at
comments per 100 lines changed:
| Size Bucket | AI PRs | Non-AI PRs |
|---|---|---|
| Small (<50 lines) | 35.4 | 100.0 |
| Medium (50-200 lines) | 15.4 | 15.0 |
| Large (200+ lines) | 2.5 | 6.2 |
For medium PRs the review intensity is essentially identical. For large PRs,
AI-attributed changes receive fewer comments per line (2.5 vs 6.2). This could
indicate cleaner code or it could suggest reviewers spend less time per line on
large AI-generated diffs. This is worth tracking over future releases.
Trailer Format: The Main Issue
While the policy is being adopted, the format of the attribution trailers
is all over the place. There are 17 distinct trailer value formats across
only 76 commits:
| Format | Count | Notes |
|---|---|---|
| `Assisted-By: Claude <noreply@anthropic.com>` | 21 | Matches policy example |
| `Assisted-by: Claude <noreply@anthropic.com>` | 19 | Matches policy (case variant) |
| `Co-authored-by: Cursor <cursoragent@cursor.com>` | 7 | Auto-added by Cursor |
| `Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>` | 5 | Includes model name |
| `Assited-by: Claude Sonnet 4.5 <noreply@anthropic.com>` | 4 | Typo in trailer name |
| `Assisted-by: claude-4.5-opus` | 4 | Missing email, uses model slug |
| … | | 12 more variants |
Contributors vary by including model names, using model ID slugs, omitting email
addresses, and in one case misspelling the trailer name as Assited-by. The
Generated-by: trailer defined in the policy was never used at all.
Recommendations
Based on this review I’d suggest the project consider:
- Standardize trailer formats - Define 1-2 canonical formats and provide a `git interpret-trailers` alias or commit hook to enforce them.
- Clarify model name inclusion - Currently ~30% of Claude attributions include the model variant, ~70% don’t. The policy should take a position.
- Add CI validation - A prow check that validates AI attribution trailer format would catch typos and non-standard formats before merge.
- Monitor review depth on large AI PRs - The lower comment-per-line rate on large AI PRs isn’t necessarily a problem today but should be tracked over time.
- Continue tracking these metrics across releases - This release-1.8 baseline can be compared against future releases to identify trends.
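A trailer check along these lines could live in a commit hook or a CI job. The two “canonical” patterns below are assumptions for illustration, not the policy’s final word, and a prow plugin would obviously be written differently:

```python
import re

# Hedged sketch of a trailer validator. The canonical format assumed here is
# "<Trailer-name>: <Name> <email>" with a fixed trailer capitalisation; the
# project would need to pick its own canonical form(s).
CANONICAL = re.compile(
    r"^(Assisted-by|Generated-by|Co-authored-by): "
    r"[A-Za-z0-9 .-]+ <[^<>@\s]+@[^<>@\s]+>$"
)

# Loose pattern to *detect* attempted AI-attribution trailers, including
# common typos (e.g. "Assited-by") and case variants.
DETECT = re.compile(r"(?i)^(as+i?s*ted|generated|co-authored)-by:")

def check_trailers(commit_message: str) -> list[str]:
    """Return the AI-attribution trailer lines that fail validation."""
    return [
        line for line in commit_message.splitlines()
        if DETECT.match(line) and not CANONICAL.match(line.strip())
    ]

msg = (
    "Fix scheduler\n\n"
    "Assited-by: Claude Sonnet 4.5 <noreply@anthropic.com>\n"
    "Signed-off-by: Dev <dev@example.com>"
)
print(check_trailers(msg))  # flags the misspelled trailer line
```

Splitting detection from validation is the important part: a validator that only matches well-formed trailers would silently wave the `Assited-by` typo through.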
Conclusion
The overall picture is positive. Contributors are voluntarily adopting
KubeVirt’s AI Contribution Policy, DCO compliance across AI-attributed commits
is perfect, and the data shows no measurable quality difference between
AI-attributed and non-AI PRs. AI tooling is being used for
substantial work across core subsystems, not just boilerplate. The main area for
improvement is standardizing the trailer format to make attribution data more
consistent and machine-parseable.
The full report with all the raw data is available
here.
Welcome to part #6 of this series following the
development of instance types and preferences within KubeVirt!
It’s been over two years since the last update, during which the instance type
and preference APIs have matured significantly. This update covers the major
milestones achieved between KubeVirt v1.0.0 and v1.7.0.
Please also feel free to file bugs or enhancements against
https://github.com/kubevirt/kubevirt/issues using the /area instancetype
command to label these for review and triage.
Major Milestones
Removal of instancetype.kubevirt.io/v1alpha{1,2}
As planned in the previous update, the deprecated v1alpha1 and v1alpha2 API
versions have been removed. Users should ensure they’ve migrated to v1beta1 before upgrading to recent KubeVirt releases.
Deployment of common-instancetypes from virt-operator
A long-awaited feature has landed - virt-operator now deploys the
common-instancetypes bundles directly. This eliminates the need for separate
installation steps and ensures that a standard set of instance types and
preferences are available out of the box.
The latest deployed version is v1.5.1, providing a comprehensive set of
instance type classes and OS-specific preferences.
New Instance Type Features
IOThreads Support
Instance types can now configure IOThreads for improved storage performance:
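The original post included a manifest here; as a hedged reconstruction, an instance type enabling IOThreads might look something like this (the name and sizing values are illustrative, not from the original):

```yaml
# Illustrative sketch only; values are assumptions.
apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype
metadata:
  name: io-intensive
spec:
  cpu:
    guest: 4
  memory:
    guest: 8Gi
  ioThreadsPolicy: auto
```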
Instance types now support nodeSelector and schedulerName for fine-grained
scheduling control. The nodeSelector accepts any Kubernetes node labels
defined by cluster administrators:
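As a hedged sketch of what that could look like (the label key and scheduler name below are made up for illustration):

```yaml
# Illustrative sketch only; label and scheduler names are assumptions.
apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype
metadata:
  name: pinned-example
spec:
  cpu:
    guest: 2
  memory:
    guest: 4Gi
  nodeSelector:
    example.com/storage-tier: fast
  schedulerName: my-custom-scheduler
```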
Panic devices can now be configured through preferences for improved crash
diagnostics.
Firmware Preferences Update
PreferredEfi has been introduced to replace the deprecated PreferredUseEfi
and PreferredUseSecureBoot fields, providing a more flexible firmware
configuration mechanism.
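A hedged sketch of a preference using the new field might look like the following (the exact shape of the PreferredEfi value here is my assumption, so check the current API reference before relying on it):

```yaml
# Illustrative sketch only; field shape is an assumption.
apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachinePreference
metadata:
  name: efi-preference
spec:
  firmware:
    preferredEfi:
      secureBoot: true
```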
Preferred Annotations
Similar to instance types, preferences can specify preferred annotations:
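The example manifest didn’t survive extraction; a hedged reconstruction (the annotation key and value are made up for illustration):

```yaml
# Illustrative sketch only; the annotation itself is an assumption.
apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachinePreference
metadata:
  name: annotated-preference
spec:
  annotations:
    example.com/my-annotation: "enabled"
```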
Support for upgrading ControllerRevisions to newer API versions has been
implemented, enabling seamless migration as the API evolves.
What’s Coming Next
The most significant upcoming change is the promotion of the API to v1,
planned for KubeVirt v1.8.0. This milestone is tracked through
VEP #17 and implemented
in PR #16598.
The instance type and preference APIs continue to evolve based on community
feedback. Check the KubeVirt issue tracker for upcoming enhancements and
contribute your ideas!
I had the pleasure of delivering the following talk in person this year at devconf in Brno.
There were some challenges with audio and my delivery was rusty at best but the presentation is hopefully understandable and useful for folks.
There were plenty of questions at the end of the talk and even more in the hallway/booth track outside the lecture hall. Feel free to comment on the post if you have more or reach out directly using my contact details on the opening slide.
Happy and slightly nervous to highlight that a talk submitted on my behalf by a wonderful colleague while I was out on sick leave was accepted for devconf.cz this year and I’ll be presenting in room D105 at 14:00 CEST on Thursday June 13th:
This has already allowed me to report bug #11749 regarding vCPUs being exposed as threads pinned to non-thread sibling pCPUs on hosts without SMT when using dedicatedCpuPlacement. It has also helped greatly with the design of SpreadOptions, which aims to allow instance type users to expose more realistic vCPU topologies to their workloads by extending the existing PreferSpread preferredCPUTopology option.
I’m looking also at breaking up the KUBEVIRT_NUM_VCPU env variable to better control the host CPU topology within a kubevirtci environment but as yet I haven’t found the time to work on this ahead of the rewrite of vm.sh in go via PR #1164.
Feel free to reach out if you have any other ideas for kubevirtci or issues with the above changes! Hopefully someone finds this work useful.
After just over 4 months off I’m returning to work on Monday February 26th.
What follows is a brief overview of why I was offline in an attempt to avoid repeating myself and likely boring people with the story over the coming weeks.
tl;dr - If in doubt always seek multiple medical opinions!
As crudely documented at the time on my private instagram account and later copied into this blog I was admitted to hospital back in October after finally getting a second opinion on some symptoms I had been having for the prior ~6 months. These symptoms included fevers, uncontrollable shivering and exhaustion but had unfortunately been misdiagnosed as night sweats. I was about to see an Endocrinologist when my condition really deteriorated and my wife finally pushed me to seek the advice of a second GP.
Within a few minutes of seeing the GP a new diagnosis was suggested and I was quickly admitted to hospital. There we discovered that my ICD, first implanted 19 years earlier for primary prevention, had become completely infected with a ~40mm long mass (yum!) hanging onto the end of the wire within my heart. The eventual diagnosis would be cardiac device related infective endocarditis. I was extremely lucky that the mass hadn’t already detached, causing a fatal heart attack or stroke.
If that wasn’t enough I also somehow managed to contract COVID within a few days of being in hospital and had to spend a considerable amount of time in isolation while my initial course of antibiotics were being given. This was definitely a low point but the Coronary Care Unit and Lugg Ward teams at Hereford County Hospital were excellent throughout.
Once I was COVID negative I was transferred to St Bartholomew’s Hospital in London and had an emergency device extraction via open heart surgery on November 14th. While the surgery went well there were complications in the form of liquid (~600ml) building up around my heart and lungs that led to more surgery at the start of December. During this time I remained on a cocktail of IV antibiotics that finally stopped in the middle of December and after numerous blood cultures (including one false positive!) I had an S-ICD implanted on December 20th before being discharged home the next day.
I’ve spent the time since recovering at home with family as my sternum and numerous wounds slowly heal. This time has been extremely important and I can’t thank Red Hat and my management chain enough for their support with this extended sick leave. I’ve been able to focus on reconnecting with my wife and girls Rose (3.5) and Phoebe (9 months) after 9 extremely difficult weeks apart. I’ve also ventured back to London for checkups and to thank friends who helped me through the weeks away from home. I’ve got many many more people to thank in the coming months.
I now feel mentally ready to return to work but know it’s going to be a little while longer before I’m fully physically recovered.
Thanks to anyone who reached out during this time and I’ll catch you all upstream in the coming weeks!
Update 26/02/24
I’m back to work today, many thanks once again for all of the responses to this on LinkedIn and elsewhere, they are all greatly appreciated!
Thanks again to anyone who liked, messaged, visited or helped out back home. Expect my normal spamming of Rose and Phoebe pictures to begin again soon.
Unfortunately, while I’m feeling much better and basically back to normal, a set of blood cultures taken last Friday somehow managed to grow bacteria from the same family (Staphylococcaceae) as my original infection. Additional cultures taken on Saturday and Monday did not grow anything, leading to the assumption that the first set were somehow contaminated.
To ensure this really was the case another two sets of cultures are being taken today and should allow for the all clear to be given some time on Sunday evening.
This has obviously delayed my final S-ICD implant surgery but I’ve been assured that once the all clear is given the surgery should be able to take place on Monday or Tuesday at the latest. If anything does grow then I’m in for another 10 to 14 days of antibiotics before we try the process again.
I’m pretty gutted after assuming I’d be coming home this week but hopeful that things work out in the coming days and I can get home early next week instead.
Thanks as always to folks sending messages, visiting or helping out at home. We are almost there now.
The last 7 days have been extremely difficult. From being told by a nurse that my pain tolerance is too low (it isn’t; see below about my lung), to having a drain and a bucket of blood attached to me for 4 days, to being left all weekend without my phone, laptop, tablet and glasses, it has honestly been the most trying week of my life.
The surgery to remove liquid from around my heart was successful (~750ml removed over 4 days) but we then discovered yet more liquid around my partially collapsed left lung. Thankfully the latter just required rest, exercise and time to clear and I was given the all clear yesterday for both.
This then allowed my antibiotics to finally be stopped as my infection/inflammation blood markers all came crashing down. It honestly felt weird not being hooked up to an IV for 3 hours while I attempted to sleep last night.
With the antibiotics stopped I could then be booked in for S-ICD implant surgery on Monday, the final thing required before I can go home.
All being well this means that I should be discharged sometime on Tuesday and finally able to go home to see my girls for the first time in almost 8 weeks.
To celebrate I’ve just ventured outside on my own and had a flat white for the first time since I was admitted.
As always I can’t thank everyone who made the effort to message, visit or help out back home enough. I can’t imagine what this would have been like without your support so thank you from the bottom of my now disinfected and hopefully healthy heart!