Summarizer

LLM Input

llm/9b2efe03-4d9e-4db2-a79a-13cee83b17d6/topic-9-36bff0dd-aa10-458f-a242-7d50edce1599-input.json

prompt

The following is content for you to summarize. Do not respond to the comments—summarize them.

<topic>
Context Window Visibility Tools # User built claude-trace CLI to parse usage logs and break down token consumption by session, tool, project, providing measurement before optimization
</topic>

<comments_about_topic>
1. This post made me realize I had zero visibility into where my Claude Code tokens were actually going, so I built a small companion CLI this morning: https://github.com/vexorkai/claude-trace

It parses ~/.claude/projects/*/*.jsonl and breaks usage down by session, tool, project, and timeline with cost estimates (including cache read/create split).

Context Mode solves output compression really well; this is more of a measurement layer so you can see where the burn is before/after changes.

Disclosure: I built it.

2. > I had zero visibility into where my Claude Code tokens were actually going

/context?
</comments_about_topic>
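The first commenter's approach — parsing `~/.claude/projects/*/*.jsonl` and summing token usage per session — could be sketched roughly as below. This is a hypothetical illustration, not claude-trace's actual implementation; the field names (`sessionId`, `message.usage`, `input_tokens`, `output_tokens`) are assumptions about the log schema.

```python
# Hedged sketch of JSONL usage-log aggregation, in the spirit of the
# claude-trace CLI described above. Field names are assumed, not confirmed.
import json
from collections import defaultdict

def aggregate_usage(jsonl_lines):
    """Sum input/output tokens per session from JSONL log lines."""
    totals = defaultdict(lambda: {"input": 0, "output": 0})
    for line in jsonl_lines:
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than abort the scan
        usage = entry.get("message", {}).get("usage", {})
        session = entry.get("sessionId", "unknown")
        totals[session]["input"] += usage.get("input_tokens", 0)
        totals[session]["output"] += usage.get("output_tokens", 0)
    return dict(totals)

# Demo with synthetic records (the real tool would glob the log files):
sample = [
    json.dumps({"sessionId": "a",
                "message": {"usage": {"input_tokens": 100, "output_tokens": 20}}}),
    json.dumps({"sessionId": "a",
                "message": {"usage": {"input_tokens": 50, "output_tokens": 5}}}),
    json.dumps({"sessionId": "b",
                "message": {"usage": {"input_tokens": 10, "output_tokens": 1}}}),
]
print(aggregate_usage(sample))
```

Breaking usage down by tool or project would follow the same pattern, keyed on whichever fields the log entries expose.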

Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points—write flowing prose.

topic

Context Window Visibility Tools # User built claude-trace CLI to parse usage logs and break down token consumption by session, tool, project, providing measurement before optimization

commentCount

2
