The following is content for you to summarize. Do not respond to the comments—summarize them.

<topic>
Annotation Workflow Details # Questions about how to format inline annotations for Claude to recognize. Techniques like TODO prefixes, HTML comments, and clear separation between human and AI-written content.
</topic>

<comments_about_topic>

1. I use Claude Code for lecture prep. I craft a detailed, ordered set of lecture notes in a Quarto file and then have a dedicated Claude Code skill for translating those notes into Slidev slides, in the style that I like. Once that's done, much like the author, I go through the slides and make commented annotations like "this should be broken into two slides," "this should be a side-by-side," "use your generate-clipart skill to throw an image here alongside these bullets," and "pull in the code example from ../examples/foo." It works brilliantly. And then I do one final pass of tweaking after that's done. But yeah, annotations are super powerful. Token distance in-context and all that jazz.

2. Can I ask how you annotate the feedback for it? Just with inline comments like `# This should be changed to X`? The author mentions annotations but doesn't go into detail about how to feed the annotations to Claude.

3. Slidev is markdown, so I do it in HTML comments. Usually something like:

<!-- TODOCLAUDE: Split this into a two-cols-title, divide the examples between -->

or

<!-- TODOCLAUDE: Use clipart skill to make an image for this slide -->

And then, when I finish annotating, I just say: "Address all the TODOCLAUDEs."

4. Not yet... but also I'm not sure it makes a lot of sense to be open source. It's super specific to how I like to build slide decks and to my personal lecture style. But it's not hard to build one. The key for me was describing, in great detail:
1. How I want it to read the source material (e.g., H1 means new section, H2 means at least one slide, a link to an example means I want code in the slide)
2. How to connect material to layouts (e.g., "comparison between two ideas should be a two-cols-title," "walkthrough of code should be two-cols with code on the right," "learning objectives should be side-title align:left," "recall should be side-title align:right")

Then the workflow is:
1. Give all those details and have it do a first pass.
2. Give tons of feedback.
3. At the end of the session, ask it to "make a skill."
4. Manually edit the skill so that you're happy with the examples.

5. > After Claude writes the plan, I open it in my editor and add inline notes directly into the document. These notes correct assumptions, reject approaches, add constraints, or provide domain knowledge that Claude doesn't have.

This is the part that seems most novel compared to what I've heard suggested before. And I have to admit I'm a bit skeptical. Would it not be better to modify what Claude has written directly, to make it correct, rather than adding the corrections as separate notes (and expecting future Claude to parse out which parts were past Claude and which parts were the operator, and handle the feedback graciously)? At least, it seems like the intent is to do all of this in the same session, so that Claude has the context of the entire back-and-forth updating the plan. But that seems a bit unpleasant; I would think the file is there specifically to preserve context between sessions.

6. The whole process feels Socratic, which is why I and a lot of other folks use plan-annotation tools already. In my workflow I had a great desire to tell the agent what I didn't like about the plan rather than just fixing it myself, because I wanted the agent to fix its own plan.

7. One reason why I don't do this: even I am not immune to mistakes. When I fix it with new values or paths, for example, and the one I provided is wrong, it can worsen the future work. Personally, I like to ask Claude one more time to update the plan file after I have given annotations, and then review it again. This ensures (from my understanding) that Claude won't treat my annotations as a separate set of instructions, which would risk conflicting work.

8. Since everyone is showing their flow, here's mine:
* Create a feature-name.md file in a gitignored folder.
* Start the file by giving the business context.
* Describe a high-level implementation and user flows.
* Describe database structure changes (I find it important not to leave this to interpretation).
* Ask Claude to inspect the feature and review it for coherence; while answering its questions, I ask it to augment the feature-name.md file with the answers.
* Enter Claude's plan mode and provide that feature-name.md file.
* At this point it's detailed enough that corrections from me are rarely needed.

9. Absolutely. And you can also always let the agent look back at the plan to check whether it is still on track and aligned. One step I added, which works great for me, is letting it write (API-level) tests after planning and before implementation. Then I do a deep review and annotation of these tests and tweak them until everything is just right.

10. If you've ever wanted the ability to annotate the plan more visually, try fitting Plannotator into this workflow. There is a slash command for use when you use custom workflows outside of normal plan mode. https://github.com/backnotprop/plannotator

11. Regarding inline notes, I use a specific format in the `/plan` command, using the `ME:` prefix. https://github.com/srid/AI/blob/master/commands/plan.md#2-pl... It works very similarly to Antigravity's plan-document comment-refine cycle. https://antigravity.google/docs/implementation-plan

12. The annotation cycle is the key insight for me. Treating the plan as a living doc you iterate on before touching any code makes a huge difference in output quality. Experimentally, I've been using mfbt.ai [ https://mfbt.ai ] for roughly the same thing in a team context. It lets you collaboratively nail down the spec with AI before handing off to a coding agent via MCP. Avoids the "everyone has a slightly different plan.md on their machine" problem. Still early days, but it's been a nice fit for this kind of workflow.

13. > "remove this section entirely, we don't need caching here" — rejecting a proposed approach

I wonder why you don't remove it yourself. Aren't you already editing the plan?

14. I'm a big fan of having the model create a GitHub issue directly (using the GH CLI) with the exact plan it generates, instead of creating a markdown file that will eventually get deleted. It gives me a permanent record and makes it easy to reference and close the issue once the PR is ready.

15. Cool, the idea of leaving comments directly in the plan never even occurred to me, even though it really is the obvious thing to do. Do you mark up and then save your comments in any way, and have you tried keeping them so you can review the rules and requirements later?

16. Insights are nice for new users, but I'm not seeing anything too different from how anyone experienced with Claude Code would use plan mode. You can reject plans with feedback directly in the CLI.

17. This is exactly how I work with Cursor, except that I put notes to the plan document in a single message like:

> plan quote
my note
> plan quote
my note

Otherwise, I'm not sure how to guarantee that the AI won't confuse my notes with its own plan. One new thing for me is reviewing the todo list; I was always relying on the auto-generated todo list.

18. I agree with most of this, though I'm not sure it's radically different. I think most people who've been using CC in earnest for a while probably have a similar workflow? Prior to Claude 4 it was pretty much mandatory to define requirements and track implementation manually to manage context. It's still good, but since the 4.5 release it feels less important. CC basically works like this by default now, so unless you value the spec docs (still a good reference for Claude, but they need to be maintained), you don't have to think too hard about it anymore.

The important thing is to have a conversation with Claude during the planning phase, and not just say "add this feature" and take what you get. Have a back and forth; ask questions about common patterns, best practices, performance implications, security requirements, project alignment, etc. This is a learning opportunity for you and Claude. When you think you're done, request a final review to analyze for gaps or areas of improvement. Claude will always find something, but it starts to get into the weeds after a couple of passes.

If you're greenfield and you have preferences about structure and style, you need to be explicit about that. Once the scaffolding is there, modern Claude will typically follow whatever examples it finds in the existing code base.

I'm not sure I agree with the "implement it all without stopping" approach, letting auto-compact do its thing. I still see Claude get lazy when nearing compaction, though it has gotten drastically better over the last year. Even so, I still think it's better to work in a tight loop on each stage of the implementation, preemptively compacting or restarting for the highest quality.

Not sure that the language is that important anymore either. Claude will explore the existing codebase on its own at unknown resolution, but if you say "read the file" it works pretty well these days.

My suggestions to enhance this workflow:
- If you use a numbered phase/stage/task approach with checkboxes, it makes it easy to stop/resume as needed and to discuss particular sections. Each phase should be working, testable software.
- Define a clear numbered-list workflow in CLAUDE.md that loops on each task (run checks, fix issues, provide summary, etc.).
- Use hooks to ensure the loop is followed.
- Update spec docs at the end of the cycle if you're keeping them. It's not uncommon for there to be some divergence during implementation and testing.

19. How are the annotations put into the markdown? Claude needs to be able to identify them as annotations and not as parts of the plan.

20. It seems like the annotation of plan files is the key step. Claude Code now creates persistent markdown plan files in ~/.claude/plans/, and you can open them with Ctrl-G to annotate them in your default editor. So plan mode is not ephemeral any more.

21. I use both. As I'm working, I tell each of them to update a common document with the conversation. I don't just tell Claude the what; I tell it the why, and have it document that. I can switch back and forth and use the MD file as shared context.

22. The "inline comments on a plan" is one of the best features of Antigravity, and I'm surprised others haven't started copycatting it.

23. I do something broadly similar. I ask for a design doc that contains an embedded todo list, broken down into phases. Looping on the design doc asking for suggestions seems to help. I'm up to about 40 design docs so far on my current project.

24. Tip: LLMs are very good at following conventions (this is actually what is happening when they write code). If you create a .md file with a list of entries of the following structure:

# <identifier>
<description block>
<blank space>
# <identifier>
...

where an <identifier> is a stable and concise sequence of tokens that identifies some "thing," and seed it with 5 entries describing abstract stuff, the LLM will latch on and reference this. I call this a PCL (Project Concept List). I just tell it:

> consume tmp/pcl-init.md pcl.md

The pcl-init.md describes what a PCL is, and pcl.md is the actual list. I have a pcl.md file for each independent component in the code (logging, http, auth, etc.). This works very, very well. The LLM seems to "know" what you're talking about. You can ask questions and give instructions like "add a PCL entry about this." It will ask whether it should add a PCL entry about xyz. If the description block tends to have a high information-to-token ratio, it will follow that convention (which is a very good convention, BTW).

However, there is a caveat. LLMs resist ambiguity about authority. So the "PCL," or whatever you want to call it, needs to be the ONE authoritative place for everything. If you have the same stuff in 3 different files, it won't work nearly as well.

Bonus tip: I find long prompt input with example code fragments and thoughtful descriptions works best at getting an LLM to produce good output. But there will always be holes (resource leaks, vulnerabilities, concurrency flaws, etc.). So then I update my original prompt input (kept in a separate file, PROMPT.txt, as a scratch pad) to add context about those things, maybe asking questions along the way to figure out how to fix the holes. Then I /rewind back to the prompt and re-enter the updated prompt. This feedback loop advances the conversation without expending tokens.

25. This separation of planning and execution resonates deeply with how I approach task management in general, not just coding. The key insight here, that planning and execution should be distinct phases, applies to productivity tools too. I've been using www.dozy.site, which takes a similar philosophy: it has smart calendar scheduling that automatically fills your empty time slots with planned tasks. The planning happens first (you define your tasks and projects), then the execution is automated (tasks get scheduled into your calendar gaps). The parallel is interesting: just like you don't want Claude writing code before the plan is solid, you don't want to manually schedule tasks before you've properly planned what needs to be done. The separation prevents wasted effort and context switching. The annotation cycle you describe (plan -> review -> annotate -> refine) is exactly how I work with my task lists too. Define the work, review it, adjust priorities and dependencies, then let the system handle the scheduling.

26. Last I checked, you can't annotate inline with planning mode. You have to type a lot to explain precisely what needs to change, and then it re-presents you with a plan (which may or may not have changed something else). I like the idea of having an actual document, because you could compare the before and after versions if you wanted to confirm things changed as intended when you gave feedback.

27. "Giving precise feedback on a plan" is literally annotating the plan. It comes back to you with an update for verification. You ask it to "write the plan" as a matter of good practice. What the author is describing is conventional usage of Claude Code.

28. A plan is just a file you can edit and then tell CC to check your annotations.

29. One thing for me has been the ability to iterate over plans, with a better visual of them as well as the ability to annotate feedback about the plan. https://github.com/backnotprop/plannotator Plannotator does this really effectively and natively through hooks.

30. Wow, I've been needing this! The one issue I've had with terminals is reviewing plans and wanting the ability to provide feedback on specific plan sections in a more organized way. Really nice UI, based on the demo.

</comments_about_topic>

Write a concise, engaging paragraph (3-5 sentences) summarizing the key points and perspectives in these comments about the topic. Focus on the most interesting viewpoints. Do not use bullet points—write flowing prose.
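Several comments (notably 3 and 19) turn on making annotations machine-recognizable via a distinctive prefix inside HTML comments. A minimal sketch of that convention, assuming the `TODOCLAUDE:` prefix described in comment 3; the regex and function name here are illustrative, not part of any commenter's actual tooling:

```python
import re

# Matches HTML comments of the form <!-- TODOCLAUDE: ... --> and
# captures the instruction text. DOTALL lets an annotation span lines.
TODO_PATTERN = re.compile(r"<!--\s*TODOCLAUDE:\s*(.*?)\s*-->", re.DOTALL)

def extract_annotations(markdown_text: str) -> list[str]:
    """Return every TODOCLAUDE instruction, in document order."""
    return TODO_PATTERN.findall(markdown_text)

# Example: a fragment of a Slidev-style markdown deck.
slides = """
# Intro slide
<!-- TODOCLAUDE: Split this into a two-cols-title, divide the examples between -->
Some bullets here.
<!-- TODOCLAUDE: Use clipart skill to make an image for this slide -->
"""

for note in extract_annotations(slides):
    print(note)
```

The point of the prefix is exactly what comment 19 asks for: the agent (or a script like this) can unambiguously separate operator notes from the plan or slide content itself.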