Field Notes

Sprint Workflow With an AI Pair

How sprint retros, journal entries, and session handoffs create a feedback loop that makes AI coding sessions better over time. The workflow I use across every project.

March 23, 2026 · 6 min read

AI coding tools have a memory problem. Every new session starts from zero. The AI doesn't know what you built yesterday, what broke, what you learned, or what you decided to do differently. You're essentially onboarding a new developer every time you open a terminal.

I've been working around this for 9 projects now, and the solution isn't a single trick. It's a workflow. A loop where each session feeds the next one, and the AI gets better at helping you because you've gotten better at setting it up.

The Loop

Here's the cycle that runs across every sprint:

Plan sprint → Work sessions → Session handoffs → Retro → Update docs → Next sprint
                    ↑                                          |
                    └──────────────────────────────────────────┘

Each piece generates artifacts that feed the next piece. The sprint plan tells the AI what to build. The session handoffs preserve context between conversations. The retro captures what went wrong and what to change. The doc updates make the next sprint's sessions start from a better baseline.

None of these steps exist just for documentation's sake. Each one solves a specific problem I hit when working with AI tools.

Session Handoffs: The 3-Line Summary

The biggest context loss happens between sessions. You close the terminal, come back tomorrow, and the AI has no idea where you left off. You spend the first 15 minutes re-explaining what you were doing.

At the end of every session, I write a 3-line handoff:

  1. What was done this session
  2. What was discovered (bugs, surprises, decisions)
  3. What's next for the following session

That's it. Three lines. When I start the next session, I paste those three lines as context and the AI picks up exactly where I left off. No re-explaining, no re-reading files to figure out state, no accidental re-doing of work that's already done.

This sounds trivially simple. It is. But the number of times I used to start a session with "so, where were we..." and then spend 20 minutes getting the AI back up to speed was embarrassing. Three lines at the end saves 15 minutes at the start. Every time.
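The handoff is just text, but it helps to make the format mechanical so every session ends the same way. Here's a minimal sketch of how that template could be scripted; the function name and labels are my own assumptions, not part of any tool:

```python
from datetime import date

def format_handoff(done: str, discovered: str, next_up: str) -> str:
    """Render the 3-line handoff as a dated block, ready to paste
    into the next session (or append to a handoff notes file)."""
    return (
        f"## Handoff {date.today().isoformat()}\n"
        f"1. Done: {done}\n"
        f"2. Discovered: {discovered}\n"
        f"3. Next: {next_up}\n"
    )

print(format_handoff(
    "Wired up the journal index page",
    "SSG build chokes on draft posts; excluded them for now",
    "Start the RSS feed story",
))
```

The labels matter more than the tooling: "Done / Discovered / Next" maps directly onto the three questions above, so the next session's opening paste always has the same shape.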

Sprint Planning With Context

Before starting a sprint, the AI gets three things:

  1. The project CLAUDE.md with architecture, patterns, and key decisions
  2. The roadmap showing what's done and what's next
  3. The sprint scope broken into stories

With all three loaded, the planning conversation is focused. "Here are the 6 stories for this sprint. What order should we tackle them? What are the dependencies?" The AI can give real answers because it has the full picture.

Without that context, sprint planning with AI is useless. "Help me plan my sprint" with no context gets you generic agile advice. "Help me plan these 6 stories against this architecture with these constraints" gets you an actual sequence with dependency reasoning.
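Assembling that context is mostly concatenation. A rough sketch of what "loading three things" looks like in practice, assuming file names like CLAUDE.md and ROADMAP.md (use whatever your project actually keeps):

```python
from pathlib import Path

def build_planning_prompt(project_dir: str, stories: list[str]) -> str:
    """Concatenate the three context pieces -- project doc, roadmap,
    and sprint scope -- into a single planning prompt."""
    root = Path(project_dir)
    sections = []
    # Pull in whichever context files exist in this project.
    for name in ("CLAUDE.md", "ROADMAP.md"):
        f = root / name
        if f.exists():
            sections.append(f"# {name}\n{f.read_text()}")
    story_list = "\n".join(f"- {s}" for s in stories)
    sections.append(
        "# Sprint scope\n" + story_list +
        "\n\nWhat order should we tackle these? What are the dependencies?"
    )
    return "\n\n".join(sections)
```

The point isn't the script; it's that the planning question arrives with the architecture and roadmap already attached, so the answer can reference real constraints instead of generic agile advice.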

The trick is that each sprint starts with more context than the last. The CLAUDE.md grows. The roadmap fills in. Patterns accumulate. By Sprint 4 of a project, the AI knows enough about the codebase and your decisions to give genuinely useful planning input.

The Retro That Changes Things

I run a retro after every sprint. Not a big ceremony. Just a structured look at what happened:

  • What was planned vs what was delivered
  • What surprised me
  • What should change in the process

The retro for Fizzics Sprint 4 was the most useful one I've done. I planned 30 tasks and delivered 105. Two of six stories never got started. And yet the sprint was the most productive of the project. The retro surfaced why: I'd planned a convergence sprint like a construction sprint.

That retro led to concrete Playbook changes: sprint type classification, first-playthrough rituals, session boundary logging. Real process improvements that came from actually examining what happened, not from reading a best practices article.

Here's the part that matters for AI workflow: those retro findings go back into the project docs. The CLAUDE.md gets updated with new patterns. The Playbook templates get revised. Next sprint, the AI starts with better context because the retro generated better documentation.

That's the loop. Sprints generate retros. Retros generate doc improvements. Doc improvements make the next sprint's AI sessions better.

Journal Entries as Thinking Tools

The journal entries on this site aren't just content for readers. They're a forcing function for me.

Writing "here's what I decided and why" forces me to actually articulate the reasoning. Half the time, the act of writing it down reveals that my reasoning has a hole in it. "We chose this approach because..." and then I realize the because doesn't hold up.

The entries also become context for future sessions. When I start a new project that's similar to a previous one, I can point the AI at the journal entry and say "I ran into this problem before, here's what I learned, don't repeat it." The journal is searchable institutional memory.

This is different from having the AI write docs for you. The AI can help draft, sure. But the thinking happens when you review what it wrote and fix the parts where it papered over a decision you hadn't actually made.

What This Looks Like in Practice

A typical sprint day for me:

Start of session:

  • Open Claude Code in the project directory (CLAUDE.md loads automatically)
  • Paste the handoff notes from last session
  • "Here's where we left off. Next up is [story]. Let's start."

During the session:

  • Work through stories incrementally. Each change builds and runs before the next one starts.
  • When something unexpected comes up, note it. Don't just fix it and move on. Write down what was surprising.
  • When a decision gets made ("we're going with SSG, not SSR"), update the CLAUDE.md immediately so the AI doesn't suggest SSR in the next conversation.

End of session:

  • Write the 3-line handoff
  • Update the roadmap (check off what's done, note what's blocked)
  • If something changed about the architecture or patterns, update CLAUDE.md

End of sprint:

  • Write the retro (planned vs actual, surprises, process changes)
  • Write the journal entry (what was built, what was learned)
  • Update Playbook templates if the retro surfaced improvements

Each step is small. None of it takes more than a few minutes. But the compound effect across a multi-sprint project is significant. By Sprint 8 or 9, the AI is working from a rich context that reflects everything I've learned about the project so far.

The Compound Effect

The first session on a new project is the worst one. The AI knows nothing. The CLAUDE.md is thin. There's no history, no patterns, no "we tried X and it didn't work."

By Sprint 3, things are noticeably different. The AI knows the architecture, the key patterns, the gotchas. Sessions start faster, require less correction, and produce better code on the first pass.

By Sprint 8, it feels like working with someone who actually knows the project. Not because the AI remembers anything between sessions. It doesn't. But because the documentation reflects enough accumulated knowledge that each new session starts from a genuinely informed baseline.

The workflow is the memory. The retros, the handoffs, the journal entries, the CLAUDE.md updates. Each one is a tiny investment that makes every future session better. And unlike the AI's actual memory, this one persists.