Most teams run the same retrospective regardless of what they are actually working on. Two weeks into discovery research, they ask "what went well, what didn't." Three days after a launch, same format. Deep in iteration mode optimizing conversion, same questions again.
This is a missed opportunity. The work you do in discovery is fundamentally different from the work you do during a launch. The risks are different, the failure modes are different, and the questions worth asking are different. Your retrospectives should reflect that.
Here is how to adapt your retrospective format to each stage of product development so you actually surface the insights that matter.
Why One Format Does Not Fit All
A retrospective's job is to help you improve at the work you are doing right now. During discovery, "the work" is learning fast. During build, it is execution quality. During launch, it is coordination across functions. During iteration, it is making smart bets about what to keep, cut, or expand.
When you use a generic retrospective format, you tend to get generic observations. Teams default to discussing process complaints (standups are too long, Jira is messy) instead of examining the deeper questions specific to their current stage. Adapting your format is how you steer the conversation toward what actually needs attention.
Stage 1: Discovery — Optimize for Learning Speed
During discovery, your team is running experiments, talking to customers, and testing assumptions. The biggest risk is not that you build something slowly; it is that you build the wrong thing entirely.
Retrospective format: Hypothesis / Test / Learning / Next Action
This four-column structure forces the team to articulate what they assumed, how they tested it, what they actually learned, and what they will do next. It keeps the conversation grounded in evidence rather than opinion.
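If your retro tool supports custom boards, the format is simple enough to express as data. Here is a minimal TypeScript sketch, assuming a hypothetical RetroTemplate shape rather than any particular tool's real API:

```typescript
// A minimal sketch of a retro board template expressed as data.
// RetroTemplate and its fields are hypothetical, not any tool's real API.
interface RetroTemplate {
  stage: "discovery" | "build" | "launch" | "iterate";
  columns: { name: string; prompt: string }[];
}

const discoveryTemplate: RetroTemplate = {
  stage: "discovery",
  columns: [
    { name: "Hypothesis", prompt: "What did we assume?" },
    { name: "Test", prompt: "How did we test it?" },
    { name: "Learning", prompt: "What did the evidence actually show?" },
    { name: "Next Action", prompt: "What will we do differently as a result?" },
  ],
};
```

Keeping a prompt next to each column is the useful part: it is the prompt, not the column name, that keeps a Learning card grounded in evidence rather than opinion.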
Questions to ask:
- Which assumptions did we validate or invalidate this cycle?
- Where did we spend time on research that did not produce a clear signal?
- Are we talking to the right people, or are we stuck in a comfortable segment?
- How quickly are we moving from question to answer?
What to watch for:
If your team cannot clearly state what they learned in the past one to two weeks, something is off: the research is unfocused, the experiments are too slow, or insights are getting lost between team members. The retrospective should surface which of these is the bottleneck.
Another common pattern: teams that keep "validating" without ever killing an idea. If every hypothesis comes back confirmed, you are probably asking leading questions or interpreting ambiguous data too generously. A healthy discovery process invalidates assumptions regularly.
Stage 2: Build — Balance Speed and Quality
Once you have conviction about what to build, the work shifts to execution. Now the risks are scope creep, unclear requirements, integration headaches, and the slow accumulation of shortcuts that create problems later.
Retrospective format: Delivered / Blocked / Rework / Collaboration
This format focuses on execution health. "Delivered" celebrates progress. "Blocked" surfaces systemic impediments. "Rework" tracks where the team had to redo work (a leading indicator of process problems). "Collaboration" examines how well different functions are working together.
Questions to ask:
- Where did requirements change after development started, and why?
- What rework happened this sprint, and what caused it?
- Were there decisions we had to wait on that slowed us down?
- Is the scope still aligned with what we learned in discovery?
What to watch for:
The build phase is where teams most commonly lose connection to the "why" behind what they are building. Retrospectives should periodically check whether the team still has clarity on the problem they are solving, not just the features they are shipping.
Pay attention to rework patterns. If the same types of issues keep causing rework (unclear acceptance criteria, missing edge cases, design-to-code mismatches), your retrospective action items should target the root cause rather than just noting the symptom again.
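One lightweight way to move from symptom to root cause is to tag every piece of rework with a cause and tally the tags across sprints. A minimal TypeScript sketch, where the cause tags and log entries are invented examples:

```typescript
// A minimal sketch of root-cause tallying for rework items.
// The cause tags and log entries below are invented examples.
type ReworkCause =
  | "unclear acceptance criteria"
  | "missing edge case"
  | "design-to-code mismatch";

interface ReworkItem {
  sprint: number;
  description: string;
  cause: ReworkCause;
}

// Count how often each cause appears across the whole log.
function tallyCauses(log: ReworkItem[]): Map<ReworkCause, number> {
  const counts = new Map<ReworkCause, number>();
  for (const item of log) {
    counts.set(item.cause, (counts.get(item.cause) ?? 0) + 1);
  }
  return counts;
}

const reworkLog: ReworkItem[] = [
  { sprint: 12, description: "rebuilt filter UI", cause: "unclear acceptance criteria" },
  { sprint: 12, description: "null state crash", cause: "missing edge case" },
  { sprint: 13, description: "redid export flow", cause: "unclear acceptance criteria" },
];

// A cause that recurs sprint after sprint is a root-cause candidate,
// not a one-off, and is what the action item should target.
console.log(tallyCauses(reworkLog));
```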
Stage 3: Launch — Coordinate Across Functions
Launch is a coordination challenge. Engineering, product, design, marketing, sales, and support all need to execute their parts in sequence. The biggest risk is not a bug in the code; it is a gap between functions where something falls through.
Retrospective format: Planned / Actual / Gap / Next Time
This format is deliberately comparative. You lay out what the plan was, what actually happened, where the gaps were, and what you would change for the next launch. It works well because launches are concrete enough that you can be specific about what deviated from the plan.
Questions to ask:
- Where did the plan break down, and was it a planning failure or an execution failure?
- Which cross-functional handoffs went smoothly and which did not?
- Did customers react the way we expected? What surprised us?
- What did we learn in the first week that we wish we had known earlier?
When to run it:
Do not wait too long. Run a quick retrospective within a week of launch while details are fresh. If it is a significant launch, run a second one at the 30-day mark once you have real usage data. The first retro catches coordination issues. The second catches product-market fit signals.
What to watch for:
Launch retrospectives often devolve into blame when things go wrong. Set the tone early: the goal is to improve the launch process, not to identify who dropped the ball. Frame gaps as system failures, not individual ones. "Our process did not include a step for X" is more useful than "Person Y forgot to do X."
Stage 4: Iterate — Decide What Deserves More Investment
After launch, you are watching usage data and deciding where to invest further. Some features will take off and deserve expansion. Others will underperform and need to be rethought or cut. The biggest risk in this phase is the sunk cost fallacy: continuing to invest in something just because you already built it.
Retrospective format: Working / Not Working / Double Down / Let Go
This format forces explicit prioritization decisions. "Working" and "Not Working" are based on actual usage data and feedback, not gut feeling. "Double Down" and "Let Go" translate observations into resource allocation decisions.
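To keep the Working / Not Working call from drifting back toward gut feeling, agree on thresholds before anyone looks at the board. A minimal TypeScript sketch; the metrics and cutoffs here are invented placeholders, not recommendations:

```typescript
// A minimal sketch of a pre-agreed classification rule for the
// Double Down / Let Go decision. Metrics and thresholds are invented.
interface FeatureStats {
  name: string;
  weeklyActiveUsers: number; // adoption signal
  maintenanceHoursPerWeek: number; // ongoing carrying cost
}

type Verdict = "double down" | "keep as is" | "let go";

function classify(f: FeatureStats): Verdict {
  const adopted = f.weeklyActiveUsers >= 500; // assumed adoption bar
  const costly = f.maintenanceHoursPerWeek > 8; // assumed cost bar
  if (adopted && !costly) return "double down";
  if (!adopted && costly) return "let go";
  return "keep as is"; // ambiguous cases stay on the watch list
}

console.log(
  classify({ name: "CSV export", weeklyActiveUsers: 40, maintenanceHoursPerWeek: 12 }),
); // "let go"
```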
Questions to ask:
- Which features are customers actually using, and which are they ignoring?
- Where are we investing effort that is not producing proportional results?
- What signals would tell us it is time to stop iterating and move on?
- Are we iterating toward a local maximum, or is there a bigger opportunity we are missing?
What to watch for:
Teams often resist the "Let Go" column. There is emotional attachment to features they worked hard on. The facilitator needs to normalize sunsetting as a healthy part of product development, not a failure. Every feature you keep has an ongoing maintenance cost. Being honest about what is not working frees up capacity for things that are.
Running Stage-Specific Retrospectives in Practice
You do not need to build an elaborate system around this. Here are the practical steps:
1. Name your current stage. At the start of each retrospective, explicitly state which stage the team is in. This sounds obvious, but many teams never do it, and it reframes the entire conversation.
2. Pick the right format. Use the formats above as starting points and adjust them to your context. The specific column names matter less than whether the format directs attention to the right questions for your current stage (a minimal lookup sketch follows this list).
3. Transition deliberately. When you shift from one stage to another (say, from discovery to build), run a "transition retro" that looks back at the previous stage and sets expectations for the next one. This is a natural moment to realign on goals and success metrics.
4. Keep action items stage-appropriate. A discovery action item should be about improving how you learn. A build action item should be about improving how you execute. If your action items do not match your stage, the retrospective format is not doing its job.
5. Review across stages at milestones. After a full cycle from discovery through iteration, run a meta-retrospective that examines how the overall process worked. This is where you improve your product development process itself, not just the work within a single stage.
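To make steps 1 and 2 concrete: once the stage is named, picking the format is just a lookup. A minimal TypeScript sketch using this article's column names as the starting points:

```typescript
// A minimal sketch of steps 1 and 2 as a lookup: name the stage,
// get the format. Column names are this article's starting points.
type Stage = "discovery" | "build" | "launch" | "iterate";

const columnsByStage: Record<Stage, string[]> = {
  discovery: ["Hypothesis", "Test", "Learning", "Next Action"],
  build: ["Delivered", "Blocked", "Rework", "Collaboration"],
  launch: ["Planned", "Actual", "Gap", "Next Time"],
  iterate: ["Working", "Not Working", "Double Down", "Let Go"],
};

function boardFor(stage: Stage): string[] {
  return columnsByStage[stage]; // rename columns freely to fit your team
}

console.log(boardFor("build")); // ["Delivered", "Blocked", "Rework", "Collaboration"]
```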
Common Mistakes to Avoid
Using build metrics during discovery. Velocity and story points are irrelevant when the goal is learning. Measuring discovery by delivery speed incentivizes premature building.
Skipping the launch retrospective. Teams are often exhausted after a launch and skip the retro. This is exactly when the retro is most valuable, because coordination problems are fresh and specific.
Treating iteration as infinite. Every iteration cycle should have a clear decision point: expand, maintain, or sunset. If your retrospectives during iteration never produce a "let go" decision, you are probably not being honest about what the data is telling you.
Not involving the right people. Discovery retros need researchers and designers front and center. Launch retros need marketing and support. Invite the people who are actually doing the work for that stage.
Try NextRetro free — Set up stage-specific retrospective boards in minutes with customizable columns and built-in templates.