Building an MVP for a new product means working in a field of high uncertainty, where we may be tempted either to over-investigate or to make rushed, suboptimal decisions.
This article is about our experience setting up productivity metrics and using them to build a predictable working process.
There’s no perfect tool that will save you from problems with your product. But tracking these productivity metrics is really about staying involved and visualising what’s going on, then taking action once you see unwanted changes.
Outline:
- Visualise your product development plan
- Keep track of your development & budget
- Measure team productivity metrics
- Build forecasts
Visualise your product development plan
When we work on an MVP, we usually have a strict budget and a corresponding delivery date based on the scope and the team’s velocity.
- To make a detailed plan for the phase, I recommend building a Gantt chart from the scope right at the start: go down to the story level to make it more detailed, and assign resources to produce a realistic delivery date. Check out gantter.com, which is the tool I’m using.
- Don’t forget about the focus factor: instead of using an 8-hr working day for planning purposes, an optimistic value would be a 6-hr working day. Moreover, based on our experience and accounting for meetings and additional tasks, a realistic value would be around a 4-5-hr working day spent on product features.
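As a rough illustration, the focus factor above can be applied to raw calendar capacity when planning. The helper function and numbers below are my own assumptions, not a fixed formula from the article:

```python
# Sketch: applying a focus factor to raw capacity when planning.
# The function name and default values are illustrative assumptions.

def effective_hours(working_days: int, hours_per_day: float = 8.0,
                    focus_factor: float = 0.55) -> float:
    """Hours realistically available for product features.

    A focus factor of ~0.5-0.6 corresponds to the 4-5 productive
    hours out of a standard 8-hour day mentioned above.
    """
    return working_days * hours_per_day * focus_factor

# A 2-week sprint (10 working days) for one developer:
print(effective_hours(10))                      # about 44 feature hours, not 80
print(effective_hours(10, focus_factor=0.75))   # optimistic 6-hr day: 60.0
```

Planning against these reduced numbers is what keeps the Gantt-chart delivery date realistic.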
- Once you have a detailed Gantt chart, you may go one level higher and prepare a roadmap for the whole team on the epic level.
It is an open question whether you need to update your detailed Gantt chart on a regular basis. For short development phases (1-2 months), updating the epic-level roadmap every 2 weeks should be enough.
Keep track of your development & budget
Ideally, every weekly iteration should finish with the delivery of a working increment: a feature with clear customer value that can be tried out.
There is an opinion that tracking the % of work done against the planned scope might not reflect the real situation, as a single feature may change the picture completely and require significantly more or less time than estimated.
So focus on the budget here: track budget usage against the corresponding completion of the scope, to make sure you are not falling behind the plan/budget ratio.
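A minimal sketch of this weekly check, assuming budget and scope are both expressed as percentages (the function name and the example numbers are hypothetical):

```python
# Sketch: tracking budget usage against scope completion each week.
# The field names and numbers are illustrative assumptions.

def plan_health(budget_spent_pct: float, scope_done_pct: float) -> float:
    """Ratio > 1.0 means we burn budget faster than we complete scope."""
    return budget_spent_pct / scope_done_pct

# Week 4: 40% of the budget spent, but only 30% of the scope delivered.
ratio = plan_health(40, 30)
print(f"{ratio:.2f}")  # 1.33 -> falling behind the plan/budget ratio
```

Anything consistently above 1.0 is the "unwanted change" worth acting on.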
Measure team productivity metrics
The scope & budget state is already a good representation of the team’s work. But to gather data for improvements and dig a bit deeper into how and why we deliver at this pace, we also calculate:
- Sprint plan/fact stats. These stats show how well we stick to what we plan from sprint to sprint. This data can be of great use in retrospectives: discuss which activities eat into the time we planned for features, and how we could minimise them.
- Real team’s focus factor. Out of 8 hrs in a standard working day, how much do we actually spend on product features? Over time, we replace the “real” capacity from the previous metric with one calculated from the team’s measured focus factor.
- Team’s energy level. Team morale is a good metric to track: it isn’t about the work itself but about the team being excited about what we do and how we work together. I usually ask team members about their energy levels at the end of the retrospective (every 2 weeks).
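The plan/fact stats and the measured focus factor boil down to two simple ratios. A sketch, with hypothetical function names and example numbers:

```python
# Sketch: sprint plan/fact stats and the measured focus factor.
# All inputs below are illustrative assumptions.

def plan_fact_ratio(planned_points: float, delivered_points: float) -> float:
    """How much of what we planned actually got delivered."""
    return delivered_points / planned_points

def measured_focus_factor(feature_hours: float, total_hours: float) -> float:
    """Share of the working day actually spent on product features."""
    return feature_hours / total_hours

print(f"{plan_fact_ratio(30, 24):.0%}")        # 80% of the planned sprint delivered
print(f"{measured_focus_factor(4.5, 8):.0%}")  # 56% -> roughly a 4.5-hr feature day
```

Feeding the measured focus factor back into capacity planning is what makes the next sprint's plan more honest than the last.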
As discussed at the beginning of the article, working within a strict budget means there is a deadline when the money runs out. So in addition to the metrics above, I usually calculate the estimated delivery date, which is easier to communicate to the team & stakeholders. I have 2 approaches for delivery date estimation: pessimistic and optimistic.
The first one is based on the completed % of scope: take that % and the number of sprints it took to cover it, then calculate how many sprints you will need to cover 100%. This value might not be accurate because it counts only features that are fully completed. Features that are in progress don’t count, yet the remaining scope is actually smaller thanks to them, so the estimate errs on the pessimistic side.
The second approach is based on the team’s velocity. We know how many story points or hours we cover every sprint, and we know the total amount of story points or hours, so we can estimate the total number of sprints. The drawback of this approach is that it doesn’t account for features that took, or will take, longer than initially estimated.
The combination of these 2 approaches, I believe, works really nicely: it provides a delta between the two dates and gives founders a realistic picture of the development pace.
Choose the one metric, or a combination of them, that feels right and valuable for your product, and start!
We often talk about improvements, but keep in mind that no improvement is possible without initial data — as it is the starting point we evolve from.
Written by Alexandra Melnikova