Improve Your Sprint Planning with the Story Impact Checklist
By Allison Lazarz, Software Test Engineer at Cox Automotive (allison.lazarz@coxautoinc.com)
We’ve probably all been in situations where we’ve looked back at our 2-week sprint cycle and thought, “How did our team forget [fill in the blank]?” How did we forget we’d have a dependency on another team? How did we forget that it wasn’t easy to get test data to test this story?
A Story Impact Checklist (SIC) is a simple tool your team can utilize right away to help you stay organized and prevent recurring problems from impacting sprint work. I’ve had success using it on my scrum teams for the last 8 years. I'm excited to share its benefits with you!
What is an SIC?
As the name implies, an SIC is a checklist of items that might impact the stories in your sprint. It’s best used during sprint planning, when the team is discussing what tasks are needed to complete a story. Ideally, the SIC prompts the team to add tasks to your stories that it might otherwise have forgotten. Getting a full picture of the work during the story tasking phase helps ensure the team makes time to complete every task, so you aren’t surprised at the last minute with no time to course-correct. The checklist has two main goals:
- Improve organization and consistency
- Get all team members involved in discussing items that could impact story completion
Let’s look at each of these areas.
Improve organization and consistency
By using a checklist, you won’t forget to ask important questions that often get overlooked. It’s easy to think, “we’ll remember that next sprint,” and then forget! Using the checklist as a reference ensures you do remember those important questions. Some examples: Do we have the necessary data to test this story? If not, will we need to invest a significant amount of time in creating that data? Do we need monitoring or alerting for this work? Having these discussions sooner rather than later helps the team plan effectively. Which leads into the checklist’s next goal: discussion!
Get all team members involved in discussing items that could impact story completion
One of the biggest benefits I’ve found with the SIC is that it naturally leads to important discussions about the work ahead. For example, a discussion around test data could lead the team to realize there isn’t a way to test the story unless a user with the proper permissions is created, and perhaps creating that user is an involved process. What if the team didn’t discuss this ahead of time, and instead waited until a day or two before the sprint ended (when the work was ready to be tested)? There’s a good chance the story would carry over to the next sprint while the team waited for the user to be created.
Keep It Simple
To start getting results using a Story Impact Checklist, keep it simple and then iterate on it! Jot down a list of items you think your team would benefit from discussing during sprint planning. These can be items related to the type of work your team does (questions about browser testing or API testing) and/or questions that touch on items the team repeatedly forgets. Examples might include:
- Should we document this work (and where)?
- Do we need a test plan? (Will anyone review it?)
- What types of testing will we do (automated, manual)?
- Will this work impact existing automated tests?
Iterate, get feedback, repeat!
As you start using the checklist, you’ll likely add to it and remove from it. Mine grew from a list of 10 questions to a list of 14 questions before I heard grumbles about how long it was taking to run through the SIC for each story! I then categorized the items on the list so that the team could skip over irrelevant items. For example, I had a section for “Monitoring and Alerting,” and another for “Browser Testing.” If the category wasn’t applicable to the story, we skipped over all of the items in it.
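As a rough illustration of the category idea, here is a small Python sketch of a categorized checklist where whole sections can be skipped when they don’t apply to a story. The category names and questions are made up for the example, not a canonical list:

```python
# Illustrative sketch of a categorized Story Impact Checklist.
# Category names and questions are hypothetical examples.
CHECKLIST = {
    "Testing": ["Specific test data?", "Automation?"],
    "Browser Testing": ["Which browsers/platforms must we support?"],
    "Monitoring/Alerting": ["Do we need new dashboards or alerts?"],
    "Releasing": ["Do we have a live-release task?", "Rollback plan?"],
}

def questions_for(applicable_categories):
    """Return only the questions in categories that apply to this story."""
    return [
        question
        for category, questions in CHECKLIST.items()
        if category in applicable_categories
        for question in questions
    ]

# An API-only story can skip the "Browser Testing" section entirely:
sprint_planning_questions = questions_for({"Testing", "Releasing"})
```

The point isn’t to automate the checklist, just to show the shape of it: grouping questions under categories lets the team discard irrelevant sections in one step instead of reading past each item individually.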
The more the team uses the checklist, the more automatic the thought process around these important items becomes. Once you feel an item on the list isn’t serving you anymore, take it off! I was on a team whose SIC had a “release the code” item. We created that task each sprint because on multiple occasions the work was completed but the actual code release simply got overlooked. Over time we were able to remove this item from our checklist, because releasing our code after development had become a habit and part of our regular process!
What’s it look like?
Check out the examples below to see how my Story Impact Checklist evolved over time.
I’ve found the Story Impact Checklist to be a powerful tool for engaging in conversations about quality and testing. I hope you’ll give it a try and please reach out if you have questions about how it could benefit your team!
How My SIC Matured
The SIC is a work in constant progression: I went through various iterations, driven by feedback from my team and my own learnings, before reaching my current version. Here are some examples, starting with my current SIC and working backward.
Version 4: This is my current version. It is a list of tasks in Rally (our company’s Epic/Feature/Story organization software). Any time a new user story is created, that list of stock tasks is added to it. Our team reviews them after we’ve listed all the tasks we think we need to complete the story. They are our “sanity check” of important items that we want to make sure we don’t forget!
- Test Data?
- Test Outline?
- Test Outline Review?
- Verification?
- Automated Tests?
- Feature Flag?
- User Documentation?
- Monitoring and Alerting?
Version 3: This version started a shift in thinking. Creating categories served two purposes. First, it allowed us to eliminate sections that weren’t relevant to the story, making the discussion process faster. Second, it started to help us mentally visualize which items were important to each category. Before we got to the point of running through the list, the team would often come up with tasks that met the different bullet point items because it had become a part of the process. This meant we spent less time discussing each item. Only those items that hadn’t yet been addressed became topics of conversation.
Testing
- Risks/regression testing?
- Specific test data?
- Automation?
  - Unit, integration, end-to-end
- Need to fix existing automation tests?
- Review Acceptance Criteria task?
- Test outline and test outline review tasks?
Coding
- User permission/feature flag?
- Handling half-processed data?
- AWS cost analysis?
  - Processing, storage, bandwidth?
- Documentation?
Monitoring/Alerting
- New Relic/Sumo Logic?
- Add PagerDuty alerts?
Releasing
- Do we have a “Live Release” task?
- Do we have an “Acceptance of Story” task?
- Do we have a failover/rollback plan?
- Do we need stakeholder communication?
Version 2: After finding value in Version 1, the team thought of more important questions to discuss during sprint planning. This worked great for several sprints; then came grumbles about how long it was taking to go through the list for each story, so I revamped it into categories. (See Version 3 above.)
- What’s at risk? What regression testing should be done?
- Is there specific test data needed to test this?
- Is there anything in this story that can be automated (including end-to-end automation)?
  - Would it affect any existing automated tests?
- Do we have or need a fail-over plan?
- Is this behind a user permission or feature flag?
- Do group account scenarios need to be considered?
- Is there a specific browser/platform we need to support for this?
- Is it sensitive to the network or other latency? Does it need timeouts or circuit breakers?
- Do we need to consider cache?
- Are there defined SLAs or performance objectives? Should we performance/stress test it?
- Do we need new metrics or analytics in place?
- Did we add a “documentation” task?
- How will we demo this at Sprint Review?
Version 1: This was my first pass at questions I thought the team would benefit from discussing. We had such great conversations that we kept adding to it! (See Version 2 above.)
- What’s at risk? What regression testing should be done?
- Is there specific test data needed to test this?
- Is there anything in this story that can be automated (including end-to-end automation)?
- Do we have or need a fail-over plan?
- Is this behind a feature flag or user permission?
- Is there a specific browser/platform we need to support for this?
- Can we load test it?
- Did we add a “documentation” task?
- Do we have code review tasks?