Using AI in Quality Coaching
Many teams are keen to explore how AI can help them improve test coverage or reduce the time required for testing.
If a team approaches you with questions about how AI can assist in testing, grab the opportunity! Any motivation to improve testing is a plus, and if AI is the tool that helps them improve their test efforts, then use it.
Problem, Experiment, Observe, Evaluate
There are many ways to use AI to help a team. The key is to realise that AI is a tool like any other. A quality coach can assist the team in taking a systematic, experimental approach to adopting AI.
The heuristic "Problem, Experiment, Observe, and Evaluate" supports such an approach.
Identify the Problem you want AI to help solve
While a team may come to you with a tool in mind, it's important to understand the problem the team wants to solve. What exactly do they want AI to help them with? Teams typically ask, "How can I improve test coverage?" and "How can I reduce the time to write tests?" So, this is an excellent place to start. I'm sure, in time, you will find many other problems that AI can help solve.
Once you understand the problem, ask the team how they will measure success. Will it be improved test coverage, reduced testing time, or faster test creation?
If the team has quantitative data for an agreed-upon metric, use that. Otherwise, qualitative data is acceptable. For example, holding a team discussion on the pros and cons of an experiment and coming to an agreed outcome may be the more appropriate 'definition of success'. It also neatly sidesteps the need to collect baseline data, which may be overkill in early-stage experimentation.
Experiment with AI in small slices
I've seen teams get excited about how AI might help. Rather than choosing big hairy problems, encourage a lean approach with experimentation. Experiments are faster and cheaper to run, and you can iterate as you learn more.
Agree on when to regroup. You could use existing team rituals such as a retro or arrange a session. I like to keep this short, a maximum of thirty minutes.
Observe the outcome - What did you learn about AI?
Ask open-ended questions to encourage reflection. Some questions you could ask are:
- How did you find using AI?
- What worked? What was difficult? What surprised you?
- What would you do differently?
- And what do you want to do next?
Frame any outcome as a learning. Encourage the team to keep experimenting and reflecting until a successful experiment is completed or the team decides they're done.
Evaluate the AI experiment
Remind the team of the original definition of success outlined for the experiment. Did they solve the problem they intended to? If you used data, did the needle move on the chosen metric?
Ask them to evaluate the use of AI to solve the problem at hand. Even if the success metric trended positively, there may be other reasons a team opts not to proceed. For example, the setup time is too long, or the tool doesn't suit their context.
If the team opts out, offer to write up the experiment so you can share the findings with other teams.
Research and company policy
If you and the team are new to AI, research existing tools. Nearly every tooling company has injected AI into a tool for marketing purposes, so it pays to be sceptical and demand case studies and pilots.
If you have a security department, speak to them about what's required to endorse an external tool. You may discover they already have answers, but they may not. They may also have to go on this AI journey, so bring them along and work together as early as possible.
If the company has already endorsed an AI tool, explore how you can use it. For example, if the company has an existing AI tool such as Microsoft 365 Copilot, could the team run experiments around building acceptance test criteria or improving test design?
Use the opportunity in AI to build collaboration
Pairing with a team to get experience with a tool is a great way to build collaboration. When nobody knows the new tool yet, the playing field is level, so push to pair if possible. Offer to pair with a team member on a trial: you can share your knowledge of acceptance tests and test coverage, while they provide context and learn how to use the new tool.
Even if the context is more technical than you are familiar with, it's still an excellent opportunity to learn together. If you feel safe, open yourself to pairing with an engineer on building (for example) a Playwright test together. Be the guide on which test is best to build, and ask to take the keyboard so you get some practice too.
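To make this concrete, here is a minimal sketch of the kind of Playwright test a coach and engineer might build while pairing. The URL, roles, and expected text are hypothetical placeholders; the real values would come from the engineer's context.

```typescript
// A sketch of a pairing exercise, not a definitive implementation.
// All selectors and URLs below are hypothetical examples.
import { test, expect } from '@playwright/test';

test('visitor can search the catalogue', async ({ page }) => {
  await page.goto('https://example.com/catalogue'); // hypothetical URL

  // The engineer supplies the real locators; the coach steers
  // towards user-facing roles rather than brittle CSS selectors.
  await page.getByRole('searchbox').fill('widgets');
  await page.getByRole('button', { name: 'Search' }).click();

  // The coach guides the assertion: check behaviour the user
  // cares about, not implementation detail.
  await expect(page.getByRole('heading', { name: /results/i })).toBeVisible();
});
```

Even a small test like this gives the pair plenty to discuss: what the most valuable scenario is, how resilient the locators are, and whether the assertion reflects something a user would actually notice.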
Principles of Responsible AI
Your company may already have researched and created principles and governance guidelines around responsible AI. Make a point of understanding these and helping any team members be aware of their responsibilities.
Australia has adopted the UN Responsible AI principles with some modifications. I like them; they're clear and easy to understand. I developed a mnemonic around them to ensure I can speak to them at any time.
It is:
We Value Fairness To All People Continuously and Repeatably.
- Wellbeing -> societal and individual considerations
- Value -> human rights, diversity and autonomy
- Fairness -> inclusivity
- Transparency -> how AI is used
- Accountability -> ownership of the AI product
- Privacy & Security -> protection of personal data
- Contestability -> the right to question the outcome
- Reliability & Safety -> AI that operates as intended
If your company hasn't yet implemented such principles, the United Nations has published the "Principles for the ethical use of artificial intelligence", together with an ethical impact assessment.
Happy experimenting!