I had the absolute pleasure to be invited to keynote at SeleniumConf 2020 in Bangalore. I had decided in February to talk about test leadership and how to lead through uncertainty. And that was before COVID-19 came on the scene!
It’s the first time I’ve spoken ‘live’ online at a conference, and it comes with its own set of unpredictability and unknowns! It’s hard to speak when you can’t see the audience and you have little interaction. But overall it was fun. I think next time I will keep the talk shorter and allow for Q&A. Speaking remotely like that makes interaction even more important. So shorter talks, longer Q&A is my learning!
Since I didn’t get to the Q&A, here are responses to four of the most popular questions.
How do we see the future of manual testing versus automation testing?
Personally, I dislike this pitting of manual testing against test automation. For me it’s not one or the other, but instead a range, a blend of the two. Both types of testing provide value. Exploratory testing is useful in more places than we realise. You can do exploratory testing in unit testing, when creating automated checks, during bug blitzes and even in production. Test automation is great for giving reliable results and providing some confidence that the system is behaving as we hope it should.
It also depends on what you are testing. For instance, if you are testing something brand new and you have no idea what to expect, a high amount of exploratory testing may be useful, as you don’t yet have sufficient knowledge to decide what the assertions should be.
Again, if something is changing a lot, exploratory testing may be better suited until outputs start becoming more predictable.
I think there’s going to be interesting manual work being done in the product area. I see the need for a product tester: someone who sits with the product team and helps identify risk (from a consumer perspective) early on.
Since increasing test coverage will increase testing cost, how can we achieve good test coverage under a limited testing budget?
Let risk be your guiding star. What is so important to your company that it must not fail? Make sure you test for that. Make sure you are refactoring your regression test suites. Consider that every test has an end-by date. Look for duplication in test effort between levels of abstraction. Are we testing the same thing again and again in a different way, or are we truly testing something different?
Ask: what could fail in production that we would be OK with? (A great question to ask in planning sessions.)
How is automation testing moving towards AI testing?
It’s useful to think of AI/machine learning as a tool in software testing as opposed to a replacement. Yes, it’s a very shiny new tool that promises the world, but it’s still a tool. I’ve no doubt that, in time, AI will replace some of the simpler activities in software testing. For instance, I’ve seen AI being used in test case deduplication, and also an attempt to make decisions on prioritisation of tests. There’s no doubt that the concept of AI is attractive and many companies will look to invest in this, but we’re far away from it taking over as the tool of choice. Put it this way: 25 years ago I was promised a test tool that would automate all the things in end-to-end testing. We still haven’t solved that problem well. These tests are still hard to maintain and are still brittle. Who knows, maybe AI will solve this problem for us! But I’m skeptical it’s going to be any time soon!
That doesn’t mean we shouldn’t explore and better understand what value these tools can bring to our testing. I bet we will discover many ways they will help us, but also ways they might make our testing more susceptible to errors.
How do I perform software testing so that, as an organisation, we get a better return on investment?
I’m going to assume you are talking about ROI on software testing here.
Short answer: don’t. It’s going to get you into all sorts of mess that would need a whole blog post to untangle. Deploying faster and more reliably with fewer errors is what most companies are aiming to do, so focus on that instead. Small, frequent batches are how the risk of failure is being managed. That means, when (not if) there is a failure, that failure can be quickly fixed. Because of this approach, we’re measuring different things. Think of software testing in combination with software development, as opposed to independent activities that are measured in isolation. Instead, measure deployment frequency, lead time for changes, mean time to recover (MTTR) and change failure rate. (Read the State of DevOps Report 2017 for a good beginner’s guide to these measurements.)
Oh, and focus on trends, not absolutes. How did we do compared to last year, as opposed to: is this value good?
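To make those four measurements concrete, here is a minimal sketch of how you might compute them from deployment records. The record shape and field names (`deployed`, `failed`, `restored`) are my own assumptions for illustration, not a standard schema; real teams would pull this from their CI/CD and incident tooling.

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records for a one-week period. Each entry
# notes when the change was deployed, whether it failed in production,
# and (if it failed) when service was restored.
deployments = [
    {"deployed": datetime(2020, 6, 1, 10, 0), "failed": False, "restored": None},
    {"deployed": datetime(2020, 6, 3, 15, 0), "failed": True,
     "restored": datetime(2020, 6, 3, 16, 30)},
    {"deployed": datetime(2020, 6, 5, 9, 0), "failed": False, "restored": None},
    {"deployed": datetime(2020, 6, 8, 14, 0), "failed": False, "restored": None},
]

def dora_metrics(deployments, period_days):
    """Compute simple DORA-style metrics over a reporting period."""
    failures = [d for d in deployments if d["failed"]]
    # Deployment frequency: deploys per day over the period.
    frequency = len(deployments) / period_days
    # Change failure rate: fraction of deploys that failed in production.
    failure_rate = len(failures) / len(deployments)
    # MTTR: mean hours from deploy to restoration, for failed deploys.
    recovery_hours = [
        (d["restored"] - d["deployed"]).total_seconds() / 3600 for d in failures
    ]
    mttr = mean(recovery_hours) if recovery_hours else 0.0
    return frequency, failure_rate, mttr

freq, cfr, mttr = dora_metrics(deployments, period_days=7)
print(f"{freq:.2f} deploys/day, {cfr:.0%} change failure rate, {mttr:.1f}h MTTR")
# prints: 0.57 deploys/day, 25% change failure rate, 1.5h MTTR
```

Tracked week over week, these numbers give you the trend line to compare against last year, which matters far more than any single absolute value.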