A meticulous QA release plan, test cases carefully designed down to the minutest detail, and QA engineers all geared up to execute the QA phase: what more could a QA lead ask for? The icing on the cake, of course, is when the four-month dedicated QA phase begins with almost nothing working in the product!
No matter how stringent the build acceptance criteria a QA team sets for the QA phase, and how religiously it adheres to them, QA cannot succeed by arriving only toward the end of a release cycle in which development has been honoring dynamic requirement changes throughout. Should dynamic requirement changes be constrained? Or should QA be an active participant throughout the development cycle? In software development, constraining requirement changes over a typical 8-12 month release cycle would be impractical, though not impossible. The dynamism of the market, responding to consumer demands, dictates that software release cycles be more agile and accommodating. That makes QA an integral part of development completeness rather than a dedicated phase toward the end (unlike the manufacturing industry, where QA does come as a dedicated final phase). Should we call it Agile QA? So what's unique about Agile QA? What should a QA engineer do differently here?
Agile QA, in short, means ensuring quality is built in as the product evolves incrementally through iterations. The goal is to make sure every increment and iteration actually produces shippable software, which in turn enables quick feedback cycles so that requested changes can keep being incorporated in each iteration. This first post in a series of blogs on Agile QA focuses on answering the question of the sufficiency of QA within the sprint cycle.
Within agile development, an interesting debate I have always encountered is the "sufficiency of QA within the sprint." At the completion of the sprint, agile advocates that "working software" be available. Is this working software ready for production at the completion of the sprint? That is the million-dollar question.
Three years of my earlier journey through agile (Scrum) development have given me good insight into this problem and a probable approach to resolving it. "Continuous integration," another typical agile term, prescribes the various levels of automation that need to be established to reach the ideal state of incrementally developing "production ready" software. However, for beginners on the agile journey, an automation suite is not necessarily available from day one to start reaping the benefits of faster QA cycles. So, while a conscious investment in building the automation suite is under way, a multi-tiered approach needs to be adopted to overcome the issue at hand.
In my view, the following three-tiered approach can fit into a transforming agile development story:
– Sprint Level Testing
– Development-complete Testing
– Release Testing
Sprint-level testing: Has to focus on code-level unit testing (e.g., JUnit), functional unit testing, integration testing, and functional regression testing.
Development-complete testing: Focused business functionality testing has to go in here. This includes simulating or recreating the way business users will use the product. Everything a product manager envisages the finished product doing has to be checked here. This can run for a week or more, but should not exceed two weeks.
Release testing: Focus on deployment, help text, release notes, and everything else that falls under last-minute checks before the actual release. This phase can be a quick, short one, typically lasting from three days to a week.
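To make the sprint-level tier concrete, here is a minimal sketch of the kind of code-level unit test that should land in the same sprint as the story it verifies. The class names (`DiscountCalculator` and its test harness) are hypothetical, invented purely for illustration; in a real codebase this would typically be a JUnit test class run by the build, but the sketch below uses a plain `main` method so it is self-contained.

```java
// Sketch of a sprint-level unit test for a hypothetical story
// "apply a percentage discount to a price". In practice this
// would be a JUnit test class executed on every build.
public class DiscountCalculatorTest {

    // Hypothetical production code delivered in the sprint.
    static class DiscountCalculator {
        // Returns the price after applying a percentage discount.
        double apply(double price, double percent) {
            return price - (price * percent / 100.0);
        }
    }

    public static void main(String[] args) {
        DiscountCalculator calc = new DiscountCalculator();
        // Happy path: 10% off 200.0 should be 180.0.
        check(calc.apply(200.0, 10.0) == 180.0, "10% discount");
        // Boundary: 0% discount leaves the price unchanged.
        check(calc.apply(99.5, 0.0) == 99.5, "zero discount");
        // Boundary: 100% discount reduces the price to zero.
        check(calc.apply(50.0, 100.0) == 0.0, "full discount");
        System.out.println("all sprint-level checks passed");
    }

    // Fails loudly, the way a JUnit assertion would.
    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("failed: " + name);
    }
}
```

The point is not the arithmetic but the timing: these checks exist, pass, and are part of the build before the sprint closes, so the increment really is "working software" rather than code awaiting a later QA phase.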
More insights into Agile QA, drawn from experience, will follow in the coming posts.