Agile testing engages testers in an agile development environment, encouraging proactive involvement in an integrated, team-oriented approach that expands their traditional remit. With a viewpoint that combines the perspective of the customer with strong technical skills, an agile tester can make a significant contribution throughout the entire lifecycle of a software project. Developing and performing acceptance tests iteratively complements and strengthens the development effort, helping to deliver improved quality.

Acceptance tests for agile testing are an eXtreme Programming idiom and encompass traditional validation and functional tests. Testers should work closely with the customer and business analysts to develop the acceptance tests in parallel with the formulation of requirements. This provides an implicit review of the requirements that helps visualise the application, and supplies early feedback on aspects such as usability. It also facilitates continuous input from the customer and business analysts to the specification of the acceptance tests, so that they produce the desired level of quality. The knowledge of and familiarity with the requirements accumulated while generating the acceptance tests will be useful during the design and modelling sessions. The testers can help interpret the requirements for the developers and clarify the customer's needs. They can also ensure that quality criteria and testability are considered in design decisions.

Test early

Defects revealed early in the development process can be fixed more easily and more cheaply. Developers use unit tests to verify that software works as intended. In an iterative development process, automated unit tests should be evolved in unison with the software. Each time a feature is added, it is tested and fixed, immediately stabilising the software. The next feature is then designed, implemented, tested, and fixed, and so on.
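As a sketch of this test-early rhythm, a developer might grow a small feature and its unit tests together, rerunning the tests on every change. The function and test names below are purely illustrative, using Python's built-in unittest module:

```python
import unittest


def apply_discount(price, percent):
    """Illustrative production code: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    """Unit tests written alongside the feature and rerun on every change."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_is_rejected(self):
        # A defect guard added when the feature was extended; the test
        # stabilises the behaviour before the next feature is started.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```

Because the tests live with the code, adding the next feature means extending both together, so the software stays stable as it grows.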
These automated unit tests double as regression tests, run whenever the specification or the implementation of the software changes. The software may be incomplete, but early in the development phase it is already usable and testable. This approach provides early feedback that reveals erroneous designs, previously unidentified requirements, and defects, and can be used to steer the project, driving changes in the priorities and details of the software to be implemented next.

Test often

Continuous integration should integrate software components at least daily. During this cycle, automated integration tests should be developed iteratively to match the software that has been integrated (this can require creating stubs and simulators to represent software not yet developed). A build should be performed at the end of each day, comprising only the successful integrations, i.e. those whose unit tests run with 100% success, and deployed to the integration environment, where the automated unit and integration tests are executed overnight. As the software matures during an iteration, the acceptance tests should be automated and progressively introduced into the overnight test execution (see the FIT Testing Framework). When an iteration's development is complete, the acceptance tests should be executed with the customer. All these tests accumulate over the iterations, so tests from previous iterations are executed as regression tests against each new build.

Test automation is key to agile testing, but repeatable tests cannot be achieved if the results have to be interpreted manually. Repeatable tests require a facility to set up and run the tests, check the results, and report them. Running these tests periodically provides a regular indication of progress to management and the customer.
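A minimal sketch of such a facility: the accumulated checks are run as one suite and summarised automatically, so no manual interpretation of results is needed. The test registry and check names below are hypothetical and not tied to any particular framework:

```python
def check_login():
    """Acceptance check automated in iteration 1; now a regression test."""
    return True


def check_search():
    """Acceptance check automated in iteration 2; now a regression test."""
    return True


def check_checkout():
    """Check added in the current iteration; a failure alerts the team."""
    return False


# Tests accumulate over the iterations and are all run against each build.
REGRESSION_SUITE = [check_login, check_search, check_checkout]


def run_suite(suite):
    """Run every accumulated check and summarise pass/fail for reporting."""
    results = {check.__name__: check() for check in suite}
    passed = sum(results.values())
    summary = f"{passed}/{len(suite)} tests passed"
    return results, summary
```

Running `run_suite(REGRESSION_SUITE)` after an overnight build yields a summary such as "2/3 tests passed" plus per-test results, which can be posted for the whole team to see.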
Posting successful results boosts confidence and morale, whereas posting failed results alerts the team to issues, which can then be addressed more quickly.

Test enough

Due to practical and economic constraints, complete testing of a software system is usually not feasible. A more realistic goal is to develop enough test cases to provide reasonable assurance that the software system works as it is supposed to. The amount of testing required to provide reasonable assurance should be determined relative to both the short- and long-term goals of the project, and should be based on the concept of coverage. Coverage is a measure of how completely a set of test cases exercises the capabilities of a software system. It can be measured in terms of how many requirements are tested, or in terms of how many lines of code are executed. Testing resources should be invested wisely, employing automation appropriately. They should be directed at the software functionality that, when tested, provides the greatest return. This targeting should be based on risk analysis, and should employ techniques that maximise the re-use of test cases and select input criteria by statistical sampling. Automated unit testing combined with continuous integration can help prevent high-level testing resources from being wasted when unit- and integration-level bugs permeate system testing.
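Requirements coverage, the first measure mentioned above, can be sketched as the fraction of requirements exercised by at least one test case. The requirement IDs and test names here are made up for illustration:

```python
def requirements_coverage(requirements, test_cases):
    """Return the fraction of requirements exercised by at least one test.

    `requirements` is a set of requirement IDs; `test_cases` maps each
    test name to the set of requirement IDs it exercises.
    """
    covered = set()
    for exercised in test_cases.values():
        covered |= exercised & set(requirements)
    return len(covered) / len(requirements)


# Hypothetical project data: four requirements, two automated tests.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
test_cases = {
    "test_login": {"REQ-1"},
    "test_search": {"REQ-2", "REQ-3"},
}

coverage = requirements_coverage(requirements, test_cases)  # 3 of 4 covered
```

A result of 0.75 here would flag REQ-4 as untested; whether that gap matters is exactly the risk-analysis judgement the text describes.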
Copyright © 2008 AntonConsulting