Test implementation

Test implementation involves developing test cases and test scripts, designing and acquiring test data, preparing the test environment, and selecting and implementing the tools that will be used to facilitate this process.

Test cases

The primary objective of test case design is to derive a set of tests that have the highest likelihood of discovering defects in the software. Test cases are designed based on the analysis of requirements, use cases, and technical specifications, and they should be developed in parallel with the software development effort.

A test case describes a set of actions to be performed and the results that are expected. It should target specific functionality or exercise a legitimate path through a use case, and it should also cover invalid user actions and illegal inputs that are not necessarily listed in the use case.

A test case described by the IEEE 829-1998 standard contains the following sections:

  1. Test case specification identifier comprising the date, number and version
  2. Test items, e.g. requirements, design specifications, etc.
  3. Input specifications or actions
  4. Output specifications or expected results
  5. Environment needs, e.g. test harnesses, tools, specific test data, etc.
  6. Special procedural requirements
  7. Inter-case dependencies, e.g. using one test case to set up the environment for another.
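
As a rough illustration, these sections could be captured in a simple record structure. The sketch below uses a Python dataclass; the field names paraphrase the IEEE 829-1998 sections, and the sample values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """Test case record loosely following the IEEE 829-1998 sections."""
    identifier: str                       # 1. date, number and version
    test_items: List[str]                 # 2. requirements, design specs, etc.
    inputs: List[str]                     # 3. input specifications or actions
    expected_outputs: List[str]           # 4. output specs or expected results
    environment_needs: List[str] = field(default_factory=list)   # 5.
    special_procedures: List[str] = field(default_factory=list)  # 6.
    depends_on: List[str] = field(default_factory=list)          # 7.

# Hypothetical example: a login test that relies on a set-up case.
tc = TestCase(
    identifier="2024-01-15/TC-042/v1",
    test_items=["REQ-AUTH-003"],
    inputs=["Enter a valid username", "Enter an invalid password", "Submit"],
    expected_outputs=["Login is rejected", "'Invalid credentials' message shown"],
    depends_on=["TC-041 (creates the test user account)"],
)
```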

Ultimately, how a test case is described depends on several factors, e.g. the number of test cases, the frequency with which they change, the level of automation employed, the skill of the testers, the selected testing methodology, staff turnover, and risk.

Test data

To perform a valid test the input conditions and the expected results must be known. The test data defines these input conditions and is a constituent factor in achieving a controlled and predictable test. Predictability is important for manual testing, and is absolutely necessary for automated testing - without predictable, repeatable data there is no practical means of re-using automated tests across iterations.
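
One common way to achieve this predictability is to generate test data deterministically from a fixed seed, so that every run starts from identical inputs. A minimal sketch using only the Python standard library; the customer fields are invented for illustration.

```python
import random

def generate_customers(count, seed=42):
    """Generate the same customer records on every run for a given seed."""
    rng = random.Random(seed)  # local RNG, independent of global random state
    first_names = ["Alice", "Bob", "Carol", "Dave"]
    return [
        {
            "id": 1000 + i,
            "name": rng.choice(first_names),
            "credit_limit": rng.randrange(500, 5000, 100),
        }
        for i in range(count)
    ]

# Same seed, same data: an automated test can rely on these exact values.
assert generate_customers(3) == generate_customers(3)
```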

Software systems can contain complex relationships between interfaces and cooperating sub-systems that capture their data in different formats and media. Consequently, test data will need to maintain cohesion across different packages - files, messages, transactions, and records - that directly correspond to these formats and media.
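
Cohesion is easier to maintain when each logical record exists exactly once and the per-medium packages are derived from it. The sketch below shows one canonical order record rendered as a flat file, a message, and a database row; the schema is hypothetical.

```python
import csv
import io
import json

# Single canonical record; every package below is derived from it,
# so the file, the message and the row cannot drift apart.
order = {"order_id": "ORD-7", "customer_id": 1002, "amount": 150.0}

# Flat-file representation, e.g. for a batch interface.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=order.keys())
writer.writeheader()
writer.writerow(order)
csv_package = buffer.getvalue()

# Message representation, e.g. a queue or web-service payload.
json_package = json.dumps({"type": "OrderCreated", "payload": order})

# Record representation, e.g. parameters for an SQL INSERT.
db_row = (order["order_id"], order["customer_id"], order["amount"])
```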

Creating balanced test data that accurately represents reality can be accomplished using a combination of construction and acquisition techniques. However, data that a system generates itself, through interfaces or batch processes rather than through online interaction, can be difficult to create. Collecting real data can be automated, but depending on the source, data protection obligations must be observed. Test data is also fluid and requires maintenance, e.g. dates and times will need to be adjusted so that the test data moves forward and can be re-used.
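
Date maintenance in particular lends itself to automation: rather than storing absolute dates, the data can carry offsets from a reference date and be regenerated before each test cycle. A minimal sketch, assuming a hypothetical 'due_in_days' field.

```python
from datetime import date, timedelta

def rebase_dates(records, today=None):
    """Turn relative day offsets into concrete dates anchored on 'today',
    so the same data set stays valid in every test iteration."""
    anchor = today or date.today()
    rebased = []
    for record in records:
        row = dict(record)               # do not mutate the source data
        offset = row.pop("due_in_days")  # hypothetical relative field
        row["due_date"] = anchor + timedelta(days=offset)
        rebased.append(row)
    return rebased

# -1 means one day overdue, 30 means due next month - always relative to now.
data = [{"invoice": "INV-1", "due_in_days": -1},
        {"invoice": "INV-2", "due_in_days": 30}]
print(rebase_dates(data))
```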

Test data contributes directly to the quality of the tests performed, and as such is an integral part of the overall test process. The time and effort required to construct the test data so that it can be easily enhanced and extended is an investment.

Automated testing

Automation enables test execution, result logging, and the comparison of actual with expected results to be performed with minimal human intervention.

Not all tests can or should be automated. Tests should be evaluated to determine their suitability for automation. Candidates for automation include tests that exhibit repetitive steps, e.g. performance and load tests; tests that are executed many times, e.g. 'smoke' (build verification) tests and regression tests; and tests that are prohibitively expensive to perform manually. The timing of automation must also be considered: automating an application that is still too unstable leads to excessive maintenance of test scripts.
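
As an illustration of the 'smoke' category, a build verification suite is typically a handful of fast, fully automated checks that the deployed build is minimally alive. A sketch using pytest and the requests library; the URL and endpoints are placeholders, not a real service.

```python
import pytest
import requests

BASE_URL = "http://localhost:8080"  # placeholder for the build under test

@pytest.mark.smoke
def test_service_is_up():
    # The most basic build verification: the service answers at all.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200

@pytest.mark.smoke
def test_login_page_renders():
    # A key page loads - enough to decide whether deeper testing is worthwhile.
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200
    assert "Login" in response.text
```

Such a suite would run on every build (e.g. pytest -m smoke) before any longer regression suite is attempted.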

Test automation can increase coverage and help to improve quality, but achieving this requires investment in time and resources. Automation must be built on a foundation comprising a defined and generally predictable set of tests. Effective manual test cases should exist before making them efficient through automation. Developing scripts that are reliable and maintainable is just as challenging as developing the system being tested, and keeping scripts synchronised with the application will require continuous commitment and effort.
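
One widely used way of keeping scripts synchronised with a changing application is to isolate all knowledge of the interface in one place - the 'page object' pattern - so that a UI change touches one class rather than every test. A schematic sketch; the driver here is a hypothetical stand-in for a real browser-automation API.

```python
class LoginPage:
    """Single place that knows how the login screen is built.
    If the UI changes, only this class needs updating."""
    USERNAME_FIELD = "username"   # element identifiers kept in one spot
    PASSWORD_FIELD = "password"
    SUBMIT_BUTTON = "submit"

    def __init__(self, driver):
        self.driver = driver      # hypothetical automation driver

    def login(self, username, password):
        self.driver.type(self.USERNAME_FIELD, username)
        self.driver.type(self.PASSWORD_FIELD, password)
        self.driver.click(self.SUBMIT_BUTTON)

def test_rejects_bad_password(driver):
    # The test reads as intent, not as a sequence of element look-ups.
    LoginPage(driver).login("alice", "wrong-password")
    assert driver.text_of("error") == "Invalid credentials"
```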

Test environment

A test environment provides a sterile habitat within which testing is performed and includes hardware, software, interfaces, facilities, security access, and other requirements that are relevant to the testing effort. A fully functional and integrated environment facilitates end-to-end testing, quality improvement and pre-production certification for the application.

In a tiered testing strategy, different physical test environments can exist, each dedicated to a specific level of testing. As software is promoted from one level of testing to the next, the configuration of the test environment should approach the real-world or production environment.
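
A tiered set-up can be made explicit in configuration. The sketch below is purely illustrative: hypothetical host names and settings for three tiers, with each tier configured closer to production than the one before it.

```python
# Hypothetical environment tiers; each level is configured closer
# to the production environment than the one before it.
ENVIRONMENTS = {
    "component": {
        "app_host": "dev-test01",
        "database": "sqlite-in-memory",   # lightweight stand-in
        "external_services": "stubbed",
    },
    "system": {
        "app_host": "sys-test01",
        "database": "postgres-test",      # real DBMS, scaled-down data
        "external_services": "test instances",
    },
    "acceptance": {
        "app_host": "uat-cluster",
        "database": "postgres-prodlike",  # production-sized, masked data
        "external_services": "production-equivalent",
    },
}

def config_for(level):
    """Resolve the environment configuration for a given test level."""
    return ENVIRONMENTS[level]
```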
