Beyond ISEB ISTQB-BCS: User Acceptance Testing (UAT) for Software Systems Part 2
The test environment must be as similar to the production environment as possible. This is simple to accomplish when the application or system is installed on a user's computer, but a little more difficult in a client-server or hosted environment. The environment should have the same software, hardware, and network configuration as production, and it should contain all the common or shared data from the production environment; it may be possible to simply port this data from QA to UAT. The UAT environment should also have sufficient privately owned data to enable testing. Where the new system replaces an existing one, you can select one customer or one product to test with and port that data from production; where a manual system is being replaced, create the data manually. Another approach is to simply port all the data in the production system into the UAT environment. This will require someone to groom the data so that it is compatible with the new system's data dictionary (assuming data is handled by a database).
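As an illustration of the grooming step, here is a minimal sketch that maps legacy records onto a new data dictionary. The field names (`cust_nm`, `status`, `country`) and the mapping rules are invented for the example, not taken from any real system:

```python
# Hypothetical sketch: grooming production records so they fit the new
# system's data dictionary before loading them into the UAT environment.

def groom_record(old_record):
    """Map a legacy record onto the new data dictionary."""
    return {
        # The new schema renames 'cust_nm' to 'customer_name' and
        # normalizes whitespace and capitalization.
        "customer_name": old_record["cust_nm"].strip().title(),
        # Legacy one-letter status codes collapse into the new enumeration.
        "status": {"A": "active", "I": "inactive"}.get(old_record["status"], "unknown"),
        # The new dictionary requires a country; default it when the
        # legacy record has none.
        "country": old_record.get("country", "US"),
    }

legacy = {"cust_nm": "  acme widgets ", "status": "A"}
print(groom_record(legacy))
```

In practice this kind of script is run once per port, and the "unknown" bucket tells you which legacy values the mapping failed to anticipate.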
The hardware installed in the UAT environment should be as close to the production environment as possible so that users experience the same capacity and performance they will see in production. Saving money here will guarantee hard discussions about system performance during UAT. One way to economize is to use a single environment, duplicating production, for both QA and UAT; in that case all QA testing must be finished before UAT can start.
Testers must be set up with user accounts that reflect their accounts in the production system, including their passwords and privileges. The testers will also need access to a bug reporting system. This won't be a problem when the testers belong to your organization; it may simply require an administrator to add a few privileges to the testers' accounts, or to create new accounts for them. Giving access to testers from an external organization may be more difficult; for starters, they will almost certainly have to get through your organization's firewall. The simplest way to solve this problem is to use a wiki for bug reporting. This approach allows anyone with access to the internet to be added to the project's tester community.
Beta testing will require you to set up a bug reporting system that allows users to report bugs from their own environments. This can be done by providing access to your bug reporting system via a web portal (provided you host a web site), or by using a publicly accessible bug reporting system such as BUGtrack. This approach can also work with an external customer's organization.
Your development testing and QA testing should have eliminated software bugs from the system by this time. The bugs most likely to be reported during UAT fall into three categories:
- What I like to refer to as requirements interpretation issues. These bugs result from the developer/QA tester having one view of the function that meets a requirement and the UA tester having another.
- "Cosmetic" bugs - the user dislikes the look and feel of a screen, or dislikes the layout of the screen.
- Minor bugs such as the text in an error message, e.g. the system is developed in the USA and tested in England and the error message displays "behavior" rather than "behaviour".
These are the most difficult bugs to address, particularly those that fall into the first two categories.
Developers handling bug reports should be able to interpret the information provided by the bug report and determine the root cause of the bug. They should be particularly vigilant for bugs that fall into the first two categories. When the system performs in a fashion that differs from what the tester expected, the cause could be a system failure, a user improperly trained in the use of the tool, a user improperly trained in their job function, or system performance that doesn't meet the tester's needs. The administrator or developer handling the bug reports should address those that describe a system failure, or resolve the rest by stating that the system is performing as intended and referring them back to the tester.
It will fall on your shoulders to deal with users who have reported a bug that the developers have assigned a "system performs as intended" status. The user clearly perceives the performance as a failure, or they wouldn't have taken the trouble to report a bug. Bugs that are caused by a user improperly trained in the use of the new system can be resolved by demonstrating the correct use. Bugs that are caused by the system failing to meet the users' needs are more difficult to resolve. The problem here may be that the user needs were improperly captured during planning. Verify this with other users in the user community who perform the same function as the author of the bug report. Users are allowed to have differences of opinion and where a majority of users favour the approach the developer has chosen, the bug report should be closed with the "system performs as intended" solution.
Bug reports of a system failure that reveal the requirement was improperly captured in the first place should trigger a change request, if the user community cannot live with the system as is. Failures of this nature can be avoided by employing proper requirements gathering techniques, so too many errors of this nature should trigger an analysis of the techniques used on your project. The tester may protest that they shouldn't have to author a change request because the system should perform as they stated. Explain the need for the change request: any change in requirements must be supported by a change request and the project budget altered accordingly. You will have a finite budget for re-work, and you won't be able to stay within it if you spend it on design changes. This may be an especially difficult conversation to have with an external customer, so make sure the requirement really was incorrectly stated initially and then refer the dispute to your executive sponsor, or to the dispute resolution mechanism.
Bug reports that address issues the tester has with screen design or screen layout should be resolved in the same fashion as reports of the system not behaving as expected. The bug should trigger a fix if the developer failed to properly code the requirement; otherwise a change request is required to change the requirement.
Fixes to the system (and approved changes) should be delivered to the UAT environment in a controlled fashion. This will require a new build to deliver the updated software. Builds should be scheduled at regular intervals throughout the UAT phase, and each fix or change should be assigned to the next scheduled build. The only exception to this rule is a fix necessary to enable further testing. Bugs that prevent any further testing of the system should never be discovered during the UAT phase, but if one is, an emergency build must be scheduled. UAT data may have to be reset when a new build is deployed; this should be done so that testing is disrupted as little as possible. Some data may have to be flushed from the system where it has been corrupted by a bug.
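A published build calendar makes the "next build" rule easy to apply. The sketch below simply slots each fix into the first scheduled build on or after the day it is ready; the dates and the escalation rule are invented for the example:

```python
from datetime import date

def next_build(ready_on, builds):
    """Return the first scheduled build date on or after the day a fix is ready."""
    for build_day in sorted(builds):
        if build_day >= ready_on:
            return build_day
    return None  # the fix misses the UAT window; escalate it instead

# Weekly builds through the UAT phase (invented dates).
builds = [date(2024, 6, 7), date(2024, 6, 14), date(2024, 6, 21)]
print(next_build(date(2024, 6, 10), builds))  # -> 2024-06-14
```

An emergency build for a blocking bug is the one case that bypasses this calendar.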
Smoke tests are simple tests to verify that nothing major has been missed when a system has been promoted from the UAT environment to the production environment. The smoke test should be part of your cutover plan, not your test plan; I'm covering it here because it is a form of testing that should be a part of your project. The term smoke test derives from the test that is sometimes performed in the plumbing industry where smoke is introduced to a waste or drain pipe under pressure to find leaks in the pipe. Smoke testing can be done at any point during development but is most commonly done at the point when a new system is promoted to production. Smoke testing is most important when an emergency fix is introduced to production because of the limited testing done to the system. Testing is limited due to the limited scope of the fix, the limited time allowed to test and promote the fix, and the limited budget for testing.
The cutover plan for your project should include a set of tests to be performed during the production cutover. The tests should include the most frequently performed functions supported by the system, such as logging in, logging out, and viewing data (the most frequently used views, the most frequently viewed data). The amount of testing will be constrained by the time available, so ensure that only the most frequently performed functions are exercised.
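As a sketch of how such a cutover checklist might be driven, assume each smoke test is written as a plain function; the harness runs them all and treats a crash as a failure. The check names and bodies below are stubs, not real system calls:

```python
def check_login():
    return True  # stub: replace with a real login attempt against production

def check_view_orders():
    return True  # stub: replace with fetching the most frequently viewed data

# The checklist mirrors the cutover plan: most frequently used functions only.
SMOKE_CHECKS = [("login", check_login), ("view orders", check_view_orders)]

def run_smoke_tests(checks):
    """Run each named check; an exception counts as a failed check."""
    results = {}
    for name, check in checks:
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False
    return results

print(run_smoke_tests(SMOKE_CHECKS))
```

Keeping the checks as named entries makes the cutover report trivial: the dictionary of results is the pass/fail record for the go/no-go decision.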
You should sleep soundly after your production cutover, provided you have done a thorough job of development testing, QA testing, and UA testing. Should the phone ring in the middle of the night, don't panic. The most likely cause of these calls is a user who doesn't see what they expect and panics. The fix is an explanation of how the new system works, and then back to sleep. Even better, have an SME who is familiar with the new system on call to do the hand-holding.
If you haven't done a thorough job testing the new system, you should at least have done a thorough job of planning and practicing a rollback strategy so the major bug can be fixed. The rollback strategy should be a part of your production cutover plan. In the meantime you'll see firsthand why bugs reported in production are the most expensive of all to fix, and why testing costs are justified.
Author: Dave Nielsen