Reporting takes place in accordance with the reporting structure described in the test plan. The progress report contains data on the most recent reporting period and cumulative data on the entire test process.
Besides figures, the report should also provide a textual explanation of, and advice on, the results, progress, risks and any problem areas. This is easily forgotten in reports that are generated from test-management tools. Explanation and advice are essential for giving quick and reliable insight into the figures; indeed, they are the most important product of testing. While the explanation can and should be given verbally, it should certainly also be included in the written report. This forces the test manager to think carefully, makes the advice stronger, reaches a wider audience and helps the retrospective evaluation of the process.
Although the terms ‘interim report’ or ‘progress report’ may suggest that these are less important than the final report, the opposite is in fact true. The progress report supplies early information and advice with which the recipients (such as the client, the project manager and others) can often make timely adjustments to keep the overall process on track. The final report is more of a retrospective evaluation that mainly benefits subsequent test processes and projects.
In outline, a progress report has the following content (based on the BDTM method, with the four aspects Result, Risks, Time and Costs). In practice, the list of contents may follow a different sequence, and subjects may be combined or even omitted, depending on the report’s target group.
These subjects are further explained below.
The following is shown per characteristic/object part:
As the end of the test period approaches, more attention is paid in the progress report to the consequences of open defects. At the beginning it is less useful to include these in the report, since the defects are still expected to be solved. The consequences should, however, always be included in the defect report itself.
Based on the above, the status per test goal (user requirement, business process, critical success factor, etc.) is reported. Sometimes a test goal can be linked directly to a number of characteristics/object parts and to their test status; sometimes the status per characteristic/object part is not usable enough and the test manager still has to determine the test status per test goal. The risk tables from the Product Risk Analysis make this link possible.
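As an illustration of how such a link can be exploited, below is a minimal sketch in Python with hypothetical statuses, test goals and goal-to-part links (the names part_status and goal_parts are assumptions, standing in for the risk tables of the Product Risk Analysis), using a simple worst-case aggregation rule.

# Minimal sketch (hypothetical data and aggregation rule): derive a test status
# per test goal from the statuses of the characteristics/object parts linked to it.

# Status per characteristic/object part, as reported this period (hypothetical).
part_status = {
    "functionality-orders": "passed",
    "functionality-invoicing": "failed",
    "performance-orders": "not tested",
    "usability-portal": "passed",
}

# Link between test goals and characteristics/object parts (hypothetical,
# standing in for the risk tables of the Product Risk Analysis).
goal_parts = {
    "UR-01 Process an order end-to-end": ["functionality-orders", "performance-orders"],
    "UR-02 Send correct invoices": ["functionality-invoicing"],
    "CSF-03 Portal usable by customers": ["usability-portal"],
}

def goal_status(parts):
    # Worst-case rule: one failed part fails the goal; one untested part leaves
    # the goal undetermined; otherwise the goal passes.
    statuses = [part_status[p] for p in parts]
    if "failed" in statuses:
        return "failed"
    if "not tested" in statuses:
        return "not yet determined"
    return "passed"

for goal, parts in goal_parts.items():
    print(f"{goal}: {goal_status(parts)}")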
Relevant trends and related recommendations can be reported here.
Below are some overviews that will reveal whether certain trends are taking place:
To give the trend significance for the stakeholders, it is advisable to use graphics that make the trend visible. This is not as easy as it seems: producing a clear and legible graphic is difficult. A few tips (quoted freely from [Tufte, 2001]):
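By way of illustration, below is a minimal sketch of such a trend graphic in Python, using matplotlib and hypothetical figures (the period labels and defect counts are assumptions): defects raised versus defects solved per reporting period, with non-data decoration kept to a minimum.

# Minimal sketch (hypothetical figures): a plain trend graphic of defects
# raised versus defects solved per reporting period.
import matplotlib.pyplot as plt

periods = ["wk 38", "wk 39", "wk 40", "wk 41", "wk 42"]
raised = [12, 18, 25, 22, 15]   # new defects found per period (hypothetical)
solved = [5, 10, 19, 24, 21]    # defects reworked and retested per period (hypothetical)

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(periods, raised, marker="o", label="raised")
ax.plot(periods, solved, marker="o", label="solved")
ax.set_ylabel("defects per period")
ax.set_title("Defect trend per reporting period")
ax.legend(frameon=False)
for side in ("top", "right"):
    ax.spines[side].set_visible(False)  # remove non-data ink
fig.tight_layout()
plt.show()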
In this part of the report, the stakeholders are given insight into the degree to which the coverage of the various product risks has changed, as well as into any process risks.
In the test strategy in the test plan, it is determined whether and to what degree product risks will be covered by testing. During the test process, deviations may occur: a risk turns out to have been estimated differently and/or the test coverage needs to be adjusted. The adjustments over the reporting period, with their consequences, are reported in this part. Here, too, the translation is made to the test goals: what impact do the changed risks have on achieving those goals?
Regarding the progress of the test process, the points below are significant.
At the level of phases and/or main products, the following could be reported:
Products could be the test plan, test scripts, test-execution files and reports.
If the test manager is responsible for the budget, he will also include in the progress report information on whether the test process will be completed within budget.
Here, the activities to be carried out in the coming period are reported.
This refers to non-productive hours of the testers. If the test process environment does not meet certain preconditions, this results in inefficiency and loss of hours. Examples are a non-functioning test infrastructure, many or long-outstanding test-obstructing defects, or a lack of support. The hours lost, and their causes, are reported here.
As with trends in the status of the test object, trends and recommendations in connection with the progress of the testing should also be reported. The central question here is whether the agreed milestones are (or appear to be) feasible.
One of the trends that can be watched is the average time required for the reworking of a defect. If this increases, it may be a signal that the backlog of work is growing sharply. The percentage of wrongly reworked defects can also be observed.
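As an illustration, below is a minimal sketch in Python with hypothetical defect records (the periods, hours and retest outcomes are assumptions) that computes both measures per reporting period.

# Minimal sketch (hypothetical defect records): average rework time and the
# percentage of wrongly reworked defects per reporting period.
from collections import defaultdict

# Each record: (reporting period, rework time in hours, rework rejected on retest?)
defects = [
    ("wk 40", 2.0, False), ("wk 40", 3.5, False), ("wk 40", 1.5, True),
    ("wk 41", 4.0, False), ("wk 41", 6.5, True),  ("wk 41", 5.0, True),
]

per_period = defaultdict(list)
for period, hours, rejected in defects:
    per_period[period].append((hours, rejected))

for period, records in sorted(per_period.items()):
    avg_hours = sum(h for h, _ in records) / len(records)
    pct_wrong = 100 * sum(1 for _, r in records if r) / len(records)
    print(f"{period}: average rework {avg_hours:.1f} h, wrongly reworked {pct_wrong:.0f}%")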
In this section of the report, the test manager points out any problem areas or points for discussion that jeopardize completion of the test assignment within the set limits of time and costs. For example:
Besides the various problem areas, their consequences and possible measures are shown. Here, too, the test manager makes the translation into the test goals.
This part shows the agreements made in the current period between the test team and other parties that are relevant to the recipients of the report.
If required, this part of the report can include information on the quality of the test process. The following questions play a part here:
The three quality aspects of the test process.
A point of focus here is the general problem with metrics: how to draw the right conclusions from the figures and how to avoid comparing apples with oranges. See also Chapter 13, “Metrics”.
The difficulty with the question of whether the testing is effective is that this can usually only be established in retrospect. The effectiveness issue can be split into two parts:
There are various indicators that can be included in the report:
The following are possible indicators of this:
By comparing these figures with an established standard, a picture is created of the efficiency of the test process.
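As an illustration of such a comparison, below is a minimal sketch in Python with hypothetical indicators and norm values (the indicator names and figures are assumptions) that reports the deviation of each figure from the agreed standard.

# Minimal sketch (hypothetical indicators and norms): compare this period's
# efficiency figures with an agreed standard and report the deviation.
indicators = {
    "test cases executed per day": 14.0,
    "defects found per test hour": 0.35,
    "hours lost (% of total)": 9.0,
}
norms = {
    "test cases executed per day": 18.0,
    "defects found per test hour": 0.30,
    "hours lost (% of total)": 5.0,
}

for name, value in indicators.items():
    norm = norms[name]
    deviation = 100 * (value - norm) / norm
    print(f"{name}: {value} (norm {norm}, deviation {deviation:+.0f}%)")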
This aspect is difficult to communicate through indicators. What the test manager can say in the report about this is whether and how, in the latest period, it was verified that the test team was working as agreed. The verification can focus on the test products or the processes and can be based on the planned quality measures, on monitoring, or on a random check at the overall level. The test manager should make a sound risk estimate of what checking would be useful. In particular, test levels that are assigned to inexperienced test managers or that have been outsourced are eligible for verification.
Below is an example of a dashboard, enabling the most important information to be seen at a glance.
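As an illustration (not the dashboard from the example itself), below is a minimal sketch in Python with hypothetical figures that prints a one-glance summary per BDTM aspect; the aspect names follow the Result, Risks, Time and Costs structure mentioned earlier, while the individual figures are assumptions.

# Minimal sketch (hypothetical figures): a text dashboard with the headline
# figures per BDTM aspect, readable at a glance.
dashboard = {
    "Result": {"test cases passed": "78%", "blocking defects open": 2},
    "Risks":  {"high risks fully covered": "9 of 12", "coverage adjustments": 1},
    "Time":   {"progress vs plan": "-3 days", "hours lost this period": 14},
    "Costs":  {"budget spent": "64%", "forecast at completion": "102%"},
}

for aspect, figures in dashboard.items():
    line = ", ".join(f"{name}: {value}" for name, value in figures.items())
    print(f"{aspect:7s} | {line}")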
Later in the report, these points are worked out in detail in overviews with notes. Examples of overviews (without notes):
Quality of test object – defects
Quality of test object – subsystem x causes
Progress