Identify iteration target items
Purpose:
|
To gain an initial understanding of the specific objectives behind the iteration plan.
|
Examine the iteration plan and identify the specific items that will govern it, along with the key deliverables by
which execution of the plan will be measured. Key elements you should examine include: Risk lists, Change Request
lists, Requirements sets, Use Case lists, UML Models, etc.
It's useful to supplement this examination by attending iteration kickoff meetings. If these aren't already planned,
organize one for the test team, inviting key management and software development resources (e.g., project manager,
software architect, development team leads).
|
Gather and examine related information
Purpose:
|
To gain a more detailed understanding of the scope and specific deliverables of the iteration
plan.
|
Having examined the iteration plan, look initially for tangible, clearly defined elements that would be good
candidates for assessment. Examine the details behind the work to be done, including both "new work" and Change
Requests. Study the risks that will be addressed by the plan to understand clearly what the potential impact of each
risk is and what must be done to address it (mitigate, transfer, eliminate, etc.).
|
Identify candidate motivators
Purpose:
|
To outline the test motivators that are candidates for this iteration.
|
Using the understanding you've gained of the iteration plan, identify potential sources of things that will motivate
the test effort. Motivation may come from any number of sources: an individual work product, a set of work
products, an event or activity, or the absence of any of these things. Sources might include the Risk List, Change
Requests, the Requirements Set, Use Cases, UML Models, etc.
For each source, examine the detail for potential motivators. If you cannot find much detail about a motivation
source, or you are unfamiliar with it, it may be useful to discuss the items with analyst and management staff,
usually starting with the project manager or lead system analyst.
As you examine the information and discuss it with the relevant staff, enumerate a list of candidate test motivators.
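As an illustration only, the Python sketch below shows one way a test team might record candidate motivators as they are enumerated; the field names, identifiers, and example sources are assumptions rather than a prescribed RUP schema.

    # Illustrative record structure for candidate test motivators
    # (assumed field names; not a prescribed RUP schema).
    from dataclasses import dataclass, field

    @dataclass
    class CandidateMotivator:
        identifier: str        # e.g. "MOT-001" (hypothetical numbering)
        source: str            # the motivation source, e.g. "Risk List", "Change Request"
        source_item: str       # the specific item the motivator was found in
        description: str       # why this item might motivate the test effort
        notes: list = field(default_factory=list)  # points raised when discussing with analysts

    candidates = [
        CandidateMotivator("MOT-001", "Risk List",
                           "RSK-012: payment gateway timeout",
                           "Architectural risk the iteration plan commits to mitigate"),
        CandidateMotivator("MOT-002", "Change Request",
                           "CR-045: revised login workflow",
                           "Changed behavior needs verification and regression coverage"),
    ]

Keeping the source item alongside each candidate makes the later quality-risk and traceability steps easier.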
|
Determine quality risks
Purpose:
|
To determine what quality risks are most relevant to this iteration.
|
Using the list of candidate test motivators, consider each motivator in terms of its potential for quality risks. This
will help you to better understand the relative importance of each candidate, and may expose other candidate motivators
that are missing from the list.
There are many different dimensions of quality risk, and a single motivator may highlight the potential for risk in
multiple categories. Highlight the potential quality risks against each candidate motivator and indicate both the
likelihood of the risk being encountered and the impact if the risk does occur.
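As one possible illustration, the following Python sketch scores each candidate motivator's quality risks on an assumed 1-to-5 likelihood and impact scale and combines them into an exposure figure; the scale, category names, and identifiers are assumptions, so substitute whatever scheme your test plan prescribes.

    # Illustrative quality-risk scoring: likelihood and impact on an assumed
    # 1-5 scale, combined into an exposure score (likelihood * impact).
    QUALITY_RISK_CATEGORIES = ["functionality", "reliability", "performance",
                               "usability", "security"]

    def risk_exposure(likelihood, impact):
        """Combine likelihood (1-5) and impact (1-5) into a single exposure score."""
        if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
            raise ValueError("likelihood and impact must be on the 1-5 scale")
        return likelihood * impact

    # A single motivator may carry risk in several quality categories.
    motivator_risks = {
        "MOT-001": {"reliability": risk_exposure(likelihood=4, impact=5),
                    "performance": risk_exposure(likelihood=3, impact=4)},
        "MOT-002": {"functionality": risk_exposure(likelihood=2, impact=3)},
    }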
|
Define motivator list
Purpose:
|
To define the specific test motivators that will be the focus for this iteration.
|
Using the list of candidate motivators and their quality risk information, determine the relative importance of the
motivators. Determine which motivators can be addressed in the current iteration (you may want to retain the list of
remaining candidates for subsequent iterations).
Define the motivator list, documenting it as appropriate. This may be as part of the iteration test plan, in a database
or spreadsheet, or as a list contained within some other work product. It is useful to briefly describe why each
motivator is important and what aspects of quality risk it will help to address.
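Since a spreadsheet is one acceptable home for the list, the sketch below ranks candidates by total risk exposure and writes the selected ones, with a brief rationale, to a CSV file; the exposure threshold, file name, and sample data are illustrative assumptions.

    # Illustrative prioritization of candidate motivators by total risk exposure,
    # written out to a CSV file (one possible "spreadsheet" form of the list).
    # The exposure threshold, file name, and sample scores are assumptions.
    import csv

    motivator_risks = {
        "MOT-001": {"reliability": 20, "performance": 12},  # exposure scores from the previous step
        "MOT-002": {"functionality": 6},
    }

    def total_exposure(risks):
        return sum(risks.values())

    ranked = sorted(motivator_risks.items(), key=lambda item: total_exposure(item[1]), reverse=True)
    selected = [(mot, risks) for mot, risks in ranked if total_exposure(risks) >= 10]
    deferred = [mot for mot, risks in ranked if total_exposure(risks) < 10]  # retain for later iterations

    with open("iteration_test_motivators.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["motivator", "quality risks addressed", "total exposure", "why it matters"])
        for mot, risks in selected:
            writer.writerow([mot, "; ".join(sorted(risks)), total_exposure(risks),
                             "brief rationale recorded here"])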
|
Maintain traceability relationships
Purpose:
|
To enable impact analysis and assessment reporting to be performed on the traced items.
|
Using the Traceability requirements outlined in the Test Plan, update the traceability relationships as required.
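A minimal illustration of such relationships follows, assuming the Test Plan asks for traces from each motivator back to its originating items and forward to the test work derived from it; all identifiers and link names are hypothetical.

    # Illustrative traceability mapping: each motivator traces back to the items
    # that gave rise to it and forward to the test work that will address it.
    traceability = {
        "MOT-001": {"traces_from": ["RSK-012"],            # originating risk / CR / requirement
                    "traces_to": ["TC-0301", "TC-0302"]},  # test cases or test ideas derived from it
        "MOT-002": {"traces_from": ["CR-045", "UC-09"],
                    "traces_to": ["TC-0310"]},
    }

    def impacted_motivators(changed_item):
        """Simple impact analysis: which motivators (and hence tests) trace from a changed item?"""
        return [mot for mot, links in traceability.items()
                if changed_item in links["traces_from"]]

    print(impacted_motivators("RSK-012"))  # -> ['MOT-001']

Keeping these links current is what makes impact analysis and assessment reporting possible later in the iteration.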
|
Evaluate and verify your results
Purpose:
|
To verify that the task has been completed appropriately and that the resulting work products are
acceptable.
|
Now that you have completed the work, it is beneficial to verify that the work was of sufficient value, and that you
did not simply consume vast quantities of paper. You should evaluate whether your work is of appropriate quality, and
that it is complete enough to be useful to those team members who will make subsequent use of it as input to their
work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".
Have the people performing the downstream tasks that rely on your work as input take part in reviewing your interim
work. Do this while you still have time available to take action to address their concerns. You should also evaluate
your work against the key input work products to make sure you have represented them accurately and sufficiently. It
may be useful to have the author of the input work product review your work on this basis.
Try to remember that RUP is an iterative delivery process and that in many cases work products evolve over time.
As such, it is not usually necessary, and is often counterproductive, to fully form a work product that will only be
partially used or will not be used at all in immediately subsequent work. This is because there is a high probability
that the situation surrounding the work product will change, and the assumptions made when the work product was created
will be proven incorrect, before the work product is used, resulting in wasted effort and costly rework. Also avoid the trap of
spending too many cycles on presentation to the detriment of content value. In project environments where presentation
has importance and economic value as a project deliverable, you might want to consider using an administrative resource
to perform presentation tasks.
|
|