Quality of use evaluation carried out only at the end of the development process is of limited use unless the developers intend to produce an update soon. Evaluation with typical users of the intended product, or user-based validation, should be built into all stages of the design process, from the first prototypes to the pre-release stage. The forthcoming ISO 13407 standard provides a framework for user-centred development activities that can be adapted to numerous development environments, from a straight waterfall development process to an iterative one.
The ISO 13407 standard concentrates on the process of development. Two recently completed projects part-sponsored by the European Commission, INUSE and RESPECT, have produced sourcebooks of methods that can be used to implement the standard.
These projects were concerned with gathering, testing, and promoting best practice methods for user-based evaluation, and user-based requirements elicitation and representation.
The following paragraphs give a summary of what should happen at each stage of the process.
It is important to stress that the transition from each stage to the next is a crucial step in the successful implementation of the process; work should not progress to the next stage until all aspects and information have been covered.
Plan the human centred process
This first stage requires gaining the commitment of everyone concerned in the development process to the user-centred design philosophy, and creating a plan that allows ample time and opportunity for user requirements elicitation and testing as well as for the more technical aspects of development.
A necessary side effect of this first step should be to gain consensus among the design team that user involvement in the project is not confined to the end. The outcome of this first stage is a Validation Plan. It specifies how many iterations will be carried out and the timeline for each. The plan should also list the success criteria to be reached at each stage, together with the methods to be adopted to attain those criteria and to check that they have been reached. The BASELINE project proposed and tested a User-based Validation Assistant, a large pro-forma that enables an organisation to manage these concerns. Although the BASELINE User-based Validation Assistant was designed explicitly for projects in the Information Engineering domain of the EC's Telematics Applications Programme, it is oriented towards industrial use outside this programme, in line with the general objectives of the Telematics Applications Programme as a whole.
The Validation Plan is a working document which is first produced in outline terms and which is then reviewed, maintained, extended and updated during the design and development process.
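To make the shape of such a working document concrete, the outline below sketches a Validation Plan as structured data. The field names, stage names, dates and criteria are illustrative assumptions, not prescribed by ISO 13407 or by the BASELINE pro-forma:

```python
# Illustrative sketch of a Validation Plan outline as structured data.
# All field names and values are assumptions for illustration only.
validation_plan = {
    "iterations": 3,
    "stages": [
        {
            "name": "early prototype",
            "deadline": "1999-06",
            "success_criteria": ["all critical tasks completable in walkthrough"],
            "methods": ["co-operative evaluation", "expert review"],
        },
        {
            "name": "interactive demonstrator",
            "deadline": "1999-09",
            "success_criteria": ["task completion rate >= 80%"],
            "methods": ["user trials in simulated context of use"],
        },
        {
            "name": "pre-release",
            "deadline": "1999-12",
            "success_criteria": ["usability objectives met for each user class"],
            "methods": ["measured user tests", "satisfaction questionnaire"],
        },
    ],
}

def stages_without_criteria(plan):
    """Return names of stages whose success criteria are still unspecified."""
    return [s["name"] for s in plan["stages"] if not s["success_criteria"]]

print(stages_without_criteria(validation_plan))  # prints []
```

Keeping the plan in a reviewable form like this makes it easy to check, at each update of the working document, that no stage is left without agreed success criteria.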
Specify the context of use
The quality of use of a system depends on understanding and planning for the characteristics of the users, tasks and the organisational and physical environment in which the system will be used. It is important to understand and identify the details of this context in order to guide early design decisions, and to provide a basis for specifying the context in which usability should be evaluated. Laboratory evaluations of the system by personnel intimately acquainted with it are likely to produce user acceptance results which are misleading when the system is later rolled out in the training room.
Where an existing system is to be upgraded or enhanced, the context may already be well understood. There may be extensive results from user feedback, help desk reports and other data which will provide a basis for prioritising user requirements for system modifications and changes. For a new product or system, it will be necessary to gather information about its context of use through interviews and meetings with project stakeholders.
The context in which the system will be used should be identified in terms of the characteristics of the intended users, the tasks those users will perform, and the organisational and physical environment in which the system will be used.
Different methods can be used for collecting information about the context of use.
In the first instance it will usually be necessary to gather together a group of stakeholders in the product (such as the project manager, a developer, a marketing specialist, a representative of at least some of the various types of users specified earlier and a usability expert) to discuss and agree the details of the intended context of use. Where more detailed information is required, it may be necessary to conduct a task analysis which yields a systematic description of user activities.
The output from this activity may be summarised in a Context of Use Description, which describes the relevant characteristics of the users, tasks and environment and identifies what aspects have an important impact on the system design.
Specify the user and organisational requirements
In most design processes, there is a major activity in which the functional requirements for the product or system are specified. For user-centred design, it is essential to extend this with an explicit statement of user and organisational requirements, related to the context of use description.
From this, usability criteria will be derived and objectives set, with appropriate trade-offs identified between the different requirements. These requirements should be stated in terms which permit subsequent testing. In particular, following the ISO 9241-11 model, objectives for effectiveness (the accuracy and completeness with which users achieve their goals), efficiency (the resources expended in relation to that effectiveness) and satisfaction (the comfort and acceptability of use) should be considered for each class of user.
Usability objectives should be set for all of the major areas of user performance and acceptance. These agreed objectives should be set out in a Specification of User and Organisational Requirements document.
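Stating objectives in testable terms means they can be checked mechanically against measured results. The sketch below assumes the ISO 9241-11 effectiveness/efficiency/satisfaction model; the metric names, user classes and threshold values are illustrative assumptions, not values from the source:

```python
# Testable usability objectives per user class, following the ISO 9241-11
# effectiveness/efficiency/satisfaction model. Thresholds are illustrative.
objectives = {
    "novice": {"completion_rate": 0.80, "mean_task_time_s": 300, "satisfaction_1_7": 5.0},
    "expert": {"completion_rate": 0.95, "mean_task_time_s": 120, "satisfaction_1_7": 5.5},
}

def objectives_met(user_class, measured):
    """Check measured results against the agreed objectives for a user class."""
    target = objectives[user_class]
    return (measured["completion_rate"] >= target["completion_rate"]        # effectiveness
            and measured["mean_task_time_s"] <= target["mean_task_time_s"]  # efficiency
            and measured["satisfaction_1_7"] >= target["satisfaction_1_7"]) # satisfaction

measured = {"completion_rate": 0.85, "mean_task_time_s": 280, "satisfaction_1_7": 5.2}
print(objectives_met("novice", measured))  # prints True
```

The same measured results would fail the "expert" objectives, which is the point of setting objectives per user class: one test session can pass for one class and fail for another.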
Requirements elicitation and analysis is widely accepted to be the most crucial part of software development. Indeed, the success of the user-centred approach largely depends on how well this activity is done.
Produce design solutions
The next stage is to create potential design solutions by drawing on the established state of the art and the experience and knowledge of the participants. The process therefore involves using existing knowledge to develop design proposals, making those proposals concrete through simulations, mock-ups and prototypes, presenting them to users, and altering the design in response to user feedback.
The level of fidelity of the prototypes and the amount of iteration required will vary depending on several factors, including the importance attached to optimising the design. In some developments, prototyping may start with paper visualisations of screen designs and progress through several stages of iteration to interactive software demonstrations with limited real-life functionality. Later in design, prototypes can be evaluated in a more realistic context. When trying to improve a prototype against a design objective such as usability, co-operative evaluation can be valuable: an evaluator sits through a session with a user and discusses problems with the user as they occur. To obtain the maximum benefit, it is best to carry out such evaluations in several iterations with a few users, rather than in fewer iterations with more users. At this stage, the emphasis is on qualitative feedback to the design. Expert-based evaluation is also useful, so long as the experts are experts in the domain of the application rather than in technical design and multimedia.
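The preference for several small iterations can be illustrated with the commonly cited Nielsen-Landauer model (an illustration added here, not taken from the source): the proportion of usability problems found by n users is 1 - (1 - p)^n, where p is the probability that a single user encounters a given problem. The often-quoted average of p = 0.31 is an assumption:

```python
# Nielsen-Landauer model of problem discovery: proportion of usability
# problems found by n_users, assuming each user independently hits a given
# problem with probability p (p = 0.31 is the often-quoted average).
def problems_found(n_users, p=0.31):
    return 1 - (1 - p) ** n_users

per_iteration = problems_found(5)    # 5 users in one small round
one_big_test = problems_found(15)    # 15 users in a single large test

print(round(per_iteration, 3))  # prints 0.844
print(round(one_big_test, 3))   # prints 0.996
```

A single 15-user test finds nearly everything, but only in that one version of the design; three 5-user rounds each find most of the problems, and the fixes made between rounds expose further problems that the single large test could never reach.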
Even if a straight 'waterfall' model is adopted (usually for reasons of time pressure), this stage calls for a number of small, fast iterations within the larger process. The greater the confidence that user goals are being achieved with the prototypes, the more confidence there will be that the following stage of evaluating designs against user requirements will pass smoothly.
One of the major problems in user-based work is to check the developing set of requirements against the experience and work practices of real end users. A set of technical requirements documents is not an adequate representation for most end users who will usually be unfamiliar with the methods and terminologies adopted. End users can appreciate a mockup, paper prototype, or storyboard, and can usually give meaningful feedback in reaction to such an instantiated statement of requirements. This has led in some companies to an inevitable blending of this stage with the previous one. The degree to which this is desirable or possible depends on two factors: firstly, the work practices of the organisation carrying out the development, and secondly, the size and scope of the project. Small, relatively informal projects can blend these two stages to advantage; a large project in a formal development environment will of necessity see these stages as separate processes.
Evaluate designs against user requirements
Evaluation is an essential activity in user-centred design. Evaluation can be used in at least two ways: formatively, to provide feedback that improves the design, and summatively, to assess whether user and organisational objectives have been achieved.
Whatever kind of evaluation is used, it is important to understand that evaluation results are only as meaningful as the context in which the system has been tested. If the system is tested only in unrealistic environments, the results are likely to be highly misleading compared to realistic usage. In general, the following principle should be carefully considered:
Context of evaluation = context of use
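This principle can even be checked mechanically if both contexts are recorded in the same structured form. The field names and values below are illustrative assumptions, not a standard schema:

```python
# Compare a recorded context of use against the context in which an
# evaluation was actually run. Field names are illustrative assumptions.
context_of_use = {
    "users": "clerical staff, novice computer users",
    "tasks": "enter customer orders under time pressure",
    "environment": "open-plan office, frequent interruptions",
}

context_of_evaluation = {
    "users": "clerical staff, novice computer users",
    "tasks": "enter customer orders under time pressure",
    "environment": "quiet usability laboratory",
}

def context_mismatches(use, evaluation):
    """List the aspects where the evaluation context departs from the context of use."""
    return [k for k in use if evaluation.get(k) != use[k]]

print(context_mismatches(context_of_use, context_of_evaluation))  # prints ['environment']
```

Any non-empty mismatch list flags a respect in which the evaluation results may not transfer to real usage; here the quiet laboratory hides the interruptions of the open-plan office.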
If an iterative process is used, then early in design the emphasis will be on obtaining feedback (typically consisting of a list of usability defects) which can be used to improve the design, while later when a realistic prototype is available it will be possible to measure whether user and organisational objectives have been achieved.
The benefit of an iterative process is that in the early stages of design and development, changes are relatively inexpensive. The further the process has progressed and the more fully the system is defined, the more expensive the introduction of changes becomes. Bringing user evaluation in only at the end of the process may be prohibitively expensive, while carrying out user trials earlier in the process and then ignoring their results simply wastes the effort invested.
Evaluation techniques vary in their degree of formality and rigour and in the amount of involvement from designers and users, depending on the environment in which the evaluation is conducted. The choice will be determined by financial and time constraints, the stage of the development lifecycle, the nature of the system under development, and the organisation's maturity in user-centred design.
All evaluations at this stage should be summarised in a Usability Evaluation Report which gives the reader progressively more detail as the report progresses, from 'design recommendations and summary' at the front of the report, to statistically analysed data on which the recommendations are based at the back. All such reports should include a detailed context of use as well as a context of evaluation as appendices.
In general the most effective usability work is carried out early and continuously during the phases of a product life cycle.
Each part of the process will be presented together with the associated tools that help to carry out that stage as effectively as possible.
Copyright EMMUS 1999.