Despite numerous variations in procedures for gathering and analysing critical incidents, researchers and practitioners agree on what critical incident technique (CIT) analysis should do:
In real-world task performance, users are perhaps in the best position to recognise critical incidents caused by usability problems and design flaws in the user interface. Critical incident identification is arguably the single most important kind of information associated with task performance in a usability-oriented context.
Several methods have been developed for conducting usability evaluation without direct observation of a user by an evaluator. However, unlike the more recent 'user-reported critical incident method', none of the existing remote evaluation methods (nor even traditional laboratory-based evaluation) meets all of the following criteria for a successful CIT analysis:
Typical application areas:
Useful for obtaining in-depth data about a particular role or set of tasks. Also extremely useful for obtaining detailed feedback on a design option.
The basic steps involved are:
Step 1: Gathering facts
The methodology usually employed is an open-ended questionnaire gathering retrospective data. The events should have happened fairly recently: the longer the period between the events and their collection, the greater the danger that users will reply with imagined, stereotypical responses. Interviews can also be used, but they must be handled with extreme care so as not to bias the user.
Example of a moderately structured approach:
Example of a moderately unstructured approach:
CIT generates a list of good and bad behaviours, which can then be used for performance appraisal.
Vague, generic answers suggest:
Step 2: Content analysis
Subsequent steps in the CIT consist of identifying the content or themes represented by clusters of incidents and conducting "retranslation" exercises during which the analyst or other respondents sort the incidents into content dimensions or categories. These steps help to identify incidents that are judged to represent dimensions of the behaviour being considered.
This can be done using a simple spreadsheet. To start with, every item is entered as a separate incident; the incidents are then grouped into categories. Category membership is marked as:
This continues until each item is assigned to a category on at least a 'quite similar' basis.
Each category is then given a name, and the number of responses in each category is counted. These counts are in turn converted into percentages (of the total number of responses) and a report is formulated.
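This counting and conversion step can be sketched in a few lines of Python; the category names and tallies below are purely hypothetical, standing in for whatever labels emerge from the sorting exercise:

    from collections import Counter

    # Each reported incident, hand-assigned to a named category
    # during the sorting ("retranslation") exercise.
    incident_categories = [
        "Confusing menu labels",
        "Slow response time",
        "Confusing menu labels",
        "Unclear error messages",
        "Slow response time",
        "Confusing menu labels",
    ]

    counts = Counter(incident_categories)
    total = sum(counts.values())

    # Report each category with its response count and its share
    # of the total number of responses.
    for category, n in counts.most_common():
        print(f"{category}: {n} responses ({100 * n / total:.1f}%)")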
Step 3: Creating feedback
It is important to consider not only the bad (negative) features in the report, but also the positive ones, so as not to undo good work or make destructive recommendations.
The poor features should be arranged in order of frequency, using the number of responses per category; the same is done with the good features.
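As a minimal sketch of this ordering step (again with invented tallies), the good and poor feature categories from Step 2 can simply be sorted by response count:

    # Hypothetical category tallies from Step 2, split by valence.
    poor_features = {
        "Confusing menu labels": 21,
        "Unclear error messages": 14,
        "Slow response time": 9,
    }
    good_features = {
        "Helpful undo facility": 11,
        "Clear navigation structure": 6,
    }

    # Arrange each list in descending order of frequency, so the most
    # commonly reported features head the report.
    for label, features in (("Poor", poor_features), ("Good", good_features)):
        print(f"{label} features:")
        for name, n in sorted(features.items(), key=lambda kv: kv[1], reverse=True):
            print(f"  {name}: {n} responses")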
Go back to the software and examine the circumstances that led up to each category of critical incident. Identify what aspect of the interface was responsible for the incident. Sometimes one finds that there is not one but several aspects of an interaction that lead to a critical incident; it is their conjunction that makes it critical, and it would be an error to focus on one salient aspect - for instance, on the very last event before the incident when a litany of errors has preceded it.
Drawbacks of the method
Evidence in support of the method
Variations on the method
Several human factors and human-computer interaction researchers have developed software tools to assist in identifying and recording critical incident information.
This would have been quite a departure from the standard CIT format of retrospective reporting. Researchers at IBM in Toronto developed a software system called UCDCam, based on Lotus ScreenCam, which was used to capture digitised video of screen sequences during task performance as part of critical incident reports. In the authors' own words:
However, they discovered a considerable delay between the time many users encountered a critical incident and the time they initiated reports. This delay defeated the mechanism for capturing a video clip of screen activity as context, again underlining the naturally retrospective nature of the approach.