Heuristics as an Aid to Training a Usability Evaluator’s Expertise

(This is unmarked coursework, part of my HCI course at UCLIC, and is released with permission from Prof. Ann Blandford. Since it is not assessed and has not gone through any peer-review process, it will almost certainly contain errors.)

Heuristics as an Aid to Training a Usability Evaluator’s Expertise

Vishnu Gopal

University College London

It is clear that heuristic evaluation as Nielsen envisioned it is a method meant for experts (Nielsen, 1992). Heuristics do not stand alone and have to be moulded to fit each particular scenario: the general set of heuristics has been expanded into specific guidelines for different kinds of activities, such as accessibility and internationalization (González, Granollers, & Pascual, 2008), and these require evaluators who are trained and experienced in separate spheres. Experimental data also suggests that the more experienced the evaluators, the more usability errors they find (Dumas & Redish, 1999). Given this, I seek to explore whether heuristics or similar guidelines can serve as a tool to strengthen a beginning evaluator’s “experience”.

Without doubt, applying heuristics to usability evaluation gives a methodical structure to the task of analysing potential faults in a system. During a recent analysis of e-commerce websites, one major observation I made was that, without rules, it is easy to miss the forest for the trees: one might speculate on possible faults (for example, that the links in the right-hand navigation bar were not prominent or relevant enough, or that the visual design was unattractive) but fail to gather these observations into meaningful, coherent suggestions. The aim, after all, of a successful usability evaluation is to find ways to rectify potential faults. Set against an evaluator’s raw instincts, then, following a set of guidelines provides both a reasonably exhaustive search space and a framework for assessing the faults that are found. It can also be argued that applying a set of guidelines methodically sensitises an evaluator to common errors.

On the other hand, I also observed that some faults are found more easily not through strict adherence to guidelines but through an evaluator’s own prior experience. During the usability evaluation activity, my companion, a trained visual designer, found that much of the website’s apparent clutter was due to its not following a coherent “grid system” (see Chang, Dooley, & Tuovinen, 2002). While it might be easy to slot this under heuristic 4 (consistency) or 8 (aesthetic design), it does not fit cleanly into the heuristic framework provided, yet it is a crucial criticism nevertheless. I suspect that strict adherence to guidelines without a broader background might harm rather than help a beginning evaluator’s progress, though this requires detailed investigation. It may also be too easy to be trained to look for a series of specific, common problems rather than to evaluate a system against its intent.

This is further complicated by the fact that the minutiae of specific guidelines change often. One example is how the specific recommendation about websites displaying content above the fold (the initially viewable area) changed between 1994 and 1997, a span of only three years (Nielsen, 1997).

When compared to other methods of user testing, heuristics pale further in this regard. They remove a vital component from usability evaluation: the serendipity (Stoskopf, 2008) that observing a user adds to training an evaluator’s instincts. This is especially important in a field like usability evaluation, where observation of real users continues to be stressed (Petrelli et al., 2004), and rightly so, for HCI evaluation has its roots in cognitive psychology, a science that has yet to attain adulthood (Miller, 2003).

It is also instructive to observe how HCI (and, accordingly, usability evaluation) is taught in university courses worldwide. Saul Greenberg of the University of Calgary remarks that a “fundamental tenet of HCI is that end-users should play an integral role in the design process” and that “performing usability studies in class hammers home the relevance of evaluation”; indeed, his course description (Greenberg, 1996) is filled with references to activities that directly involve users in class. Interestingly, the course is structured so that “Designing Without the User” comes later, where lessons learnt from these evaluations are integrated in an attempt to formulate a theory of user behaviour.

Chan et al. (2003), exploring issues in integrating HCI into master’s-level MIS programs, also stress the emphasis on users and “empirical testing” and recommend a curriculum that largely ignores heuristics. Faulkner and Culwin, in “Integrating HCI and Software Engineering”, condense this well and also explain a crucial difference from software engineering:
“Some HCI practitioners seem to believe that if HCI can be reduced to guides and checklists that anyone can apply to anything, then all will be well. This is tantamount to designing HCI out of software engineering as it is providing rules to be followed without the requisite theoretical under-pinning. Students trained in this way will be chanting mantras and will be woefully unable to deal with problems that have not been solved elsewhere or are not covered by style guides and checklists. Software engineers on the other hand are either keen to embrace these checklists or are unwilling to accept that the age of users having to adapt themselves to systems has gone. Users want systems to work for them and not the other way round.” (Faulkner & Culwin, 2000)

Furthermore, in a study examining how guidelines and patterns might be effective in HCI teaching, Hvannberg et al. (2006) “found very little hard evidence” supporting the importance of using patterns or guidelines in HCI teaching. They also noted, however, “a desperate need to conduct studies on a suitable scale on the use” of patterns and guidelines in teaching HCI concepts.

There is no doubt that Nielsen’s basic heuristics have stood the test of time as a way to find usability errors. However, as a tool to train a beginning evaluator, they should certainly be supplemented by other evaluation methods.
(1045 words).

References:

Chan, S. S., Wolfe, R. J., & Fang, X. (2003), Issues and strategies for integrating HCI in masters level MIS and e-commerce programs, International Journal of Human-Computer Studies, vol. 59, no. 4 (Zhang and Dillon Special Issue on HCI and MIS), pp. 497–520. DOI: 10.1016/S1071-5819(03)00110-1.

Chang, D., Dooley, L., & Tuovinen, L. E. (2002), Gestalt theory in visual screen design: a new look at an old subject, ACM International Conference Proceeding Series, vol. 26, Proceedings of the Seventh World Conference on Computers in Education: Australian Topics, vol. 8.

Dumas, J. S., & Redish, J. (1999), A Practical Guide to Usability Testing, Oregon: Intellect Books, p. 67.

Faulkner, X., & Culwin, F. (2000), Enter the Usability Engineer: Integrating HCI and Software Engineering, ACM SIGCSE Bulletin.

González, M. P., Granollers, T., & Pascual, A. (2008), Testing Website Usability in Spanish-Speaking Academia through Heuristic Evaluation and Cognitive Walkthroughs, Journal of Universal Computer Science, vol. 14, no. 9.

Greenberg, S. (1996), Teaching Human Computer Interaction to Programmers. Technical Report 96/582/02, University of Calgary.

Hvannberg, E. T., Read, J. C., Bannon, L., Kotzé, P., & Wong, W. (2006), Patterns, anti-patterns and guidelines: effective aids to teaching HCI principles? In Inventivity: Teaching Theory, Design and Innovation in HCI – Proceedings of HCIEd2006-1 (First Joint BCS/IFIP WG 13.1/ICS/EU CONVIVIO HCI Educators Workshop), pp. 115–120. Limerick: University of Limerick.

Miller, G. A. (2003), The cognitive revolution: a historical perspective. Trends in Cognitive Sciences, vol.7 no.3.

Nielsen, J. (1992), Finding usability problems through heuristic evaluation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Monterey, California, United States, May 03 – 07, 1992). P. Bauersfeld, J. Bennett, and G. Lynch, Eds. CHI ‘92. ACM, New York, NY, 373-380. DOI= http://doi.acm.org/10.1145/142750.142834

Nielsen, J. (1997), Scrolling Now Allowed. Blog post at http://www.useit.com/alertbox/9712a.html

Petrelli, D., Hansen, P., Beaulieu, M., Sanderson, M., Demetriou, G., & Herring, P. (2004), Observing users, designing clarity: a case study on the user-centred design of a cross-language information retrieval system. Journal of the American Society for Information Science and Technology, vol. 55, no. 10, pp. 923–934.

Stoskopf, M. K. (2008), How Serendipity Provides the Building Blocks of Scientific Discovery. ILAR Journal, vol. 46.
