Journal Article

Heuristic Evaluation Versus Guideline Reviews: A Tale of Comparing Two Domain Usability Expert's Evaluation Methods
Document Type
Periodical
Source
IEEE Transactions on Professional Communication, 65(4):516-529, Dec. 2022
Subject
General Topics for Engineers
Engineering Profession
Usability
Guidelines
Cultural aspects
Educational institutions
Web sites
Human computer interaction
Stakeholders
Heuristic algorithms
Comparison
culture
domain
guideline reviews
heuristic evaluation
university websites
Language
English
ISSN
0361-1434
1558-1500
Abstract
Background: The usability of university websites is important to ensure that they serve their intended purpose. Their usability can be evaluated either by testing methods that rely on actual users or by inspection methods that rely on experts. Heuristic evaluation and guideline reviews are two inspection methods of usability evaluation. A heuristic evaluation applies a small set of general heuristics (rules), which are limited to checking general flaws in the design. A guideline review uses a much larger set of guidelines/suggestions tailored to a specific business domain.
Literature review: Most of the literature has equated usability studies with testing methods and has given less attention to inspection methods. Moreover, those studies have examined usability in a general sense rather than in domain- and culture-specific contexts.
Research questions: 1. Do domain- and culture-specific heuristic evaluation and guideline reviews work similarly in evaluating the usability of applications? 2. Which of these methods is better in terms of the nature of evaluation, time needed for evaluation, evaluation procedure, templates adopted, and evaluation results? 3. Which method is better in terms of thoroughness and reliability?
Research methodology: This study uses a comparative methodology. The two inspection methods, guideline reviews and heuristic evaluation, are compared in a domain- and culture-specific context in terms of nature, time required, approach, templates, and results.
Results: The results show that both methods identify similar usability issues; however, they differ in the nature, time duration, evaluation procedure, templates, and results of the evaluation.
Conclusion: This study contributes by providing insights for practitioners and researchers about the choice of an evaluation method for domain- and culture-specific evaluation of university websites.