A comment on usable decision support systems (DSS)

When discussing how to make DSS more usable, several things need to be considered. This review examines metrics for measuring usability: it first defines the term ‘usability’, then identifies its characteristics, applies those characteristics to DSS, and finally proposes a framework for selecting appropriate metrics for DSS.

The term ‘usability’ has been defined in many ways; the focus of this review is on the factors that make systems usable. The ISO defines ‘usability’ as “the effectiveness, efficiency and satisfaction with which specified users achieve specified goals in particular environments” (Rubin and Chisnell 2008). This definition says little without the context of users and goals, and it makes the point that there is no usability in general: it has to be specified. The definition hints at how by mentioning “specified users”, that is, target users. Target users are the primary users of the tool and should be separated into distinct groups. After that, each group's primary goals for using the product have to be specified (Albert, Tullis and Tedesco 2010), which covers another part of the definition, “specified goals”. However, even with target users and their primary goals identified, the ISO definition does not explain what makes a system effective, efficient and satisfactory. Some authors therefore split these three attributes into finer-grained characteristics. Nielsen (1993), for example, identifies five characteristics which are widely accepted and will be used for the following analysis:

  • Learnability (part of effectiveness)
  • Efficiency
  • Memorability (part of effectiveness)
  • Errors (part of efficiency)
  • Satisfaction
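In practice, these characteristics are operationalised as quantitative metrics such as completion rate, time-based efficiency and error rate (see e.g. Albert, Tullis and Tedesco 2010). The following sketch is illustrative only and not drawn from the reviewed sources; the metric definitions are common conventions and all participant data is hypothetical.

```python
# Illustrative sketch: common ways to turn Nielsen's characteristics
# into numbers in a usability study. All participant data is hypothetical.

def completion_rate(successes: list) -> float:
    """Effectiveness: share of tasks completed successfully."""
    return sum(successes) / len(successes)

def time_based_efficiency(successes: list, times: list) -> float:
    """Efficiency: successfully completed goals per second,
    averaged over participants."""
    return sum(int(s) / t for s, t in zip(successes, times)) / len(times)

def error_rate(errors: list, opportunities: int) -> float:
    """Errors: mean errors per error opportunity."""
    return sum(errors) / (len(errors) * opportunities)

# Hypothetical results for five participants on one DSS task.
successes = [True, True, False, True, True]
times = [30.0, 45.0, 60.0, 25.0, 40.0]   # seconds on task
errors = [0, 2, 5, 1, 1]                 # slips per participant

print(f"completion rate: {completion_rate(successes):.0%}")
print(f"efficiency: {time_based_efficiency(successes, times):.4f} goals/s")
print(f"error rate: {error_rate(errors, opportunities=10):.2f}")
```

Satisfaction, by contrast, is usually measured with a post-test questionnaire rather than computed from task logs.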

The question is now how these characteristics relate to DSS, something the literature rarely discusses directly.

DSS are defined as computer-based systems that support decision-making activities (Turban, Aronson & Liang 2004). Computers can assist with such activities because of four characteristics identified by Smith, Geddes & Beatty (2011):

  • Improved access to information
  • More informative presentation of information
  • Support for better forms of communication
  • Support for the use of algorithms

Defining DSS in this way specifies the ‘users’ and ‘goals’ from the ISO definition of usability, so it is possible to take a closer look at how the characteristics of usability relate to DSS. To do so, the following analysis is separated into three areas: goals of usable products, characteristics of usable products, and characteristics of less usable products.

The goals of usable products, which can be applied to DSS, commonly come down to two factors (Rubin and Chisnell 2008):

  • improving the profitability of products
  • informing design

The first factor can be connected to errors and learnability, and therefore to effectiveness and efficiency, because it is about minimising or eliminating errors and minimising the costs of service and support (Rubin and Chisnell 2008). The second factor can be connected to learnability, efficiency and satisfaction because it helps make the design useful and valued by the target users; the design should therefore be easy to learn and satisfying to use (Rubin and Chisnell 2008).

The characteristics of usable products are closely related to the product's target users and specified goals. Companies that follow the basic principles of user-centred design (UCD) tend to have more usable products (ISO 13407; Rubin and Chisnell 2008). Instead of discussing characteristics of usable products directly, it therefore makes more sense to talk about the attributes of UCD that lead to them. The design process of a new product must be structured and systematic, beginning with high-level goals and then moving to more specific ones (Rubin and Chisnell 2008); this improves the effectiveness of the product. UCD also proposes an early focus on users and their tasks, which helps to identify and categorise the target users of the product (Rubin and Chisnell 2008) and makes testing with actual users possible throughout development. This affects the characteristics of learnability and memorability.

The characteristics of less usable products are likewise closely related to target users and specified goals. Three general issues are worth considering in more detail because they especially affect DSS:

  • development focuses on the machine or system
  • target audience changes and adapts
  • design and implementation don’t always match

When the focus during design and implementation is not on the end user but on the machine, effectiveness (learnability and memorability) and satisfaction are particularly affected. The second factor, a target audience that changes and adapts, is very critical and difficult to account for (Rubin and Chisnell 2008); in the worst case the system becomes useless after the target audience changes, which would affect all aspects of usability. The last factor means that the challenge of technical implementation has decreased while the challenge of design has increased (Rubin and Chisnell 2008): too little effort is spent on the desired effects the product should have and too much on the product itself (form vs. function). For usability this means that the characteristic of satisfaction is affected. However, Rubin and Chisnell (2008) point out that UCD can help with all of the characteristics of less usable products identified above.

Figure 1 shows a framework developed to choose techniques and measurements. This framework can be applied to any usability study and is not specific to DSS. After going through this process and deciding on the metrics to use, the next step is the design of the usability study itself. This step should be specific to DSS, as there are differences in their design and not all available methods are equally appropriate for them. Smith, Geddes & Beatty (2011) identify two methods that are especially helpful for making a DSS more usable: cognitive task analysis/cognitive walkthroughs, and work-domain analysis. There are a number of variations on conducting cognitive task analysis, but they all have their roots in hierarchical task analysis techniques (Smith, Geddes & Beatty 2011). This means high-level goals have to be decomposed into a hierarchy of sub-goals; methods can then be used to describe the associated cognitive processes (Smith, Geddes & Beatty 2011; Diaper & Stanton 2004). This description shows good alignment with the above discussion of usability issues (goals of usable products, characteristics of usable products, and characteristics of less usable products), which makes the method a prime candidate for a usability study of DSS.
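The hierarchical decomposition at the heart of this technique can be sketched as a simple goal tree. The sketch below is a minimal illustration, not an implementation from the cited sources, and the goal names are hypothetical examples for a generic DSS.

```python
# Minimal sketch of the hierarchical decomposition behind cognitive task
# analysis: a high-level goal broken into a hierarchy of sub-goals.
# Goal names are hypothetical examples, not taken from the sources.

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    subgoals: list = field(default_factory=list)

    def walk(self, depth: int = 0):
        """Yield (depth, name) pairs in top-down order, as an analyst
        would enumerate goals before describing cognitive processes."""
        yield depth, self.name
        for sub in self.subgoals:
            yield from sub.walk(depth + 1)

decide = Goal("Reach a supported decision", [
    Goal("Gather relevant information", [
        Goal("Query data sources"),
        Goal("Filter by decision criteria"),
    ]),
    Goal("Compare alternatives", [
        Goal("Rank options by criteria"),
    ]),
])

for depth, name in decide.walk():
    print("  " * depth + name)
```

Each leaf of such a tree is then a unit small enough to describe the cognitive processes involved.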

The other method identified is work-domain analysis (Vicente 1999). It is especially appropriate because it takes into account that DSS are designed for complex environments (Smith, Geddes & Beatty 2011; Bar-Yam 2005). A work-domain analysis can identify the constraints that are relevant to successful product performance, and it aims to understand and model the application area of the future product (Smith, Geddes & Beatty 2011). Like cognitive task analysis, it covers the characteristics of the usability definition and is therefore another prime candidate for a usability study.
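The output of such an analysis is essentially a set of constraints that any candidate design must satisfy. A toy sketch of that idea, with entirely hypothetical constraints and design properties not taken from Vicente (1999) or the other sources:

```python
# Toy sketch: recording work-domain constraints as predicates and
# checking a candidate DSS design against them before usability testing.
# Constraint names and design properties are hypothetical.

# Each constraint maps a design description (a dict of properties)
# to True when the constraint is satisfied.
constraints = {
    "responds under time pressure": lambda d: d["max_response_s"] <= 2.0,
    "works with incomplete data": lambda d: d["handles_missing_data"],
}

candidate_design = {
    "max_response_s": 1.5,
    "handles_missing_data": True,
}

violations = [name for name, ok in constraints.items()
              if not ok(candidate_design)]
print("violations:", violations or "none")
```

A real work-domain analysis produces far richer models, but the point stands: constraints are made explicit so a design can be evaluated against the demands of its environment.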

Figure 1 – Usability framework to give a guideline for specifying a usability study (based on Albert, Tullis and Tedesco 2010, Barnum 2010, Koutsabasis, Vlachogiannis & Darzentas 2010, Leroy 2011, Rubin & Chisnell 2008, Tullis 2008)

Bibliography

Albert, W., Tullis, T. & Tedesco, D., 2010. Beyond the Usability Lab: Conducting Large-Scale User Experience Studies, Morgan Kaufmann. Available at: http://www.amazon.co.uk/Beyond-Usability-Lab-Conducting-Large-Scale/dp/0123748925 [Accessed April 20, 2012].

Barnum, C., 2010. Usability Testing Essentials: Ready, Set…Test, Morgan Kaufmann Publishers In. Available at: http://www.amazon.co.uk/Usability-Testing-Essentials-Ready-Set-Test/dp/012375092X [Accessed April 20, 2012].

Brassington, D.F. et al., 2010. Principles of Marketing with EBook and How to Write Essays and Assignments: Principles of Marketing with Companion Website Withgradetracker Student Access Card:Brassington Principles of Marketing, Custom Publishing. Available at: http://www.amazon.co.uk/Principles-Marketing-EBook-Essays-Assignments/dp/1848787146 [Accessed August 28, 2012].

Covey, S.R., 1991. The seven habits of highly effective people. National medical-legal journal, 2(2), p.8. Available at: http://www.ncbi.nlm.nih.gov/pubmed/1747433.

Faulkner, X., 2000. Usability Engineering (Grassroots), Palgrave Macmillan. Available at: http://www.amazon.co.uk/Usability-Engineering-Grassroots-Xristine-Faulkner/dp/0333773217 [Accessed August 27, 2012].

Joshi, A., Sarda, N.L. & Tripathi, S., 2010. Measuring effectiveness of HCI integration in software development processes. Journal of Systems and Software, 83(11), pp.2045–2058. Available at: http://linkinghub.elsevier.com/retrieve/pii/S0164121210001391.

Koutsabasis, P., Vlachogiannis, E. & Darzentas, J.S., 2010. Beyond Specifications : Towards a Practical Methodology for Evaluating Web Accessibility. Journal of Usability Studies, 5(4), pp.157–171. Available at: http://www.upassoc.org/upa_publications/jus/2010august/koutsabasis8.html.

Leroy, G., 2011. Designing User Studies in Informatics (Health Informatics), Springer. Available at: http://www.amazon.co.uk/Designing-User-Studies-Informatics-Health/dp/0857296213 [Accessed April 20, 2012].

Rubin, J. & Chisnell, D., 2008. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, John Wiley & Sons. Available at: http://www.amazon.co.uk/Handbook-Usability-Testing-Conduct-Effective/dp/0470185481 [Accessed April 20, 2012].

Smith, P.J., Geddes, N.D. & Beatty, R., 2011. Human-Centered Design of decision-support systems. In Human Computer Interaction. pp. 245 – 274.

Tullis, 2008. Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics (Interactive Technologies), Morgan Kaufmann. Available at: http://www.amazon.co.uk/Measuring-User-Experience-Interactive-Technologies/dp/0123735580 [Accessed April 20, 2012].

Turban, E., Aronson, J.E. & Liang, T.-P., 2004. Decision Support Systems and Intelligent Systems (7th Edition), Prentice Hall. Available at: http://www.amazon.com/Decision-Support-Systems-Intelligent-Edition/dp/0130461067 [Accessed August 24, 2012].

Turner, C.W., 2011. A Strategic Approach to Metrics for User Experience Designers. Journal of Usability Studies, 6(2), pp.52–59. Available at: http://www.upassoc.org/upa_publications/jus/2011february/JUS_Turner_February_2011.pdf.

Vicente, K.J., 1999. Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work, CRC Press. Available at: http://www.amazon.co.uk/Cognitive-Work-Analysis-Productive-Computer-Based/dp/0805823972 [Accessed August 28, 2012].
