Elements of UX: Hick's Law

By John Garner on Sunday, September 7, 2014
Summary: The more choices you are given, the harder the decision will be for you.

Elements of User Experience series:
"Hicks Law"

Summary

"Hicks Law: the time required to make a decision increases, with the additional number of alternatives you are presented with"

Principle

The principle is:
a) you are presented with a task / goal / issue,
b) you analyse the situation and judge your options for achieving it,
c) you make a decision / choose an option,
d) you apply / execute that decision.
In this context, Hick's law predicts (logarithmically) that the more alternatives you present users with, the harder you make things for them. You should aim to present users with only the options they need to achieve the task.
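For reference, the prediction is usually written in the Hick–Hyman form, where T is the decision time, n is the number of equally likely alternatives, and b is a constant fitted to the person and the task (this is the textbook statement of the law, not a formula given in the original article):

T = b × log2(n + 1)

The +1 reflects the initial uncertainty over whether to respond at all, and the logarithm means the relationship is not linear: each additional option still slows the decision, just by a smaller amount than the one before.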

Context in UX

In simple to moderately complex situations, such as a website design, Hick's law is a useful principle to check against. Do you really need all that content, all those options, all those images to help the user achieve their goal? As Smashing Magazine explains in further detail, it usually makes sense to "take a step back" and think about the project as a whole rather than applying the principle religiously to each sub-part and element of the overall structure one by one. Consider the user journey and the objectives, and, via testing if necessary, confirm that the elements you keep are not superfluous.
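As a rough, hypothetical illustration of the difference this can make, the short Python sketch below plugs two page layouts into the Hick–Hyman formula: a cluttered page offering 20 competing options and a pared-down page offering only the 5 options the task actually needs. The option counts and the unit constant b are arbitrary, and the formula assumes simple, equally likely choices, so treat the output as a back-of-the-envelope comparison rather than a measurement.

import math

def hick_decision_time(n_options, b=1.0):
    # Hick-Hyman prediction: T = b * log2(n + 1), expressed in multiples of b.
    return b * math.log2(n_options + 1)

cluttered = hick_decision_time(20)  # page with 20 competing options
focused = hick_decision_time(5)     # same task, only the 5 relevant options

print(f"20 options: {cluttered:.2f} x b")
print(f" 5 options: {focused:.2f} x b")
print(f"predicted reduction: {1 - focused / cluttered:.0%}")

With these assumed numbers the pared-down page comes out roughly 40% faster to decide on, which is the intuition behind trimming options rather than piling them on.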

Hick's Law
Article written by John Garner

