Elements of UX: Hick's Law

By John Garner on Sunday, September 7, 2014
Summary: The more choices you are given, the harder the decision will be for you.

Elements of User Experience series:
"Hicks Law"

Summary

"Hicks Law: the time required to make a decision increases, with the additional number of alternatives you are presented with"

Principle

The principle is: a) you are presented with a task / goal / issue; b) you analyse the situation and judge your options for achieving the given task; c) you make a decision / choose an option; d) you apply / execute your decision.
In this context, Hick's law predicts (logarithmically) that the more alternatives you present users with, the harder you make the decision step for them. You should aim to present users with only the options they need to achieve the task.
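By way of illustration (this sketch is not from the original article), the relationship is usually written as T = a + b * log2(n + 1), where n is the number of equally likely choices and a and b are empirically fitted constants. A minimal Python sketch, using hypothetical constant values purely for demonstration:

import math

def hick_decision_time(n_choices: int, a: float = 0.2, b: float = 0.15) -> float:
    """Estimate decision time (seconds) via Hick's law: T = a + b * log2(n + 1).

    a and b are illustrative placeholders; in practice they are fitted
    empirically for a given interface and user population.
    """
    return a + b * math.log2(n_choices + 1)

# Decision time grows with the number of options, but only logarithmically:
for n in (2, 4, 8, 16):
    print(f"{n:>2} options -> ~{hick_decision_time(n):.2f}s")

The logarithmic shape is the key point: doubling the options does not double the decision time, but every extra alternative still adds a cost the user has to pay.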

Context in UX

In simple to moderately complex situations, such as a website design, Hick's law is a useful principle to check against. Do you really need all that content, all those options, all those images to help the user achieve their goal? As explained in further detail by Smashing Magazine, it usually makes sense to take a step back and think about your project as a whole rather than applying the principle religiously to each sub-part and element of the overall structure one by one. Consider the user journey and the objectives and, via testing if necessary, confirm that the remaining elements are not superfluous.

Hick's Law
Article written by John Garner
