Summary: The more choices you are given, the harder the decision will be for you.
Elements of User Experience series: "Hick's Law"
Summary
"Hicks Law: the time required to make a decision increases, with the additional number of alternatives you are presented with"
Principle
The principle is: a) you are presented with a task / goal / issue, b) you analyse the situation and judge your options for achieving the given task, c) you make a decision / choose an option, d) you apply / execute your decision. In this context Hick's law predicts (logarithmically) that the more alternatives you provide users with, the harder you make things for them. You should aim to present users with only the options they require to achieve the task.
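To make the relationship concrete, here is a minimal sketch (not from the article itself) of the usual formulation of Hick's law, T = a + b * log2(n + 1); the constants a and b below are illustrative assumptions rather than empirically fitted values.

```python
import math

def decision_time(n_alternatives: int, a: float = 0.2, b: float = 0.15) -> float:
    """Estimated decision time in seconds per Hick's law: T = a + b * log2(n + 1).

    a (base reaction time) and b (per-choice decision cost) are illustrative
    assumptions, not measured values.
    """
    return a + b * math.log2(n_alternatives + 1)

# Doubling the number of options adds a roughly constant amount of time
# rather than doubling it, which is the logarithmic growth Hick's law describes.
for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} equally likely options -> ~{decision_time(n):.2f}s")
```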
Context in UX
In situations of simple to moderate complexity, such as a website design, Hick's law is a useful principle to check against. Do you really need all that content / all those options / all those images to help the user achieve their goal? As explained in further detail by Smashing Magazine, it usually makes sense to "take a step back" and think of your project as a whole rather than applying this principle religiously to each sub-part and element of the overall structure one by one. Consider the user journey and the objectives, and, via testing if necessary, confirm that the elements you keep are not superfluous.
I question the reliability and accuracy of Generative AI (GenAI) in enterprise scenarios, particularly when faced with adversarial questions, highlighting that current Large Language Models (LLMs) may be data-rich but are lacking in reasoning and causality. I call for a more balanced approach to AI adoption, with AI assisting users under supervision, and for better LLMs that can be trusted, learn, and reason.
I discuss my experience with chatbots, contrasting older rules-based systems with newer GenAI-based chatbots. We cannot dismiss the creative capabilities of GenAI-based chatbots, but these systems lack reliability, especially in customer-facing applications; improvements in the way AI is structured could lead to a "software renaissance", potentially reducing society's technical debt.
The article discusses the ongoing debate over how AI safety is and should be managed, its impact on technical debt, and its societal implications.
It notes the Center for AI Safety's call for a worldwide focus on the risks of AI, and Meredith Whittaker's criticism that such warnings preserve the status quo and strengthen tech giants' dominance. The piece also highlights AI's potential to decrease societal and technical debt by making software production cheaper and simpler, resulting in far more innovation. It provides examples of cost-effective open-source models that perform well and emphasises the rapid pace of AI innovation. Lastly, the article emphasises the need for adaptive legislation that matches the pace of AI innovation, empowering suitable government entities for oversight, defining appropriate scopes for legislation and regulation, addressing ethical issues and biases in AI, and promoting public engagement in AI regulatory decisions.
Japan has made its ruling on the situation between content creators and businesses: Japanese companies that use AI are free to use content for training purposes without the burden of copyright law. This news about Japan's copyright law, reported over at Technomancers, is seen as Businesses: 1 / Content Creators: 0. […]