Google and design

By John Garner on Sunday, March 22, 2009

There is no doubt that Google has changed the way people use the Internet, through its search tool and, to a certain extent, other great services like Google Maps and Gmail. I was surprised, however, to learn about the relationship Google seems to have with design. Douglas Bowman has just left Google and explains his decision, or rather the reasons behind it, in a really interesting article about his experience there. The underlying theme is that Google relies too heavily on data to settle design decisions.

I found it fascinating, having worked in that type of situation and also in the opposite one, where design is not tested and relies on the gut feeling of the creative people rather than on user-experience testing. The success of that approach is the luck of the draw, though. Even with world-class creatives, nobody is perfect, and your gut feeling isn't always going to lead to the right decision, even if you can convince those around you that it will. Bowman seems to be really good, and you can feel the frustration of his creativity being called into question by other aspects or realities of Google's business:

Without a person at (or near) the helm who thoroughly understands the principles and elements of Design, a company eventually runs out of reasons for design decisions. [...] Yes, it's true that a team at Google couldn't decide between two blues, so they're testing 41 shades between each blue to see which one performs better. I had a recent debate over whether a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can't operate in an environment like that. I've grown tired of debating such minuscule design decisions. There are more exciting design problems in this world to tackle.

The New York Times article Bowman links to actually describes the issue and the role that Marissa Mayer played in this story (on page 3):

A designer, Jamie Divine, had picked out a blue that everyone on his team liked. But a product manager tested a different color with users and found they were more likely to click on the toolbar if it was painted a greener shade.
As trivial as color choices might seem, clicks are a key part of Google's revenue stream, and anything that enhances clicks means more money. Mr. Divine's team resisted the greener hue, so Ms. Mayer split the difference by choosing a shade halfway between those of the two camps.

You feel you're getting a peek behind the scenes of an event, like watching the intrigue of your favourite TV show unfold. In this case, though, the stakes of the debate, and the influence each party could have, affect the crucial services that Google offers. On the one hand, you can say that Google has an impressive track record; on the other, you wonder whether innovative and creative solutions aren't stifled in the process. Too much creative lead 'can' damage the overall user experience without proper testing. But never taking a chance on a different creative approach can result in uniformity and dullness. I do feel that design, when applied to services and products that thousands or millions of people will use, should be tested by people from different backgrounds to see how well they interact with it. This may again be considered data, but real live people testing your work is going to happen sooner or later, hopefully...


