Google and design

By John Garner on Sunday, March 22, 2009

There is no doubt that Google has changed the way people use the Internet, through its search tool and, to a certain extent, other great services like Google Maps and Gmail. I was surprised, however, to learn about the relationship that Google seems to have with design. Douglas Bowman has just left Google and explains his decision, or rather the reasons behind it, in a really interesting article about his experience there. The underlying theme is that Google relies too heavily on data to settle design decisions.

I found it fascinating, having worked in the same type of situation and also in the opposite one, where design is not tested and relies on the gut feeling of the creative people rather than on user experience testing. The success of that approach is the luck of the draw, though. Even with world-class creatives, nobody is perfect, and your gut feeling won't always lead to the right decision, even if you can convince your entourage it will. Bowman seems to be really good, and you can feel his frustration at seeing his creativity called into question by other aspects and realities of the Google business:

Without a person at (or near) the helm who thoroughly understands the principles and elements of Design, a company eventually runs out of reasons for design decisions. [...] Yes, it's true that a team at Google couldn't decide between two blues, so they're testing 41 shades between each blue to see which one performs better. I had a recent debate over whether a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can't operate in an environment like that. I've grown tired of debating such minuscule design decisions. There are more exciting design problems in this world to tackle.

The NY Times article Bowman links to describes the issue and the role that Marissa Mayer played in this story (on page 3):

A designer, Jamie Divine, had picked out a blue that everyone on his team liked. But a product manager tested a different color with users and found they were more likely to click on the toolbar if it was painted a greener shade.
As trivial as color choices might seem, clicks are a key part of Google's revenue stream, and anything that enhances clicks means more money. Mr. Divine's team resisted the greener hue, so Ms. Mayer split the difference by choosing a shade halfway between those of the two camps.

You feel you're getting a peek at, and an understanding of, an event, like watching the intrigue of your favourite TV show unfold. In this case, though, the weight of the debate, and the influence each party could wield, affect the crucial services that Google offers. On the one hand, Google has an impressive track record; on the other, you wonder whether innovative and creative solutions aren't stifled in the process. Too strong a creative lead 'can' damage the overall user experience without proper testing. But never taking a chance on a different creative approach can result in uniformity and dullness. I do feel that design, when applied to services and products that thousands or millions of people will use, should be tested by people from different backgrounds to see how well they interact with it. That may again be considered data, but real live people testing your work is going to happen sooner or later, hopefully...


