Danger in the cloud: backup nightmare

By John Garner on Sunday, October 11, 2009

T-Mobile users of the Sidekick device have been warned not to let their batteries drain completely after a catastrophic failure at Danger (the Microsoft-owned company behind the service), specifically of the server managing this service. It seems odd, given everything you read about cloud computing, that a single server would hold all the data without any kind of backup system! Especially when it concerns so many people's everyday digital lives! Read more about the event here and here.
From the various accounts of the incident, it seems that a) there wasn't an ongoing backup system; b) when upgrading the system, the techies at Danger didn't actually perform a backup first, so when things went wrong they were, well, out of options; and c) data is not kept in a proper local store on the Sidekick itself, since a fully drained battery can wipe it, leaving the device overly reliant on the cloud / offsite storage system!
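The missing safeguard described in point b) is the classic "snapshot and verify before you upgrade" routine. As a minimal illustrative sketch (this is not Danger's actual process, and the file names and paths are hypothetical): copy the live data to separate storage, then verify the copy with checksums before touching the live system.

```shell
#!/bin/sh
# Illustrative sketch of a pre-upgrade backup (hypothetical paths/data,
# not Danger's actual process): snapshot user data to separate storage
# and verify it BEFORE starting the upgrade, so a failed migration is
# recoverable.
set -eu

DATA_DIR=$(mktemp -d)     # stand-in for the live data store
BACKUP_DIR=$(mktemp -d)   # stand-in for separate backup storage

echo "user contacts, photos, notes" > "$DATA_DIR/user1.dat"

# 1) Copy everything to separate storage before touching the live system.
cp -R "$DATA_DIR"/. "$BACKUP_DIR"/

# 2) Verify the snapshot before declaring it good: compare checksums.
src_sum=$(cksum < "$DATA_DIR/user1.dat")
dst_sum=$(cksum < "$BACKUP_DIR/user1.dat")
[ "$src_sum" = "$dst_sum" ] && echo "backup verified"
```

Only once the checksums match would you proceed with the upgrade; if the migration then fails, the verified snapshot is the way back. That one extra step is apparently what was skipped.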
It is obvious that this story is a dream come true for consultants and companies that work in the backup industry, and a nightmare for the T-Mobile users concerned...

