Preparing for the Inevitable: 5 Steps to Follow When Technology Fails
Amazon’s massive internet outage in late March was a reminder that any company offering a public cloud service, however big or small, needs a plan for incident response. Outages are a fact of life; what matters is how you respond when they occur.
Having processes in place is essential, but those processes can’t (and shouldn’t try to) cover all eventualities. If something unexpected strikes at 3 a.m., your incident response team needs firm guidelines to help them decide how to act in the critical moments that follow.
At Atlassian, we came up with five values that guide how we respond to incidents and minimize disruption. A lot gets written about “values,” but they’re more than something nice to hang on the wall. Our engineers look to these values to steer them through tough decisions they have to make under pressure.
Each value maps to a specific component of incident response. I'm sharing them here in the hope they’ll be useful to your organization, too.
Value: Atlassian knows before our customers do
A well-designed service will have enough monitoring to detect and flag any issue before it becomes an incident. If your team isn't getting paged about imminent problems before they impact customers, you need to improve your monitoring and alerting.
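One way to make "know before your customers do" concrete is to page on an early-warning threshold set well below the level at which customers actually feel the impact. The thresholds, metric, and function names below are hypothetical illustrations, not a prescription:

```python
# A minimal sketch of "alert before customers notice": page the on-call
# when p99 latency crosses a warning threshold set far below the point
# where customers would see degraded service. All values are hypothetical.

WARN_P99_MS = 300    # page the on-call here...
IMPACT_P99_MS = 800  # ...long before latency reaches customer-visible levels

def p99(samples):
    """Rough 99th-percentile of a list of latency samples (milliseconds)."""
    ordered = sorted(samples)
    return ordered[int(len(ordered) * 0.99)]

def should_page(samples):
    """True if recent latency has crossed the early-warning threshold."""
    return p99(samples) >= WARN_P99_MS

recent = [120, 140, 135, 150, 420, 130, 145]
print(should_page(recent))  # a single outlier trips the early warning
```

The key design choice is the gap between the two thresholds: it buys the responder time to act while the service still looks healthy from the outside.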
Value: Escalate, escalate, escalate
The worst decision an engineer can make is not to wake someone because the problem might not be theirs. Nobody should mind being woken for an incident and finding out they weren't needed. But they will mind if they're not woken when they should have been. We're supposed to be on the same team, and teammates support each other.
Value: Stuff happens; clean it up fast
Customers don't care why your service is down, only how fast you restore it. Never hesitate to act: resolve the incident quickly and minimize the impact.
If you’re the tech lead and you know you can restore service with a quick restart, but you could also spend time investigating the cause while the service is still down, what should you do? This value guides your answer: Restore now and figure out the cause later; the customer experience comes first.
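The restore-first rule doesn't mean throwing away the evidence. A common pattern is to snapshot just enough state for the post-incident review, then restart immediately rather than block the restart on diagnosis. This is a hedged sketch; the function names and the restart call are hypothetical placeholders:

```python
# A sketch of "restore now, figure out the cause later": capture a quick
# diagnostic snapshot for the postmortem, then restart without waiting on
# a full investigation. All names here are hypothetical placeholders.
import json
import time

def snapshot_diagnostics(service_state):
    """Persist a quick snapshot for later root-cause analysis."""
    record = {"captured_at": time.time(), "state": service_state}
    return json.dumps(record)  # in practice: write to durable storage

def restore_service(service_state):
    """Grab evidence fast, then restart; never block the restart on
    investigation -- the customer experience comes first."""
    evidence = snapshot_diagnostics(service_state)
    restarted = True  # placeholder for the real restart call
    return restarted, evidence
```

The snapshot step should be cheap and bounded; if it can't complete in seconds, it belongs after the restart, not before it.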
Value: Always blameless
Incidents are a part of running a service. We all improve by holding teams accountable, not apportioning blame. Human error is never a valid root cause for a major incident. Why was that engineer able to deploy a dev version to production? How did a command-line typo have such a devastating effect?
Assigning blame is never the appropriate response. Figure out what safeguards were missing and put them in place.
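One way to turn those blameless questions into an actual safeguard is a deploy gate: a check that refuses non-release builds in production, so a single engineer's slip can't become an incident. The version-tag format below is a hypothetical example, not any particular company's convention:

```python
# A sketch of a safeguard rather than blame: a deploy gate that blocks
# dev builds from reaching production. The version format is a
# hypothetical illustration.
import re

RELEASE_TAG = re.compile(r"^v\d+\.\d+\.\d+$")  # e.g. v2.14.3; no -dev suffix

def allow_deploy(version, environment):
    """Only properly tagged release builds may reach production."""
    if environment != "production":
        return True  # dev and staging accept anything
    return bool(RELEASE_TAG.match(version))

print(allow_deploy("v2.14.3", "production"))      # release build: allowed
print(allow_deploy("v2.14.3-dev", "production"))  # dev build: blocked
```

The point is that the guardrail lives in the pipeline, not in any individual's vigilance.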
Value: Never have the same incident twice
Determine the root cause and identify the changes that will prevent that whole class of incidents from happening again. Can the same bug bite elsewhere? What situations could lead to a programmer introducing this bug? Commit to delivering specific changes by specific dates.
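A practical way to guarantee the same incident never recurs is to encode its trigger as a permanent automated check, so the whole class of bug is caught in CI before it ships. The retry function and the outage it fixes are hypothetical examples:

```python
# A sketch of "never have the same incident twice": the (hypothetical)
# outage was caused by unbounded retry delays hammering a recovering
# service, so the fix -- capped exponential backoff -- is locked in by a
# regression test that fails if the cap is ever removed.
def retry_delays(attempts, base=0.5, cap=30.0):
    """Exponential backoff delays (seconds), capped so retries can never
    grow without bound."""
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

# Regression test encoding the incident: delays must stay bounded.
assert max(retry_delays(20)) <= 30.0
assert retry_delays(3) == [0.5, 1.0, 2.0]
```

The test is deliberately tied to the incident's root cause, not its symptom, so it guards every code path that could reintroduce the same class of failure.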
With these values in place, the next step is to ensure they’re put into practice. We hold monthly meetings where we discuss how they’ve been implemented and dissect occasions when they weren’t. We call people out for following them -- and for not following them. And we’ve added them to our documentation for incident response.
Service outages are a big deal: the AWS incident affected 54 of the top 100 retailers, and that's in just one industry segment. Your footprint may be a good deal smaller, but the impact of an outage on both you and your customers can be just as disruptive, proportionally speaking. Give your engineers the help they need to make the tough calls at crunch time. Both they and your customers will thank you for it.