I have mentioned before that what you need when things go wrong is options. Options are golden.
You will note that word has an 's' on it – it is plural – more than one.
People are fallible, and machines, well, they break, get damaged, and let the wrong person in the front door from time to time. So being able to make things good again is worth rather more than a cursory consideration.
As you are probably aware – things went wrong… very wrong for the people over at GitLab earlier this week.
They have the full Reason For Outage (RFO) – or the closest you are going to get – on their website; furthermore, they streamed the recovery on YouTube.
You have got to love that kind of honesty and transparency.
So anyway – my point here – have a think about where things can go wrong, how you are watching for the telltale signs, keeping things running, riding out a failure, and recovering should the worst happen…
– Hardware redundancy;
– Storage redundancy;
– Monitoring;
– Local backups;
– Remote backups;
– Snapshotting;
– Incremental backups;
– Bare-metal restore options;
– Granular restore options;
– Per-database recovery;
… the list goes on (a rough sketch of one of these follows below).
Scary, isn’t it?* (Yes.)
Well – sure – but we are all in the same boat.
If you would like to talk some more about what you are doing, or how we can help with your setup – get in touch: the chat window on the website, or email sales or support.
Still, bravo to GitLab – much beer and pizza earned there.
*If you require further scaring – then how about this little gem on copying and pasting for fun?