Do Community Members Confirm Symptoms of Defective Disk Drives for Two Droplets?
I have a situation where snapshot restores are not recreating operational website instances. Not one but two different snapshots were created, for two different websites. Each website has its own droplet and snapshot. This redundancy was created as an extra safeguard.
I’ve been told “If you see a problem after restoring a snapshot, that means it was present when the snapshot was taken too - snapshots are simply a copy of the Droplet’s disk at the time the snapshot was taken, and they don’t change anything in a Droplet’s disk.”
As far as I am concerned, that’s an unacceptable response once my entire situation is understood. Let me explain.
From my perspective, it appears both of my droplets have defective disk drives.
These snapshots were taken when the websites were working. Further, on many occasions over several years, I was able to reboot at least one of these sites after the snapshots were created.
So, if there was something wrong with the underlying kernel or some other system component, I expect the websites would not have been rebootable over those years. But that’s not the case; I was able to stop and restart them over and over again. Therefore, I know for certain the foundation was valid and reliable.
Given that, in this particular situation, I created snapshots of working systems (not one but two), then I and every other customer should be able to use them as a fallback. If you can’t recover from a snapshot, what good are they?
Let me also be an advocate of the following: DO’s creation of snapshots should include a validation process, one that assures they work! (As a customer, I now know validating snapshots is critical, which I’ll do going forward.)
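For what it’s worth, a validation pass like this could be scripted with DigitalOcean’s doctl CLI. The sketch below is a dry run only: it builds and prints the commands it would execute, and the snapshot IDs, droplet size, and region are placeholder values, not taken from my account.

```shell
# Dry-run sketch of a snapshot validation loop: for each snapshot, restore
# it to a throwaway droplet, (manually) confirm it boots, then delete it.
# The snapshot IDs, size, and region below are placeholders.
plan=""
for snap_id in 11111111 22222222; do
  plan="$plan
doctl compute droplet create validate-$snap_id --image $snap_id --size s-1vcpu-1gb --region nyc1 --wait
doctl compute droplet delete validate-$snap_id --force"
done
printf '%s\n' "$plan"   # print the planned commands instead of running them
```

Between the create and delete steps you would check the new droplet’s console (or ssh into it) to confirm the restored system actually boots.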
Further, the following rationale compounds the situation.
It doesn’t make sense: when a snapshot has been restored, followed by the additional step of also restoring a kernel, why would the instances remain in an unbootable state? The following could be the explanation.
Here is why I believe both droplets have defective disk drives.
After powering on the droplets, the consoles reveal they drop into an (initramfs) shell with the following exception messages: “Gave up waiting for root device” … “dropping to a shell” … “Missing modules” and “ALERT! /dev/disk/by-uuid/[a UUID] does not exist.”
The commands below produce the following outcomes.
1) The command ‘ls /dev/mapper’ returns:
and it doesn’t return more than that.
2) The command ‘cat /proc/modules’ returns only five modules. It stops at ‘floppy’.
3) The command ‘ls’ returns a list of key directories such as ‘root’, ‘bin’, ‘etc’, ‘init’, but not the all-important ‘boot’.
4) The command ‘ls /dev/[hs]da*’ returns:
ls /dev/[hs]da*: No such file or directory.
5) The command ‘sudo ls -l boot’ returns:
/bin/sh: sudo: not found
6) The directory /etc/default contains only ‘keyboard’ and ‘console-setup’ and is missing everything else. There is no grub file.
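For anyone willing to dig further with me, these are the follow-up checks I plan to try from the (initramfs) prompt. This is a minimal sketch assuming a BusyBox shell and a KVM droplet, where the virtual disk normally appears as /dev/vda (virtio) rather than /dev/hda or /dev/sda:

```shell
# Checks runnable from a BusyBox (initramfs) shell. Assumes a virtio
# virtual disk, which would appear as /dev/vda, not /dev/hda or /dev/sda.
cat /proc/partitions                                      # does the kernel see any block device at all?
grep -i virtio /proc/modules || echo "no virtio modules loaded"
ls /dev/vd* 2>/dev/null || echo "no virtio disks present"
```

My understanding is that if /proc/partitions lists no disk at all, the failure is below the filesystem (a missing virtio driver or a hypervisor-side problem) rather than a corrupt snapshot; whereas if /dev/vda does exist, the UUID in the boot configuration may simply not match the disk that is present.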
Any suggestions or comments are greatly appreciated.