@conradargo - If you’re unfamiliar with the Linux CLI (command line interface) – updating and upgrading the system and system packages, updating and upgrading software such as Ghost, all from the CLI – then an unmanaged environment, such as what DigitalOcean provides, may not be the best place to host a website you actually care about.
Unmanaged, in the simplest of terms, means that you’re on your own when it comes to updating, upgrading and managing server security (and this goes beyond Ghost). DigitalOcean maintains the hardware node that Droplets are deployed on, and handles kernel updates (as they do not allow custom kernels at the moment), but everything else is up to you.
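As a concrete example, on an Ubuntu/Debian Droplet (an assumption – adjust for your distribution), keeping system packages current boils down to:

```
# Refresh the package index, then apply available upgrades
sudo apt-get update
sudo apt-get upgrade

# Occasionally needed for upgrades that add or remove dependencies
sudo apt-get dist-upgrade
```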
That said, it can provide an absolutely wonderful environment in which to learn each of these things. Learning how to properly secure, optimize and run a server can provide enormous benefit and allow you to operate a website far more effectively – things you simply would not be able to learn in a shared hosting environment.
I mention these things not to scare you off, or turn you away from DigitalOcean, but rather to let you know what comes with such a service. Software development moves quickly – new security holes are found, and if not patched, they’re open to exploitation by anyone who knows about them. If you fail to update, you’re leaving yourself wide open. This applies server-side (OS, PHP, MySQL, NodeJS, system packages, SSH, etc.) and client-side (Ghost, WordPress, plugins, etc.). Going two years without updating anything simply because it still works won’t cut it.
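If remembering to run updates is the part that worries you, Ubuntu can at least apply security patches automatically via the unattended-upgrades package (a sketch, assuming Ubuntu – the package and file names may differ elsewhere):

```
# Install and enable automatic security updates
sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# The schedule lives in /etc/apt/apt.conf.d/20auto-upgrades:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
```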
Software is like most things – you wouldn’t go two years without an oil change, two years without food (an extreme example, but an example nonetheless), or two years without an update on your site (in most cases – otherwise you’d be paying for something you’re not using). Just a few examples, of course.
–
In regards to finding out where your files went: if you were the target of an attack, a simple `rm -rf` from the CLI on a directory would wipe everything in it. If there was an exploit in Ghost that allowed an attacker to gain access to NodeJS or the CLI and escalate privileges, that’s all it would take. If NodeJS was being run as the `root` user, the chances of an exploit succeeding are far higher – to the point of being all but guaranteed. It shouldn’t be run as `root`, as that gives full access regardless of any per-user restrictions on files and directories. Realistically, only what absolutely has to run as `root` should – NodeJS, PHP, etc. are not among them. They may need `root` initially (to bind a privileged port, for example), but should then drop those privileges or be run as another user such as `www-data` or a user you define.
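A minimal sketch of that pattern from the CLI (assuming your Ghost install lives in /var/www/ghost and a `www-data` user exists – adjust to your setup):

```
# Give the non-root user ownership of the app's files
sudo chown -R www-data:www-data /var/www/ghost

# Run Node as that user instead of root
sudo -u www-data node /var/www/ghost/index.js

# If the app must listen on port 80/443, grant only that capability
# to the node binary rather than running the whole process as root
sudo setcap 'cap_net_bind_service=+ep' "$(readlink -f "$(which node)")"
```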
As for DigitalOcean snapshots: while beneficial, they are not meant to be used (as of right now) as a primary backup source; offsite backups should still be taken, and taken more frequently. The snapshots are not always (IIRC) run at the same time, nor guaranteed to be daily. Beyond that, you’re unable to set up multiple backups per day or rotate on a weekly/monthly basis. They also require that your Droplet be temporarily shut down until the backup completes – the larger the Droplet, the longer it’s going to take. This is another area you’re responsible for with unmanaged services – backups should never be expected, nor assumed. Back up, and back up frequently.
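One hedged example of a bare-bones offsite backup from cron (the paths, remote host and MySQL credentials are placeholders – Ghost may also be using SQLite by default, in which case the database file sits under content/data and is picked up by the rsync):

```
#!/bin/sh
# /etc/cron.daily/site-backup – runs once a day as root

# Dump the database (skip if Ghost uses the default SQLite file)
mysqldump -u backupuser -pYOURPASS ghost_db > /var/backups/ghost_db.sql

# Push the site files and the dump to an offsite machine
rsync -az /var/www/ghost/ /var/backups/ghost_db.sql \
    backupuser@offsite.example.com:/backups/$(hostname)/
```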
–
The `2015/10/31 20:49:24` entry shown is most likely from the error log. It’s stating that the GET request (most likely a browser request, as GET is the HTTP method used when you access a website URI from a browser such as Chrome, IE, Firefox, etc.) for `robots.txt` failed because the web server (NodeJS/Ghost) was unable to serve the requested file.
A few basic reasons for such an error (not all-inclusive – see the quick check below the list):

1) The web server is not active (i.e. there is no running process to serve the request).
2) The web server is active, but the file does not exist.
3) The connection simply failed (which, from the text, would be the case here).
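To narrow it down, you can request the file yourself and look at the status code (assuming `curl` is installed, and substituting your own domain):

```
# -I fetches headers only: a 200 means the file is being served,
# a 404 means the server is up but the file is missing,
# and a connection error means the server isn't reachable at all
curl -I http://yourdomain.com/robots.txt
```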
–
That said, you can run a few commands from the CLI to see what is running, what is installed, etc.
`top`

Allows you to see currently active processes. You can press `Shift + M` to sort by memory usage and `Shift + P` to sort by CPU usage. This will allow you to see which user is running which service, how much memory and virtual memory each process is using, and the command being executed (far right of the output).
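For a non-interactive snapshot (handy for pasting into a forum post), `ps` works too:

```
# Top 10 processes by memory usage; swap -%mem for -%cpu to sort by CPU
ps aux --sort=-%mem | head -n 11
```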
`which`

You can run `which node`, as an example, to see whether (and where) NodeJS is installed. This is a super-quick way to check that NodeJS wasn’t removed, without having to dive into directories and search for files. If the return is blank, there’s nothing there. If it returns a path, then NodeJS is still installed (though that doesn’t mean it works, or that it’s unmodified).
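And if you do get a path back, the binary itself will report its version (example output shown in the comments):

```
which node      # e.g. /usr/bin/node
node -v         # e.g. v0.10.40 – confirms the binary actually runs
```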
`service x start|restart|stop`

You can use `service node start`, `service node restart` or `service node stop` to attempt to start, restart and stop the service respectively. You can substitute `node` for the name of any common or known service (e.g. `php5-fpm`, `nginx`, `apache2`, etc.).
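Before restarting anything, `status` is a safe first check (assuming the service was set up with an init script under that name – a Ghost install deployed by hand may not have one):

```
# Shows whether the init system thinks the service is running
service nginx status
service ghost status   # only if an init script named "ghost" exists
```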
`history`

This will allow you to see what commands have been run as `root`, unless the history has been cleared. It would give you a basic idea as to whether someone tampered with the files and ran a command such as `rm -rf`, deleting all files in its path.
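A couple of quick places to look (assuming bash and a standard login setup):

```
# Recent commands in the current root session
history | tail -n 50

# Root's saved history from previous sessions
less /root/.bash_history

# Recent logins – unexpected IPs or users are a red flag
last -n 20
```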
Have you checked to see whether you have any old snapshots in your DigitalOcean control panel that you could use to recover your files/configs?

Recommended reading here:
https://www.digitalocean.com/community/tutorials/digitalocean-backups-and-snapshots-explained
No snapshots I’m afraid :/