Where did my Ghost blog go?

Um, so my website kinda dropped dead a few days ago – returning an “ERR_CONNECTION_REFUSED” – and after getting in touch with DigitalOcean, since I’ve got a droplet there, they helped me restart the server using commands I wasn’t at all familiar with. The thing is, I don’t really know anything about running my own website, much less how to use a web console, so I’ve totally forgotten everything I had to go through 2 years ago to get my website/blog up and running in the first place X)

Having restarted the server and accessed it using PuTTY, I couldn’t find anything on it, though. No files at all. Still, after the restart, my domain would at least serve me an error page that I had configured myself – though I only vaguely remember doing so. So something should still exist somewhere…

DigitalOcean asked me to try and restart Ghost using a few commands, but the console/server told me that “ghost” was an unrecognized service… No wonder since the server didn’t seem to have anything on it.

Having read some old tutorials, I found a part about how to install Ghost on a server, and since I kinda remember doing all that before, I decided to go for it again. So now I’ve got the blog up and running again, but naturally it’s an empty shell, and for some reason I couldn’t even log back in without creating a new user! Also, the user interface isn’t what it used to be, but maybe that’s just because I never updated Ghost over the past 2 years – I didn’t really know how and didn’t wanna screw things up (why fix something if it ain’t broken, right?).

Wondering if there’s any way I could find out where all my old files went, or else I’m gonna have to copy-paste all of my updates back onto the blog (I’ve got them all saved as Word documents) ._.

Also, when I checked the error log as requested by DigitalOcean before I reinstalled Ghost, I got this message - which doesn’t tell me anything:

2015/10/31 20:49:24 [error] 22940#0: *83 connect() failed (111: Connection refused) while connecting to upstream, client:, server: conradargo.me_, request: “GET /robots.txt HTTP/1.1”, upstream: “”, host: “


@conradargo - If you’re unfamiliar with the Linux CLI (command line interface), with updating and upgrading the system and its packages from the CLI, and with updating software such as Ghost the same way, then an unmanaged environment, such as what DigitalOcean provides, may not be the best place to host a website that you actually care about.

Unmanaged, in the simplest of terms, means that you’re on your own when it comes to updating, upgrading and managing server security (and this goes beyond Ghost). DigitalOcean will update the core server (which is where Droplets are deployed - the actual hardware node), though everything else is up to you. The one exception here is that DigitalOcean handles kernel updates (as they do not allow for custom kernels at the moment).

That said, it can provide an absolutely wonderful environment in which to learn each of these things. Learning how to properly secure, optimize and run a server can provide enormous benefit and allow you to more effectively operate a website – things you simply would not be able to learn when in a shared hosting environment.

I mention these things not to scare you off, or turn you away from DigitalOcean, rather, to let you know what comes with such a service. Software development is rapid – new security holes are found and if not patched, are open for exploiting by anyone with knowledge of the exploit. If you fail to update, you’re leaving yourself wide open. This applies server-side (OS, PHP, MySQL, NodeJS, System Packages, SSH etc) and client-side (Ghost, WordPress, Plugins etc). Going 2 years without updating anything simply because it works won’t cut it.

Software is like most things – you wouldn’t go 2 years without an oil change, or 2 years without food (extreme example, but an example nonetheless), or 2 years without updating your site (in most cases – otherwise you’d be paying for something you’re not using). Just a few examples, of course.

As for finding out where your files went: if you were the target of an attack, a simple rm -rf on a directory from the CLI would wipe everything in it. If an exploit in Ghost allowed an attacker to gain access to NodeJS or the CLI and escalate privileges, that’s all it would take. If NodeJS was being run as the root user, the chances of an exploit succeeding rise to the point of being near-guaranteed. Node should not run as root – that grants full access regardless of any per-file or per-directory permission limits. Realistically, only what truly has to run as root should; NodeJS, PHP and the like are not among them. They may need root to start (e.g. to bind a privileged port), but should then drop privileges or be run as another user such as www-data or a user-defined account.
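One quick way to check this yourself – a sketch, assuming the standard Linux procps `ps`:

```shell
# Print owner, PID and command of every running node process.
# If the first column is 'root', Node/Ghost has full privileges.
ps -o user=,pid=,cmd= -C node 2>/dev/null || echo "no node process running"
```

If this prints a user like www-data or ghost, privileges were dropped as they should be.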

As for DigitalOcean snapshots, while beneficial, they are not meant (as of right now) to be used as a primary backup source; offsite backups should still be taken, and taken more frequently. Snapshots are not always (IIRC) run at the same time, nor guaranteed to be daily. Beyond that, you’re unable to set up multiple backups per day or rotate on a weekly/monthly basis. They also require that your Droplet be temporarily shut down until the snapshot completes – the larger the Droplet, the longer that takes. This is another area you’re responsible for with unmanaged services – backups should never be expected, nor assumed. Back up, and back up frequently.
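A minimal offsite-backup sketch – the content path and backup host here are assumptions, so adjust them to match your own install:

```shell
# Assumed Ghost install path -- change this to match your droplet:
GHOST_CONTENT=/var/www/ghost/content

if [ -d "$GHOST_CONTENT" ]; then
    # Archive the content directory with today's date in the name:
    tar -czf "ghost-content-$(date +%F).tar.gz" "$GHOST_CONTENT"
    # Then copy the archive off the droplet (host/path illustrative):
    # scp ghost-content-*.tar.gz you@backup-host:/backups/
else
    echo "no Ghost content directory at $GHOST_CONTENT"
fi
```

Run from cron, this gives you dated archives independent of DigitalOcean’s own snapshot schedule.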

The 2015/10/31 20:49:24 entry shown is most likely from nginx’s [error] log. It’s stating that a GET request (most likely from a browser – GET is the HTTP method used when you access a website URL in Chrome, IE, Firefox, etc.) for robots.txt failed because nginx was unable to connect to the upstream (NodeJS/Ghost) that actually serves the file.

A few basic reasons for such an error (not all-inclusive):

1). The web server is not active (i.e. there is no running process to serve the request).
2). The web server is active, but the file does not exist.
3). The connection simply failed (which, from the text, would be the case here).
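To confirm the connection-failure case yourself, you can probe the upstream directly from the droplet. This sketch assumes Ghost’s default internal port of 2368 – confirm the actual port in your config.js:

```shell
# nginx proxies public traffic to Ghost on 127.0.0.1:2368 by default
# (an assumption -- confirm the port in your Ghost config.js).
# If nothing answers here, nginx logs exactly the
# "connect() failed (111: Connection refused)" error shown above.
if curl -sI --max-time 2 http://127.0.0.1:2368/robots.txt >/dev/null 2>&1; then
    echo "upstream is answering"
else
    echo "upstream refused/unreachable -- Ghost is not running"
fi
```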

That said, you can run a few commands from the CLI to see what is running, what is installed etc.

top Allows you to see currently active processes. Press Shift + M to sort by memory usage and Shift + P to sort by CPU usage (lowercase c toggles display of the full command line). This will allow you to view which user is running which process, how much memory and virtual memory is being used, and the command being executed (far right of the output).
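If you’d rather capture the same information non-interactively (assuming the Linux procps tools), ps can do the sorting for you:

```shell
# Snapshot of the top 10 processes by memory usage -- first column is
# the owning user, last column the command being executed:
ps aux --sort=-%mem | head -n 11
```

This is easier to copy into a support ticket than the interactive top display.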

which You can run which node, as an example, to see whether – and where – NodeJS is installed. This is a super-quick way to see if NodeJS was removed without having to dive into all of the directories, search for files, etc. If the return is blank, there’s nothing there. If it returns a path, then NodeJS is still installed (but that doesn’t mean that it works, or that it’s unmodified). Note that which only shows the binary’s location, not its version.
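For example – the fallbacks after || are just so the commands don’t error out when node is missing:

```shell
# 'which' prints the executable's path if it's on $PATH; empty output
# and a non-zero exit mean it isn't installed there at all:
which node || echo "node not found on PATH"

# 'which' only tells you *where* the binary lives -- for the version,
# ask the binary itself:
node -v 2>/dev/null || echo "node not installed (or not on PATH)"
```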

service x start | restart | stop You can use service node start, service node restart or service node stop to attempt to start, restart and stop the service, respectively. You can substitute node for the name of any common or known service (e.g. php5-fpm, nginx, apache2, etc.).
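The general shape, with nginx as the example – substitute the name of whatever service is actually installed on your droplet:

```shell
# service <name> start|restart|stop|status -- on newer systemd-based
# distros, 'systemctl restart <name>' is the modern equivalent.
# The fallback message just keeps this from erroring if the
# service isn't registered:
service nginx status 2>/dev/null || echo "no nginx service registered here"
```

status is the safe one to try first – it reports state without touching the running process.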

history This will allow you to see which commands have been run by the current user (root, if that’s who you log in as), unless the history has been cleared. This would give you a basic idea as to whether someone could have tampered with the files and run a command, such as rm -rf, thus deleting all files in its path.
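For example – this assumes bash and that the history hasn’t been wiped; history itself is a shell builtin reading the in-memory list, while ~/.bash_history is the copy persisted between sessions:

```shell
# Search the saved shell history for destructive commands:
grep -n 'rm -rf' ~/.bash_history 2>/dev/null || echo "no 'rm -rf' in saved history"
```

A hit here isn’t proof of an attack (you may have run it yourself), but a cleared or missing history file on a box you didn’t clean is itself suspicious.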