SFO2 Blues... Servers down, no connection

April 11, 2017
DigitalOcean Ubuntu 16.04

This is the first downtime I have had in almost two years of using DigitalOcean, but it has been bad. Apparently the SFO2 servers went down last night, and they have been working hard on getting things back up.

This morning I am dealing with multiple websites down, clients calling me, and projects due that I cannot work on... I am stopped dead in my tracks because of this.

Hopefully this is a one-off. I am not having a good day.

3 Answers

...everything on my end is back up. I had to power-cycle some of my Droplets, and a couple of Droplets were powered off...

  • I just had to go through every one of my various clients' DO accounts and power on their Droplets.

    If you are having issues with your SFO2-based Droplet, make sure it is powered on, and power-cycle it if you still have issues (see the doctl sketch below).

    All my websites are back up now.
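    If you manage a lot of Droplets, DigitalOcean's doctl CLI can save some clicking through the control panel. A rough sketch (the Droplet ID is a placeholder, and each account needs its own API token configured):

    # list Droplets and their power state
    doctl compute droplet list --format ID,Name,Status

    # power on a Droplet that is off, or power-cycle one that is stuck
    doctl compute droplet-action power-on 12345678
    doctl compute droplet-action power-cycle 12345678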


@sierracircle

This is one of the biggest reasons I push redundancy so hard when it comes to working with clients.

This could happen with any provider -- even if you're Google or Amazon. At the end of the day, you're still deploying virtualized servers to a network of bare-metal servers. The cloud is just a way of bringing it all together and making what used to take hours or days to deploy take only a few minutes or less (depending on what you're doing).

If you have clients that you host, data that you rely on, projects that you need remote access to, etc., you need redundancy. Whether that's in the form of load balancing across multiple data centers or another means of your choice, it needs to exist -- it's the solution (and a relatively low-cost one at that).

I say this not to make an already frustrating situation worse, but as a word of advice to prevent this in the near or distant future. Yes, it's more to manage and something else to look after, but if you're not fond of downtime and need near-100% uptime, load balancing and failover are the way to go.

  • @jtittle thanks for the ideas. It sounds like that will be my next step in my digital life.

    Do you have any tips for failover? Tutorials, applications, etc. that you use?

    • @sierracircle

      No problem, and yes :-).

      Honestly, I'm a big fan of NGINX. It shows in my posts here within the community, so that's mainly what I recommend when it comes to anything that it can be used for.

      NGINX functions as a Load Balancer, Reverse Proxy, Caching Reverse Proxy, Web Server, etc. You can even mix them all together in a single instance (not really what I'd recommend, but it is very much possible).

      So my recommendation really depends on what you're trying to achieve and how you want to go about achieving it. To get the ball rolling, I'll stick with what I mainly work with and that is PHP.

      ...

      If I were running a few Droplets with NGINX, PHP(-FPM), and MySQL/MariaDB and began running into uptime issues across data centers, the first thing I would look at is what I can do to get rid of this pain point and make it less painful or resolve it entirely.

      DNS

      Before we get into anything, DNS is a big one. You need a way to quickly change IPs. Some hate CloudFlare, but I love it for this particular reason. Changes resolve quickly, as in seconds. I can point a domain to one IP now, watch the server go down, swap that IP, and have it back up and running almost as soon as the request is submitted. Without quick DNS resolution, you're going to face downtime no matter what you do.
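      A record swap like that can even be scripted against CloudFlare's v4 API. Just a sketch -- the zone ID, record ID, credentials, and IP below are all placeholders:

      curl -X PUT "https://api.cloudflare.com/client/v4/zones/ZONE_ID/dns_records/RECORD_ID" \
           -H "X-Auth-Email: you@example.com" \
           -H "X-Auth-Key: YOUR_API_KEY" \
           -H "Content-Type: application/json" \
           --data '{"type":"A","name":"example.com","content":"NEW_DROPLET_IP","ttl":1}'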

      NGINX as a Load Balancer

      NGINX as a Load Balancer can balance incoming requests across any number of other NGINX instances. The only limit is hardware (mainly CPU/RAM, since load balancing doesn't really hit the disk, even in higher-traffic scenarios).

      You can use Round Robin, Weighted, or IP Hashing.

      Round Robin just distributes requests evenly; you have no real say in where each request goes, it simply rotates through the servers. Weighted means that X, Y, Z servers can be given preference: depending on the weight assigned, one or more servers will receive more requests than those with lower weights. IP Hashing tries to send visitors back to the same server that first served their request -- if they are using a proxy, have a dynamic IP, or similar, this probably won't work too well (and if you are using sessions server-side, it can be a major PITA).

      If you use IP Hash and have any type of sessions, sessions need to be offloaded to Redis, Memcached, or another KV store so that you aren't relying on the server handling the request to handle sessions too. File-based sessions are useless here (as is the case, in general, once you start scaling).

      Why? You can't rely on X server to give session data to Y server -- there may be a delay, it may not exist after a certain amount of time, etc. Using a KV store fixes that.
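      With PHP, for instance, offloading sessions to Redis comes down to two php.ini (or FPM pool) directives. A sketch, assuming the phpredis extension is installed, with a placeholder private-network IP:

      ; store sessions in Redis instead of on the local filesystem
      session.save_handler = redis
      session.save_path = "tcp://10.132.0.5:6379"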

      NGINX as a Proxy

      Now, much like using NGINX as a Load Balancer, NGINX can also proxy a request to another server. It works very similarly; the difference is you're not using RR, Weighted, or IP Hash -- you're just accepting a request on the Proxy NGINX instance and sending it to another NGINX instance to handle.

      You can change the IP in the server block at any time, reload the configuration, and NGINX will proxy to whatever IP you tell it to.
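      On Ubuntu 16.04, that's as simple as testing the new config and reloading without dropping connections:

      sudo nginx -t && sudo systemctl reload nginx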

      This is beneficial if you can quickly spin up Droplets and you don't also want to set up a distributed storage medium (which you'll need if you start load balancing; otherwise you have to replicate data across multiple servers to maintain consistency).

      ...

      So what does all this look like? Well, let's start with the easier of the two, using NGINX as a Proxy.

      upstream backend {
          server 203.0.113.44;
      }

      server
      {
          listen 80;
          listen [::]:80;
          server_name _;

          location /
          {
              proxy_pass http://backend;
              include /etc/nginx/config/proxy/proxy.conf;
          }
      }
      

      The above accepts a request on port 80, proxies it to 203.0.113.44 (a reserved documentation IP, standing in for your real one), and from there 203.0.113.44 handles the request (i.e. NGINX and whatever stack you've set up).

      It doesn't know or care whether the application you're proxying to is PHP, Node.js, etc. That doesn't matter here. All this server block cares about is accepting a request and sending it on, nothing more, nothing less.

      The proxy.conf file it includes looks like this (its contents were omitted above for simplicity).

      # buffering
      proxy_buffers 16 32k;
      proxy_buffer_size 64k;
      proxy_busy_buffers_size 128k;
      proxy_temp_file_write_size 64k;

      # caching behavior
      proxy_cache_bypass $http_pragma $http_authorization;
      proxy_no_cache $http_pragma $http_authorization;
      proxy_ignore_headers Cache-Control Expires;

      # timeouts and failover
      proxy_connect_timeout 59s;
      proxy_read_timeout 600;
      proxy_send_timeout 600;
      # try the next upstream server on errors, timeouts, and these status codes
      proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;

      proxy_http_version 1.1;
      proxy_redirect off;
      proxy_hide_header X-Powered-By;
      proxy_pass_header Set-Cookie;

      # headers passed through to the backend
      proxy_set_header Accept-Encoding '';
      proxy_set_header Cookie $http_cookie;
      proxy_set_header Host $host;
      proxy_set_header Proxy '';  # httpoxy mitigation
      proxy_set_header Referer $http_referer;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Host $host;
      proxy_set_header X-Forwarded-Server $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header X-Original-Request $request_uri;
      

      That is production-ready -- I use it myself, and nothing should really need to be changed if you use it yourself.

      Load Balancing is pretty much just as easy, really. This is a basic load balancer config.

      upstream backend {
          server server1.domain.com;
          server server2.domain.com;
          server server3.domain.com;
      }

      server
      {
          listen 80;
          listen [::]:80;
          server_name _;

          location /
          {
              proxy_pass http://backend;
              include /etc/nginx/config/proxy/proxy.conf;
          }
      }
      

      You can use public IPv4 addresses, private-network IPv4 addresses, or hostnames. The above is Round Robin (NGINX's default).

      Weighted looks something like:

      upstream backend {
          server server1.domain.com weight=3;
          server server2.domain.com weight=2;
          server server3.domain.com weight=1;
      }

      server
      {
          listen 80;
          listen [::]:80;
          server_name _;

          location /
          {
              proxy_pass http://backend;
              include /etc/nginx/config/proxy/proxy.conf;
          }
      }
      

      While IP Hash looks like:

      upstream backend {
          ip_hash;
          server server1.domain.com;
          server server2.domain.com;
          server server3.domain.com;
      }

      server
      {
          listen 80;
          listen [::]:80;
          server_name _;

          location /
          {
              proxy_pass http://backend;
              include /etc/nginx/config/proxy/proxy.conf;
          }
      }
      

      As you can see, not much really changes, so hopefully you can see why I prefer NGINX here. You change a few lines, and otherwise a very similar configuration works for the same or different purposes.

      There is one more method, Least Connections, which means that when a request comes in, NGINX will send it to whichever server has the fewest active connections at that moment.

      All we'd do to use that is replace ip_hash; with least_conn;.
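      So the upstream block would simply become:

      upstream backend {
          least_conn;
          server server1.domain.com;
          server server2.domain.com;
          server server3.domain.com;
      }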

      ...

      That being said, if you can give me a few ideas as to what you're actually doing, I can make a few more accurate recommendations as to what you might want to do.

      Let me know what kind of projects, what you're using, what is difficult for you now, what would make things easier, etc.

      • @sierracircle

        As a general note, I rarely use repository packages for NGINX -- 99.99% of the time, I'll build it from source. It takes a while, so it may be something you only want to do for your production servers and not the LB/Proxy servers, but it's really the only way to tweak and tune NGINX for the best overall performance.

        Some modules you probably don't need. With most repositories, you get a select few (which may not be enough) or way too many (which means added bulk to your build).
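        To give you a rough idea, here's an illustrative configure run -- the flags are just examples, and the real set depends on which modules you need (plus build dependencies like PCRE and zlib being installed):

        ./configure --prefix=/etc/nginx \
                    --with-http_ssl_module \
                    --with-http_v2_module \
                    --without-http_autoindex_module
        make && sudo make install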

        If you want to see what I use right now, I wrote an auto-installer for NGINX (Mainline) which compiles NGINX from source and adds support for Brotli compression, PageSpeed, and a few other modules.

        It auto-generates dhparams (for SSL) and has the same configuration examples I posted above in the repo on GitHub.

        I recommend using it on a fresh, clean server. Don't even bother running update or upgrade with apt. Ubuntu 16.04 and 16.10 already have git pre-installed, so running it is as simple as:

        cd /opt \
        && git clone https://github.com/serveradminsh/installers.git \
        && cd installers/nginx \
        && chmod +x installer.sh \
        && ./installer.sh
        

        It'll take 20-30 minutes to run, as generating a 4096-bit dhparam file is time consuming, but it'll provide you with a setup you can use and tinker with.
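        If you're curious, that dhparam generation is a single openssl command (the output path is just an example), and it's what eats most of that time:

        openssl dhparam -out /etc/nginx/ssl/dhparam.pem 4096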

        Toss that on a 1GB Droplet and grab some coffee/tea, then get familiar with NGINX a little more :-).

        As a sidenote, my e-mail is in my profile. Feel free to get in touch if you want to talk specifics and don't want to reveal too much on an open forum.
