Recommended Security Measures to Protect Your Servers

Introduction

Getting your applications up and running will often be your primary concern when you’re working on cloud infrastructure. As part of your setup and deployment process, it is important to build robust and thorough security measures into your systems and applications before they are publicly available. Implementing the security measures in this tutorial before you deploy your applications will ensure that any software you run on your infrastructure has a secure base configuration, as opposed to ad-hoc measures that are implemented after deployment.

This guide highlights some practical security measures that you can take while you are configuring and setting up your server infrastructure. It is not an exhaustive list of everything you can do to secure your servers, but it offers a starting point that you can build upon. Over time you can develop a more tailored security approach that suits the specific needs of your environments and applications.

SSH Keys

SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a server, you’ll likely spend most of your time in a terminal session connected to your server through SSH. A more secure alternative to password-based logins, SSH keys use encryption to provide a secure way of logging into your server and are recommended for all users.

With SSH keys, a private and public key pair are created for the purpose of authentication. The private key is kept secret and secure by the user, while the public key can be shared.

SSH Keys diagram

To configure SSH key authentication, you must place your public SSH key in the expected directory on the server (typically ~/.ssh/authorized_keys for the account you log in as). When your client first connects to the server, the server will ask for proof that you have the associated private key. It does this by generating a random value and sending it to your SSH client. Your SSH client will then use your private key to encrypt the response and send the encrypted reply to the server. The server then decrypts your client’s reply using your public key. If the server can decrypt the random value, it means that your client possesses the private key, and the server will let you connect without a password.

To learn more about how SSH-key-based authentication works, check out our article, Understanding the SSH Encryption and Connection Process.

How Do SSH Keys Enhance Security?

With SSH, any kind of authentication — including password authentication — is completely encrypted. However, when password-based logins are allowed, malicious users can repeatedly attempt to access a server, especially if it has a public-facing IP address. With modern computing power, it is possible to gain entry to a server by automating these attempts and trying combination after combination until the right password is found.

Setting up SSH key authentication allows you to disable password-based authentication. SSH keys generally have many more bits of data than a password, meaning that there are significantly more possible combinations that an attacker would have to run through. Many SSH key algorithms are considered uncrackable by modern computing hardware because they would require too much time to run through all of the feasible matches.

How to Implement SSH Keys

SSH keys are the recommended way to log into any Linux server environment remotely. A pair of SSH keys can be generated on your local machine and you can transfer the public key to your servers within a few minutes.

To set up SSH keys on your server, follow our distribution-specific guides: How To Set Up SSH Keys for Ubuntu, Debian, or CentOS.
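
As a quick sketch of the process: on your local machine, generate a key pair and copy the public key to the server. The username sammy and the address your_server_ip below are placeholders; substitute your own values.

  • ssh-keygen -t ed25519
  • ssh-copy-id sammy@your_server_ip

The first command creates a key pair (accept the default file location and consider protecting the private key with a passphrase), and the second appends the public key to the remote account’s ~/.ssh/authorized_keys file. Afterwards, ssh sammy@your_server_ip should log you in without prompting for the account password.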

If you would still like to use password authentication, consider implementing a solution like fail2ban on your servers to limit password guesses.

In either case, it is a best practice not to allow the root user to log in directly over SSH. Instead, log in as an unprivileged user and then escalate privileges as needed using a tool like sudo. This approach to limiting permissions is known as the principle of least privilege. Once you have connected to your server and created an unprivileged account that you have verified works with SSH, you can disable root logins by setting the PermitRootLogin no directive in /etc/ssh/sshd_config on your server and then restarting the server’s SSH process with a command like sudo systemctl restart sshd.
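
For reference, once password logins and root logins are both disabled, the relevant directives in /etc/ssh/sshd_config would read as follows (a minimal excerpt; your file will contain many other directives):

    PermitRootLogin no
    PasswordAuthentication no

Keep an existing SSH session open while you apply and test changes like these so that a mistake in the configuration cannot lock you out of the server.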

Firewalls

A firewall is a software or hardware device that controls how services are exposed to the network, and what types of traffic are allowed in and out of a given server or servers. A properly configured firewall will ensure that only services that should be publicly available can be reached from outside your servers or network.

Firewall diagram

On a typical server, a number of services may be running by default. These can be categorized into the following groups:

  • Public services that can be accessed by anyone on the internet, often anonymously. An example of this is a web server that may allow access to your site.
  • Private services that should only be accessed by a select group of authorized accounts or from certain locations. For example, a database control panel like phpMyAdmin.
  • Internal services that should be accessible only from within the server itself, without exposing the service to the public internet. For example, a database that should only accept local connections.

Firewalls can ensure that access to your software is restricted according to the categories above with varying degrees of granularity. Public services can be left open and available to the internet, and private services can be restricted based on different criteria, such as connection types. Internal services can be made completely inaccessible to the internet. For ports that are not being used, access is blocked entirely in most configurations.

How Do Firewalls Enhance Security?

Even if your services implement security features or are restricted to the interfaces you’d like them to run on, a firewall serves as a base layer of protection by limiting connections to and from your services before traffic is handled by an application.

A properly configured firewall will restrict access to everything except the specific services you need to remain open. Exposing only a few pieces of software reduces the attack surface of your server, limiting the components that are vulnerable to exploitation.

How to Implement Firewalls

There are many firewalls available for Linux systems, some of which are more complex than others. In general, though, setting up a firewall should only take a few minutes and will only need to happen during your server’s initial setup or when you make changes to the services running on your server. Here are some options to get up and running:
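
One option is UFW (Uncomplicated Firewall), which ships with Ubuntu. The commands below are a minimal sketch for a server running SSH and a public web server; the allowed services are examples, so adjust them to match your own setup:

  • sudo ufw default deny incoming
  • sudo ufw default allow outgoing
  • sudo ufw allow OpenSSH
  • sudo ufw allow http
  • sudo ufw enable

Be sure that SSH is allowed before running ufw enable so that you do not cut off your own connection, and use sudo ufw status to review the active rules.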

info
If you are using DigitalOcean, you can also leverage Cloud Firewalls at no additional cost; they can be set up in minutes.

With any of the tutorials mentioned here, be sure that your firewall configuration defaults to blocking unknown traffic. That way, any new services that you deploy will not be inadvertently exposed to the internet. Instead, you will have to allow access explicitly, which will force you to evaluate how each service is run, how it is accessed, and who should be able to use it.

VPC Networks

Virtual Private Cloud (VPC) networks are private networks for your infrastructure’s resources. VPC networks provide a more secure connection among resources because the network’s interfaces are inaccessible from the public internet and other VPC networks in the cloud.

How Do VPC Networks Enhance Security?

Given the choice between the two, using private rather than public networking for internal communication is preferable, as VPC networks allow you to isolate groups of resources into specific private networks. Resources within a VPC network connect to each other using their private network interfaces over an internal network, which means that traffic among your systems will not be routed through the public internet, where it could be exposed or intercepted. VPC networks can also be used to isolate execution environments and tenants.

Additionally, you can set up internet gateways as the single point of access between your VPC network’s resources and the public internet, giving you more control and visibility into the public traffic connecting to your resources.

How to Implement VPC Networks

Many cloud infrastructure providers enable you to create and add resources to a VPC network inside their data centers.

info
If you are using DigitalOcean and would like to set up your own VPC gateway, you can follow our How to Configure a Droplet as a VPC Gateway guide to learn how on Debian, Ubuntu, and CentOS based servers.

DigitalOcean places each applicable resource (Droplets, load balancers, Kubernetes clusters, and databases) into a VPC upon creation at no additional cost.

Manually configuring your own private network can require advanced server configurations and networking knowledge. An alternative to setting up a VPC network is to use a VPN connection between your servers. If you are using Ubuntu or CentOS, you can follow this How To Set Up and Configure an OpenVPN Server on Ubuntu 20.04 tutorial.

For a less complex VPN between Ubuntu servers, follow this How to Install Tinc and Set Up a Basic VPN on Ubuntu 18.04 tutorial.

Service Auditing

A big portion of security involves analyzing your systems, understanding the available attack surfaces, and locking down the components as best as you can.

Service auditing diagram

Service auditing is a way of knowing what services are running on a given system, which ports they are using for communication, and what protocols are accepted. This information can help you configure which services should be publicly accessible, firewall settings, and monitoring and alerting.

How Does Service Auditing Enhance Security?

Servers can run processes for internal purposes and to handle external clients. Each running service, whether it is intended to be internal or public, represents an expanded attack surface for malicious users. The more services that you have running, the greater the chance of a vulnerability affecting your software.

Once you have a good idea of what network services are running on your machine, you can begin to analyze these services. When you perform a service audit, ask yourself the following questions about each running service:

  • Should this service be running?
  • Is the service running on network interfaces that it shouldn’t be running on?
  • Should the service be bound to a public or private network interface?
  • Are my firewall rules structured to pass legitimate traffic to this service?
  • Are my firewall rules blocking traffic that is not legitimate?
  • Do I have a method of receiving security alerts about vulnerabilities for each of these services?

This type of service audit should be standard practice when configuring any new server in your infrastructure. Performing service audits every few months will also help you catch any services with configurations that may have changed unintentionally.

How to Perform Service Audits

To audit the network services running on your system, use the ss command to list all the TCP and UDP ports that are in use on a server. The following example command shows the program name, PID, and the addresses being used to listen for TCP and UDP traffic:

  • sudo ss -plunt

You will receive output similar to this:

Output
Netid  State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port  Process
tcp    LISTEN  0       128     0.0.0.0:22          0.0.0.0:*           users:(("sshd",pid=812,fd=3))
tcp    LISTEN  0       511     0.0.0.0:80          0.0.0.0:*           users:(("nginx",pid=69226,fd=6),("nginx",pid=69225,fd=6))
tcp    LISTEN  0       128     [::]:22             [::]:*              users:(("sshd",pid=812,fd=4))
tcp    LISTEN  0       511     [::]:80             [::]:*              users:(("nginx",pid=69226,fd=7),("nginx",pid=69225,fd=7))

The main columns that need your attention are the Netid, Local Address:Port, and Process name columns. If the Local Address:Port is 0.0.0.0, then the service is accepting connections on all IPv4 network interfaces. If the address is [::] then the service is accepting connections on all IPv6 interfaces. In the example output above, SSH and Nginx are both listening on all public interfaces, on both IPv4 and IPv6 networking stacks.

With this example output, you could decide whether you want SSH and Nginx to listen on both interfaces, or only on one or the other. Generally, you should disable services that are running on unused interfaces. For example, if your site should only be reachable via IPv4, you would explicitly prevent a service from listening on IPv6 interfaces to reduce the number of exposed services.
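
For instance, with Nginx you could restrict a site to IPv4 by removing the IPv6 listen directive from its server block. The snippet below is a sketch; the actual file lives in your Nginx configuration directory and will contain additional directives:

    server {
        listen 80;          # accept connections on all IPv4 interfaces
        # listen [::]:80;   # removing or commenting out this line disables IPv6 for this site
        # . . .
    }

After reloading Nginx with sudo systemctl reload nginx, you can re-run the ss command above to confirm that the service is no longer listening on [::]:80.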

Unattended Updates

Keeping your servers up to date with patches is a must for maintaining a good base level of security. Servers that run out-of-date, insecure versions of software are responsible for the majority of compromises, but regular updates can mitigate vulnerabilities and prevent attackers from gaining a foothold on your servers.

Traditional updates require an administrator to manually check for and install updates for the various packages on their server; this can be time-intensive and it’s possible to forget or miss a major update. By contrast, unattended updates allow the system to update a majority of packages automatically.

How Do Unattended Updates Enhance Security?

Implementing unattended updates lowers the level of effort required to keep your servers secure and shortens the amount of time that your servers may be vulnerable to known bugs. In the event of a vulnerability that affects software on your servers, your servers will be vulnerable for however long it takes for you to run updates. Daily unattended upgrades will ensure that you don’t miss any packages, and that any vulnerable software is patched as soon as fixes are available.

In conjunction with the service auditing previously mentioned, performing updates automatically can greatly reduce your exposure to attacks and lower the amount of time spent maintaining the security of your server.

How to Implement Unattended Updates

Most server distributions now feature unattended updates as an option. For example, on Ubuntu an administrator can run:

  • sudo apt install unattended-upgrades
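
After installation, you can confirm that automatic updates are enabled by checking /etc/apt/apt.conf.d/20auto-upgrades, which on Ubuntu typically contains the following two directives (shown as a sketch of the defaults; the exact file name can vary between releases):

    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

A value of "1" means that package lists are refreshed and upgrades are applied daily. You can also run sudo dpkg-reconfigure --priority=low unattended-upgrades to toggle this behavior interactively.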

For more details on how to implement unattended updates, check out these guides for Ubuntu (under Automatic Updates) and Fedora.

Note: These mechanisms will only auto-update software that is installed through your system’s package manager. Make sure that any additional software you may be running, such as web applications, is either configured for automatic updates or checked manually on a regular basis.

Disable Directory Indexes

Most web servers are configured by default to display directory indexes when a user accesses a directory that lacks an index file. For example, if you were to create a directory called downloads on your web server without any additional configuration, all of its files would be visible to anyone browsing the directory. In many cases this is not a security concern, but it is very possible that something confidential could be exposed. For example, your website’s document root may contain both the file for your site’s homepage and a configuration file that holds credentials for the site’s backend database. Without directory indexes disabled, both files would be visible to anyone browsing the directory.

How Does Disabling Directory Indexes Enhance Security?

Directory indexes have legitimate purposes, but they often unintentionally expose files to visitors. Disabling directory indexes as the default for your web server reduces the risk of accidental data loss, leakage, or exploitation by keeping a directory’s files from being listed for visitors. Visitors can still reach the files if they know the exact paths, but disabling indexing makes the files much more difficult to discover unintentionally.

How to Disable Directory Indexes

For most cases, disabling directory indexes is a matter of adding one line to your web server configuration.

  • Nginx disables directory indexes by default, so if you are using Nginx you should not need to make any changes.
  • The DirectoryListings page on the Apache Wiki explains how to disable directory listings. Make sure to use the Options -Indexes setting described there in any of your Apache Directory configuration blocks, as in the sketch following this list.
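
As a minimal sketch for Apache, assuming a document root of /var/www/html (substitute your own path), the configuration block would look like this:

    <Directory /var/www/html>
        Options -Indexes
    </Directory>

After changing the configuration, reload Apache for the change to take effect, for example with sudo systemctl reload apache2 on Debian-based systems (the service is named httpd on RHEL-based systems).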

Back Up Frequently

While not strictly a security measure, backups can be crucial in saving compromised systems and data, and in analyzing how the system was compromised. For instance, if your server is compromised by ransomware (a malicious tool or virus that encrypts files and will only decrypt them if the attacker is paid some sum of money), a lack of backups may mean your only choice is to pay to get your data back. If your systems and data are regularly and securely backed up, you will be able to access and recover your data without interacting with the compromised system.

How Do Frequent Backups Enhance Security?

Frequent backups help recover data in the case of accidental deletions, and in the event of an attack where your data is deleted or corrupted. In either case, they help mitigate the risk of data loss by retaining copies of data from before an accidental deletion or before an attack occurred.

In addition to ransomware cases, regular backups can help with forensic analysis of long-term attacks. If you don’t have a history of your data, it can be difficult or even impossible to determine when an attack began and what data was compromised.

How to Implement Frequent Backups

When implementing backups for your systems, treat verifiable recovery of compromised or deleted data as the goal. Ask yourself: if my server disappears tomorrow, what steps need to be taken to get it back up and running securely with the least amount of work?

Here are a few other questions to consider when developing a disaster recovery plan:

  • Should the latest backup always be used? Depending on how frequently your data changes and when a compromise or deletion occurs, it may reduce risk to instead default to an older backup.
  • What is the actual process for restoring the backup? Do you need to create a new server or restore over the existing one?
  • How long can you survive without this server in action?
  • Do you need offsite backups?

info
If you are using DigitalOcean Droplets, you can enable weekly backups from the control panel by following this guide.

How To Back Up Data to an Object Storage Service with the Restic Backup Client is a tutorial that you can use to design your own backup system that will encrypt your backups and store them off of your production systems. The tutorial will work with servers, or even local desktop and laptop computers.
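
To give a sense of the workflow, the following commands sketch a local Restic backup and a test restore. The repository path /srv/backup and the source directory /var/www are placeholders for illustration; in practice you would point the repository at an off-server location such as an object storage bucket, as the linked tutorial demonstrates:

  • restic init --repo /srv/backup
  • restic --repo /srv/backup backup /var/www
  • restic --repo /srv/backup snapshots
  • restic --repo /srv/backup restore latest --target /tmp/restore-test

Restic prompts you to set a repository password when you run init, and every backup it stores is encrypted with that password. Periodically performing a test restore, as in the last command, verifies that your backups are actually recoverable.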

VPNs and Private Networking

Private networks are networks that are only available to certain servers or users. A VPN, or virtual private network, is a way to create secure connections between remote computers and present the connection as if it were a local private network. This provides a way to configure your services as if they were on a private network and connect remote servers over secure connections.

VPN diagram

For example, DigitalOcean private networks enable isolated communication between servers in the same account or team within the same region.

How Do They Enhance Security?

Using private instead of public networking for internal communication is almost always preferable given the choice between the two. However, since other users within the data center are able to access the same network, you still must implement additional measures to secure communication between your servers.

Using a VPN is, effectively, a way to map out a private network that only your servers can see. Communication will be fully private and secure. Other applications can be configured to pass their traffic over the virtual interface that the VPN software exposes. This way, only services that are meant to be consumable by clients on the public internet need to be exposed on the public network.

How Difficult Is This to Implement?

Using private networks in a data center that has this capability is as simple as enabling the interface during your server’s creation and configuring your applications and firewall to use the private network. Keep in mind that data center-wide private networks share space with other servers that use the same network.

As for VPN, the initial setup is a bit more involved, but the increased security is worth it for most use-cases. Each server on a VPN must have the shared security and configuration data needed to establish the secure connection installed and configured. After the VPN is up and running, applications must be configured to use the VPN tunnel. To learn about setting up a VPN to securely connect your infrastructure, check out our OpenVPN tutorial.
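
As a sketch of that last step, an internal-only service can be bound to the VPN tunnel’s address instead of a public one. The address 10.8.0.1 below is a placeholder drawn from OpenVPN’s common default subnet; substitute the address of your own tunnel interface:

    server {
        listen 10.8.0.1:80;   # reachable only over the VPN tunnel
        # . . .
    }

Combined with a firewall rule that blocks the service’s port on the public interface, this keeps the service invisible to the public internet.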

Public Key Infrastructure and SSL/TLS Encryption

Public key infrastructure, or PKI, refers to a system that is designed to create, manage, and validate certificates for identifying individuals and encrypting communication. SSL or TLS certificates can be used to authenticate different entities to one another. After authentication, they can also be used to establish encrypted communication.

SSL diagram

How Do They Enhance Security?

Establishing a certificate authority (CA) and managing certificates for your servers allows each entity within your infrastructure to validate the other members’ identities and encrypt their traffic. This can prevent man-in-the-middle attacks where an attacker imitates a server in your infrastructure to intercept traffic.

Each server can be configured to trust a centralized certificate authority. Afterwards, any certificate that the authority signs can be implicitly trusted. If the applications and protocols you use to communicate support TLS/SSL encryption, this is a way of encrypting traffic between your systems without the overhead of a VPN tunnel (which also often uses SSL internally).

How Difficult Is This to Implement?

Configuring a certificate authority and setting up the rest of the public key infrastructure can involve quite a bit of initial effort. Furthermore, managing certificates can create an additional administration burden when new certificates need to be created, signed, or revoked.

For many users, implementing a full-fledged public key infrastructure will make more sense as their infrastructure needs grow. Securing communications between components using VPN may be a good stop-gap measure until you reach a point where PKI is worth the extra administration costs.

If you would like to create your own certificate authority, you can refer to one of our How To Set Up and Configure a Certificate Authority (CA) guides depending on the Linux distribution that you are using.
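
To illustrate the first step, the following OpenSSL commands create a private key for a certificate authority and a self-signed CA certificate. The file names and the subject string /CN=Example Internal CA are placeholders, and the linked guides cover the full process, including signing server certificates:

  • openssl genrsa -out ca.key 4096
  • openssl req -x509 -new -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=Example Internal CA"

Each server that should trust certificates signed by this CA would then add ca.crt to its trusted certificate store. The ca.key file must be kept secret, since anyone holding it can issue certificates that your servers will trust.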

File Auditing and Intrusion Detection Systems

File auditing is the process of comparing the current system against a record of the files and file characteristics of your system when it was in a known-good state. This is used to detect changes to the system that may not have been authorized.

File audit diagram

An intrusion detection system, or IDS, is a piece of software that monitors a system or network for unauthorized activity. Many host-based IDS implementations use file auditing as a method of checking whether the system has changed.

How Do They Enhance Security?

Similar to the above service-level auditing, if you are serious about ensuring a secure system, it is very useful to be able to perform file-level audits of your system. This can be done periodically by the administrator or as part of an automated process in an IDS.

These strategies are some of the only ways to be absolutely sure that your filesystem has not been altered by some user or process. For many reasons, intruders often wish to remain hidden so that they can continue to exploit the server for an extended period of time. They might replace binaries with compromised versions. Doing an audit of the filesystem will tell you if any of the files have been altered, allowing you to be confident in the integrity of your server environment.

How Difficult Is This to Implement?

Implementing an IDS or conducting file audits can be quite an intensive process. The initial configuration involves telling the auditing system about any non-standard changes you’ve made to the server and defining paths that should be excluded to create a baseline reading.

It also makes day-to-day operations more involved. It complicates updating procedures as you will need to re-check the system prior to running updates and then recreate the baseline after running the update to catch changes to the software versions. You will also need to offload the reports to another location so that an intruder cannot alter the audit to cover their tracks.

While this may increase your administration load, being able to check your system against a known-good copy is one of the only ways of ensuring that files have not been altered without your knowledge. Some popular file auditing and intrusion detection systems are Tripwire and AIDE.
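
As a brief illustration of the workflow on a Debian-based system (a sketch; package names, paths, and helper scripts vary by distribution), initializing and checking an AIDE baseline looks like this:

  • sudo apt install aide
  • sudo aideinit
  • sudo cp /var/lib/aide/aide.db.new /var/lib/aide/aide.db
  • sudo aide.wrapper --check

aideinit scans the system and writes a new database; copying that database into place marks it as the trusted baseline, and subsequent --check runs report any files that have changed relative to it. Keep a copy of the baseline off the server so that an intruder cannot rewrite it to hide their changes.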

Isolated Execution Environments

Isolating execution environments refers to any method in which individual components are run within their own dedicated space.

Isolated environments diagram

This can mean separating out your discrete application components to their own servers or may refer to configuring your services to operate in chroot environments or containers. The level of isolation depends heavily on your application’s requirements and the realities of your infrastructure.

How Do They Enhance Security?

Isolating your processes into individual execution environments increases your ability to isolate any security problems that may arise. Similar to how bulkheads and compartments can help contain hull breaches in ships, separating your individual components can limit the access that an intruder has to other pieces of your infrastructure.

How Difficult Is This to Implement?

Depending on the type of containment you choose, isolating your applications can have varying levels of complexity. By packaging your individual components in containers, you can quickly achieve some measure of isolation, but note that Docker does not consider its containerization a security feature.

Setting up a chroot environment for each piece can provide some level of isolation as well, but this is also not a foolproof method of isolation, as there are often ways of breaking out of a chroot environment. Moving components to dedicated machines provides the best level of isolation, and in many cases may be the least complex option, but it may incur additional costs for the extra machines.
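
If you do run containers, you can still tighten the isolation they provide with Docker’s hardening flags. The following one-liner is a runnable illustration using the public alpine image; real services will usually need some of these restrictions relaxed or supplemented with writable --tmpfs mounts:

  • docker run --rm --read-only --cap-drop=ALL --security-opt no-new-privileges alpine echo "hello from a locked-down container"

Here --read-only mounts the container’s root filesystem read-only, --cap-drop=ALL removes all Linux capabilities, and --security-opt no-new-privileges prevents processes in the container from gaining privileges through setuid binaries.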

Conclusion

The strategies outlined in this tutorial are an overview of some of the steps that you can take to improve the security of your systems. It is important to recognize that security measures decrease in their effectiveness the longer you wait to implement them. Accordingly, security should not be an afterthought and must be implemented when you first provision your infrastructure. Once you have a secure base to build upon, you can then start deploying your services and applications with some assurances that they are running in a secure environment by default.

Even with a secure starting environment, keep in mind that security is an ongoing and iterative process. Good security requires a mindset of constant vigilance and awareness. Always be sure to ask yourself what the security implications of any change might be, and what steps you can take to ensure that you are always creating secure default configurations and environments for your software.
