
Cloud Servers: An Introduction

Published on June 27, 2022
By Alex Garnett
Senior DevOps Technical Writer

Introduction to Cloud Servers

What is a Cloud Server?

A cloud server is internet infrastructure that provides computing resources to users remotely. You can think of a cloud server as a private computer that you can set up and control in the same way as an on-premise computer, such as a laptop or desktop. This conceptual article outlines several key components of cloud server architecture, the difference between cloud servers and other cloud offerings, and how to determine which cloud offering is right for your website or web application.

Note that you will sometimes see “cloud server,” “web server,” and plain “server” used interchangeably. Typically, a cloud server refers to an entire Linux environment, or effectively an entire computer. In practice, cloud servers are almost always run as virtual machines, or software systems that emulate computers, within much larger server clusters in a process known as virtualization. For more information about this technical context, you can review An Introduction to Cloud Hosting.

Cloud software

To understand cloud servers, it’s helpful to understand the type of software that runs in the cloud.

Operating systems: To set up a cloud server, one of the first things you need to do is install an operating system. Today, nearly all cloud customers use a Linux-based operating system (such as Ubuntu or Rocky Linux) due to broad support, free or flexible licensing, and overall ubiquity in server computing. You can refer to How to Choose a Linux Distribution for more information.

Server-side software: This is a class of software that’s designed to run in a cloud environment, which does not have a desktop environment or a display connected to it. Usually, this means that the software is installed and configured via a command line interface, and then accessed by regular users through a web browser or another application. Though the types of software and tooling you install on your cloud server can vary greatly, understanding a few key components will help prepare you to plan and set up your own cloud server.

Web servers: This software enables your cloud server to communicate with users or applications on the internet using the HTTP protocol. Server-side software, like a web server, has to respond in a well-defined way to certain types of requests from clients or client-side software. For example, when a user enters a URL into a web browser, the web browser (known here as the client) makes a request to the server. In response, the server fetches the HTML document and sends it back to the browser, where it is loaded as a web page. If you are setting up a cloud server from scratch to host a website or web application, you will likely need to install and set up web server software, with Nginx and the Apache HTTP Server being the two most popular options. You can read more about web server software in our guide An Introduction to Web Servers.
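The request and response cycle described above can be sketched with Python’s standard library. This is only an illustration of the client/server pattern; a production deployment would use a dedicated web server like Nginx or Apache, which you configure rather than write yourself:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal request handler: responds to every GET with a small HTML document.
class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><h1>Hello from the server</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# Bind to an ephemeral localhost port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: fetch the document over HTTP, as a browser would.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as response:
    html = response.read().decode()

server.shutdown()
print(html)
```

The browser’s role is played here by `urllib.request`; the HTML it receives is what would be rendered as a web page.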

API servers: APIs (Application Programming Interfaces) are a type of software intermediary that enables applications to communicate with one another. A web server is a type of API server that implements the HTTP API. There are many other types of APIs that enable your cloud server to send or receive data to and from external applications and data resources, such as pulling weather data, flight information, or other types of data to use with your application. Individual API implementations are also sometimes called API endpoints, or just “endpoints”.
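Most API endpoints exchange data as JSON. As a sketch, here is what decoding a response from a hypothetical weather endpoint might look like; the URL-less payload and its field names are invented for illustration, not any real service’s API:

```python
import json

# A canned response body such as a hypothetical endpoint like
# GET /v1/weather?city=Berlin might return. The field names here
# are made up for this example.
payload = '{"city": "Berlin", "temperature_c": 21.5, "conditions": "partly cloudy"}'

# Decode the JSON body into a Python dictionary for use in your application.
data = json.loads(payload)
print(f"{data['city']}: {data['temperature_c']}C, {data['conditions']}")
```

In a real application, the `payload` string would come over the network, for example from `urllib.request.urlopen()`, but the decoding step is the same.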

Database servers: Database servers, also called databases, are another type of API server. Unlike web servers, which can be accessed via a web browser and usually render an HTML interface, database servers are usually accessed via a database query API. Some database deployments will be externally facing, and can implement their own web interfaces for anyone needing to interact with them in a browser, whereas others may only be internally accessible to your other cloud software via these queries.
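The query pattern described above can be sketched with Python’s built-in SQLite support. An in-memory SQLite database stands in here for a networked database server; a real deployment would more likely connect to something like PostgreSQL or MySQL over the network, but the shape of the query API is similar:

```python
import sqlite3

# An in-memory database stands in for a database server for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])
conn.commit()

# Data is retrieved through the database query API rather than a browser.
rows = conn.execute("SELECT id, name FROM users ORDER BY id").fetchall()
conn.close()
print(rows)  # [(1, 'alice'), (2, 'bob')]
```

Other cloud software on your server, such as a web application backend, would issue queries like this `SELECT` internally rather than exposing the database to browsers directly.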

Note: Running Linux without virtualization of any kind on a dedicated physical machine not shared with other tenants is usually called bare-metal hosting. Although relatively few cloud providers still offer bare-metal servers other than at the very high end, the most common modern equivalent to running a bare-metal server is running a Linux environment on a Raspberry Pi, usually for smaller projects.

Cloud Servers and their Alternatives

Because a cloud server is effectively a whole virtual computer, other cloud product offerings can be understood in relation to them. For example, some cloud providers will offer dedicated web hosting, or dedicated database hosting. Any product offering that provides a database or a web server on its own has effectively abstracted out the actual cloud server in the equation. There are various ways of doing this, which will typically still involve virtualized server clusters, but the principle is consistent. The primary distinction is that a cloud server (sometimes called a VPS, or virtual private server, to clarify that it is a virtual machine) can be made to run any software in any way, whereas any other cloud offering is effectively an optimized and constrained subset of server features.

The market for these offerings has changed considerably over the past few decades. Before virtualization was widely available, there used to be a market of web hosts who would instead provision a web server like Nginx (or at that time, Apache) to support dozens of different users with their own unique sets of permissions, and offer hosting per-user. This was convenient because it did not require users to take on any server administration duties, but it was limited in practice to only supporting static websites (i.e., HTML, CSS, and JavaScript only, with no backend engine) or drop-in PHP applications that had no dependencies other than the web server.

Since then, VPS offerings — full cloud servers — have become more commonly available. Committing to running an entire cloud server, especially in a production deployment, requires a certain amount of knowledge of Linux best practices, usually formalized in dedicated System Administration (“sysadmin”) or Development Operations (“DevOps”) roles for dealing with security, deployment, and so on. Being able to perform these roles on an occasional or an as-needed basis is very useful, but can be complex. This is especially true when considering that it is not strictly necessary to know how to interact with a Linux server or a command line at all to develop most software.

Should I Use a Cloud Server?

Cloud servers typically have a number of security features built into them, and it is not necessary to provision a commercial-scale production deployment to safely and reliably run open-source software on a cloud server. Most server packages ship with carefully configured default settings and are frequently updated to address known security vulnerabilities. It is often sufficient to deploy a firewall like ufw that can expose network ports on an individual basis to keep a server secure, or to at least offload the responsibility for that security to the maintainers of software like Nginx, which is used on millions of servers worldwide.
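The idea of exposing ports on an individual basis can be illustrated with a small probe. This sketch does not use ufw itself; it only shows what “a port is open” means at the TCP level, which is what firewall rules like `ufw allow 22` ultimately control:

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Open a local listening socket to stand in for an exposed service.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

print(port_is_open("127.0.0.1", open_port))  # True: the port accepts connections

# Once the service stops listening, the same probe fails.
listener.close()
print(port_is_open("127.0.0.1", open_port))  # False: connection refused
```

A firewall produces the second outcome deliberately, refusing or dropping traffic to every port you have not explicitly allowed.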

There are also other modern offerings which are more comparable to drop-in web hosts. Modern static websites can use client-side JavaScript features to, in some cases, eliminate the need for a backend server entirely. Some cloud providers refer to this type of hosting as a “headless CMS” and provide other authoring tools and web forms as part of a larger software-as-a-service offering.

In addition to this static site functionality, some providers also support deploying what are called serverless functions. These are one-off scripts that can leverage backend server functionality on a discrete basis, which are deployed into an environment that can run them directly. When used together with static site deployments, this approach is sometimes called the Jamstack.
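A serverless function is typically a single handler with a provider-defined signature. The event and return shapes below are a generic sketch for illustration, not the actual API of any particular provider:

```python
# A generic serverless-style handler: the platform invokes it once per
# request, passing request data in as an "event" dictionary. The exact
# signature varies by provider; this shape is invented for this example.
def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# The platform would call the handler for you; locally we can call it directly.
print(handler({"name": "cloud"}))  # {'statusCode': 200, 'body': 'Hello, cloud!'}
```

The appeal is that you write and deploy only this function; the provider supplies the environment that runs it, which is the “serverless” part.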

Static site and serverless deployments are highly portable and, like legacy web hosting, they avoid nearly all of the security and maintenance concerns around full server deployments. However, they are far more limited in scope. For example, as part of your stack, you may need to deploy a Docker container behind an Nginx web server in a particular way: for this, or any configuration like this, you need an entire cloud server.

In general, any software that can be deployed to a cloud server can also be deployed to a local computer. Although the differences can be instructive – notably, many people do not run Linux on their local computers, and server-side software isn’t always packaged to work directly on macOS or Windows – those differences are small in practice. This is the main value offering of a cloud server: for all intents and purposes, it is an entire computer that you can do anything with.

How to Scope your Server

Like bare-metal computers, cloud servers will be more performant depending on their hardware specifications, and are priced accordingly. Each cloud server is allocated a certain amount of resources within the cluster. Unlike bare-metal computers, cloud server specs can be quickly scaled up and down as needed. When assessing servers, you should have an idea of how these specifications will impact your needs.

Cloud servers are typically provisioned by their number of available CPU cores, their total available memory (RAM), and their attached disk storage. While disk speed and CPU speed typically vary under real-world conditions, most cloud providers have standardized on an average disk speed roughly comparable to consumer solid-state disk drives (SSDs) and a CPU speed comparable to an Intel Xeon core. Some providers will also constrain lower-tier cloud servers by their total allowed number of disk input/output operations (IOPS) or their total allowable network traffic, after which traffic may be throttled, causing bottlenecks for some software.
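Once you have access to a server, you can check some of these allocated resources directly from Python’s standard library. This is a quick sketch; memory and IOPS limits are more OS- and provider-specific and are omitted here:

```python
import os
import shutil

# On a VPS, these figures reflect the plan you provisioned
# rather than physical hardware in front of you.
cores = os.cpu_count()          # number of CPU cores available
disk = shutil.disk_usage("/")   # total/used/free bytes on the root filesystem

print(f"CPU cores: {cores}")
print(f"Disk: {disk.total / 1e9:.1f} GB total, {disk.free / 1e9:.1f} GB free")
```

Comparing these numbers against your provider’s advertised plan is a reasonable first sanity check on a freshly provisioned server.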

Almost all cloud providers will also allow you to purchase additional storage, such as block storage or object storage, that can be attached to your VPS on an as-needed basis. It is usually a good idea to use this additional storage rather than continuing to expand the baseline storage allocation of your VPS. Storing all of your data on a single root partition can make scaling more challenging.

To be accessible on the open internet, cloud servers must have a public IP address assigned to them. This can be an IPv4 address, which follows a pattern like 203.0.113.10, or an IPv6 address, which follows the pattern 2001:0db8:0000:0000:0000:ff00:0042:8329. Almost all network-capable software can parse and access these IP addresses directly, though most of the time, server IP addresses will be behind an assigned domain name, such as https://my_domain.com. Some cloud providers will automatically allocate you one IP address for each VPS, whereas others may require you to purchase IP addresses and assign them to your servers individually. These are called reserved IPs, and they can be more flexible in large deployments.
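Python’s standard `ipaddress` module can parse both address families, which is a convenient way to see the two formats side by side. The IPv4 address below is from a reserved documentation range and is used only as an example:

```python
import ipaddress

# 203.0.113.10 is from an IPv4 range reserved for documentation;
# the IPv6 address is a standard documentation-style example.
v4 = ipaddress.ip_address("203.0.113.10")
v6 = ipaddress.ip_address("2001:0db8:0000:0000:0000:ff00:0042:8329")

print(v4.version, v6.version)  # 4 6
print(v6.compressed)           # 2001:db8::ff00:42:8329 (the shortened IPv6 form)
```

Note that IPv6 addresses are usually written in the compressed form shown, with runs of zeroes collapsed to `::`.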

Domain names are usually purchased and configured from separate registrars using DNS records, although some cloud providers will offer both products together.


To connect and work with cloud servers, you will need to know how to work in a terminal environment, both locally and remotely. Remote terminal connections mostly make use of a protocol called SSH, or Secure Shell. Along with HTTP, this is one of the most commonly used protocols, although SSH is naturally used more often by administrators rather than end users. HTTP runs on port 80 (and port 443 for HTTPS). SSH typically runs on port 22. Cloud administration can be broadly understood in terms of these protocols, servers, and services.
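The well-known port numbers mentioned above are registered service-to-port mappings, which Python can look up through the operating system (on Linux, via `/etc/services`):

```python
import socket

# Look up the registered default port for each well-known service.
for service in ("http", "https", "ssh"):
    print(service, socket.getservbyname(service))
# http 80
# https 443
# ssh 22
```

Knowing these defaults is useful when reading firewall rules or server logs, since traffic on port 22 is almost certainly SSH and traffic on 80/443 is almost certainly web traffic.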


This curriculum provides an overview of evaluating, working with, and understanding the landscape of cloud servers. It is helpful to understand the range of product offerings, and the way that deployment preferences have changed over time, to leverage existing software documentation for your own use cases. A DigitalOcean Droplet, a cloud VPS, is a good starting point for many different projects.

Additional resources


  • A General Introduction to Cloud Computing. This tutorial provides an overview of the history and the business context of cloud computing. It contrasts different service models and explains other considerations around risks, costs, and privacy.

  • Initial Server Setup. This is a collection of DigitalOcean’s “Initial Server Setup” articles for many popular Linux environments, designed to get you up and running with SSH, a package manager, and a firewall as efficiently as possible.

  • A Linux Command Line Primer. This tutorial covers the essentials of working on a command line, including many core Linux commands, shortcuts, and the fundamentals of argument syntax and directory navigation.

  • SSH Essentials: Working with SSH Servers, Clients, and Keys. This tutorial explains the mechanics of SSH, or secure shell, which is the universally preferred method of connecting to and working with remote servers using a terminal.



Tutorial Series: Getting Started With Cloud Computing

This curriculum introduces open-source cloud computing to a general audience along with the skills necessary to deploy applications and websites securely to the cloud.

