
Setting up and configuring an application server on Ubuntu 24.04 is essential for deploying web applications in production environments. Whether you’re hosting Node.js applications, Python web services, or PHP-based sites, a properly configured application server provides the foundation for reliable, secure, and scalable deployments.
This tutorial guides you through setting up a production-ready application server on a DigitalOcean Droplet running Ubuntu 24.04. You’ll learn how to install and configure essential components, set up a reverse proxy with Nginx, secure your server with SSL certificates, and deploy applications using systemd for process management. The configuration follows production best practices, ensuring your server is ready to handle real-world workloads.
Before you begin, ensure you have the following: an Ubuntu 24.04 Droplet, a non-root user with sudo privileges and a configured firewall, and a registered domain name with DNS records pointing to your server (required for the SSL certificate steps).
Start by updating your system’s package index and upgrading existing packages to ensure you have the latest security patches and software versions:
sudo apt update
sudo apt upgrade -y
This ensures your Ubuntu 24.04 server has the most recent package information and security updates. Ubuntu 24.04 includes updated package versions compared to Ubuntu 22.04, including newer versions of systemd, Nginx, and other core components.
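If you want to double-check which release and kernel you are running before continuing, a quick check:
cat /etc/os-release
uname -r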
Nginx serves as a reverse proxy, handling incoming HTTP and HTTPS requests and forwarding them to your application server. This setup provides several benefits: SSL termination, load balancing capabilities, and improved security by keeping your application server behind the proxy.
Install Nginx using apt:
sudo apt install nginx -y
After installation, start and enable Nginx to run automatically on boot:
sudo systemctl start nginx
sudo systemctl enable nginx
Verify that Nginx is running:
sudo systemctl status nginx
You should see output indicating that Nginx is active and running.
Output
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; preset: enabled)
Active: active (running) since Thu 2025-12-18 07:03:01 UTC; 32s ago
Docs: man:nginx(8)
Main PID: 46284 (nginx)
Tasks: 3 (limit: 4656)
Memory: 2.4M (peak: 5.3M)
CPU: 46ms
CGroup: /system.slice/nginx.service
├─46284 "nginx: master process /usr/sbin/nginx -g daemon on; master_process on;"
├─46286 "nginx: worker process"
└─46287 "nginx: worker process"
Dec 18 07:03:01 Anish-Ubuntu-24 systemd[1]: Starting nginx.service - A high performance web server and a reverse proxy server...
Dec 18 07:03:01 Anish-Ubuntu-24 systemd[1]: Started nginx.service - A high performance web server and a reverse proxy server.
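You can also confirm that Nginx answers HTTP requests locally; this should return a 200 OK for the default welcome page (install curl with sudo apt install curl if it is not already present):
curl -I http://localhost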
Ubuntu 24.04 uses UFW (Uncomplicated Firewall) for firewall management. Configure it to allow HTTP, HTTPS, and SSH traffic while blocking other unnecessary ports.
First, ensure UFW is installed and check its status:
sudo ufw status
If UFW is not active, enable it and configure the necessary rules:
sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'
sudo ufw enable
Output
Rules updated
Rules updated (v6)
Rules updated
Rules updated (v6)
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
The Nginx Full profile allows both HTTP (port 80) and HTTPS (port 443) traffic. Verify the firewall rules:
sudo ufw status verbose
You should see rules allowing OpenSSH and Nginx Full traffic.
Output
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN Anywhere
80,443/tcp (Nginx Full) ALLOW IN Anywhere
22/tcp (OpenSSH (v6)) ALLOW IN Anywhere (v6)
80,443/tcp (Nginx Full (v6)) ALLOW IN Anywhere (v6)
Securing your application server with SSL/TLS encryption is essential for production deployments. Let’s Encrypt provides free SSL certificates that can be automatically renewed.
Install Certbot and the Nginx plugin:
sudo apt install certbot python3-certbot-nginx -y
Obtain an SSL certificate for your domain. Replace your_domain with your actual domain name:
sudo certbot --nginx -d your_domain -d www.your_domain
Certbot will prompt you for an email address and ask whether you want to redirect HTTP traffic to HTTPS. Choose to redirect for better security. The certificate will be automatically renewed before expiration.
Test the automatic renewal process:
sudo certbot renew --dry-run
If this command completes without errors, automatic renewal is configured correctly.
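On Ubuntu 24.04, the certbot package also installs a systemd timer that triggers renewal automatically. You can confirm the timer is active and scheduled:
sudo systemctl status certbot.timer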
Create an Nginx server block configuration to route traffic to your application server. This example shows a generic configuration that you can adapt for different application types.
Create a new configuration file for your domain:
sudo nano /etc/nginx/sites-available/your_domain
Add the following configuration, replacing your_domain with your actual domain and adjusting the proxy_pass URL to match your application’s port:
server {
    listen 80;
    listen [::]:80;
    server_name your_domain www.your_domain;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name your_domain www.your_domain;

    ssl_certificate /etc/letsencrypt/live/your_domain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your_domain/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
This configuration redirects all HTTP requests to HTTPS, terminates SSL using your Let’s Encrypt certificate, and proxies requests to an application listening on port 3000. The Upgrade and Connection headers allow WebSocket connections to pass through the proxy.
Enable the configuration by creating a symbolic link:
sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/
Test the Nginx configuration for syntax errors:
sudo nginx -t
If the test passes, reload Nginx to apply the changes:
sudo systemctl reload nginx
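To confirm the server block is working, you can send a couple of test requests. The first should return a 301 redirect to HTTPS, and the second should return your application’s response (assuming the application is already listening on port 3000; otherwise expect a 502 until it is running):
curl -I http://your_domain
curl -I https://your_domain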
The runtime dependencies you need depend on your application type. This section covers setup for Node.js, Python, and PHP applications.
For Node.js applications, install Node.js using NodeSource’s repository to get the latest LTS version:
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
Verify the installation:
node --version
npm --version
Output
v20.19.6
10.8.2
Ubuntu 24.04 includes Python 3.12 by default. Install pip and virtual environment tools:
sudo apt install python3-pip python3-venv -y
Verify the installation:
python3 --version
pip3 --version
Output
Python 3.12.3
pip 24.0 from /usr/lib/python3/dist-packages/pip (python 3.12)
For PHP applications, install PHP and PHP-FPM (FastCGI Process Manager):
sudo apt install php-fpm php-mysql php-mbstring php-xml php-curl -y
Verify PHP-FPM is running:
sudo systemctl status php8.3-fpm
Output
● php8.3-fpm.service - The PHP 8.3 FastCGI Process Manager
Loaded: loaded (/usr/lib/systemd/system/php8.3-fpm.service; enabled; preset: enabled)
Active: active (running) since Thu 2025-12-18 08:02:36 UTC; 1min 56s ago
Docs: man:php-fpm8.3(8)
Process: 59077 ExecStartPost=/usr/lib/php/php-fpm-socket-helper install /run/php/php-fpm.sock /etc/php/8.3/fpm/pool.d/www.conf 83 (code=exited, >
Main PID: 59074 (php-fpm8.3)
Status: "Processes active: 0, idle: 2, Requests: 0, slow: 0, Traffic: 0req/sec"
Tasks: 3 (limit: 4656)
Memory: 9.5M (peak: 10.4M)
CPU: 97ms
CGroup: /system.slice/php8.3-fpm.service
├─59074 "php-fpm: master process (/etc/php/8.3/fpm/php-fpm.conf)"
├─59075 "php-fpm: pool www"
└─59076 "php-fpm: pool www"
Dec 18 08:02:36 Anish-Ubuntu-24 systemd[1]: Starting php8.3-fpm.service - The PHP 8.3 FastCGI Process Manager...
Note: Ubuntu 24.04 includes PHP 8.3. Adjust version numbers if your system uses a different version.
Systemd manages your application as a service, ensuring it starts automatically on boot and restarts if it crashes. This example creates a systemd service for a Node.js application, but you can adapt it for other application types.
Create a systemd service file:
sudo nano /etc/systemd/system/your-app.service
Add the following configuration, adjusting paths and environment variables as needed:
[Unit]
Description=Your Application Server
After=network.target
[Service]
Type=simple
User=www-data
WorkingDirectory=/var/www/your-app
Environment="NODE_ENV=production"
Environment="PORT=3000"
ExecStart=/usr/bin/node /var/www/your-app/server.js
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
Key configuration points:
- Type=simple: For applications that run in the foreground
- User=www-data: Runs the application as a non-root user for security
- WorkingDirectory: Sets the application’s working directory
- Restart=on-failure: Automatically restarts the application if it crashes
- RestartSec=10: Waits 10 seconds before restarting
Reload systemd and start your service:
sudo systemctl daemon-reload
sudo systemctl start your-app
sudo systemctl enable your-app
Check the service status:
sudo systemctl status your-app
Output
● your-app.service
Loaded: loaded (/etc/systemd/system/your-app.service; enabled; preset: enabled)
Active: active (running) since Thu 2025-12-18 08:07:30 UTC; 4ms ago
Main PID: 59408 ((node))
Tasks: 1 (limit: 4656)
Memory: 512.0K (peak: 512.0K)
CPU: 936us
CGroup: /system.slice/your-app.service
└─59408 "(node)"
Dec 18 08:07:30 Anish-Ubuntu-24 systemd[1]: your-app.service: Scheduled restart job, restart counter is at 4.
Dec 18 08:07:30 Anish-Ubuntu-24 systemd[1]: Started your-app.service.
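If the status output shows repeated restart attempts (like the restart counter above), follow the service’s logs to see why the process is exiting:
sudo journalctl -u your-app -f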
For a Python application using Gunicorn, your systemd service might look like this:
[Unit]
Description=Your Python Application
After=network.target
[Service]
Type=notify
User=www-data
WorkingDirectory=/var/www/your-app
Environment="PATH=/var/www/your-app/venv/bin"
ExecStart=/var/www/your-app/venv/bin/gunicorn --bind 127.0.0.1:8000 app:app
Restart=on-failure
[Install]
WantedBy=multi-user.target
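This unit assumes Gunicorn is already installed inside the application’s virtual environment. If it isn’t, you can add it with:
sudo -u www-data /var/www/your-app/venv/bin/pip install gunicorn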
For PHP applications, you typically use PHP-FPM with Nginx. Configure Nginx to pass requests to PHP-FPM:
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php8.3-fpm.sock;
}
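After adding this block to your server configuration, test the syntax and reload Nginx so PHP requests are handed off to PHP-FPM:
sudo nginx -t && sudo systemctl reload nginx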
Create a directory for your application and set appropriate permissions:
sudo mkdir -p /var/www/your-app
sudo chown -R www-data:www-data /var/www/your-app
Deploy your application files to this directory. You can use Git, SCP, or other deployment methods:
cd /var/www/your-app
sudo -u www-data git clone https://github.com/your-username/your-repo.git .
Install dependencies and build your application as needed. For Node.js:
sudo -u www-data npm install --production
For Python:
sudo -u www-data python3 -m venv venv
sudo -u www-data ./venv/bin/pip install -r requirements.txt
After deploying, restart your application service:
sudo systemctl restart your-app
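At this point you can verify the full request path, assuming your application listens on port 3000 as in the earlier examples: first confirm the app answers locally, then confirm Nginx proxies it over HTTPS:
curl -I http://127.0.0.1:3000
curl -I https://your_domain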
Apply these security measures to harden your application server for production use.
Edit the SSH configuration:
sudo nano /etc/ssh/sshd_config
Set the following:
PermitRootLogin no
PasswordAuthentication no
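Before restarting the SSH service, you can validate the configuration file so a typo doesn’t lock you out:
sudo sshd -t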
Restart SSH:
sudo systemctl restart ssh
Enable automatic security updates:
sudo apt install unattended-upgrades -y
sudo dpkg-reconfigure -plow unattended-upgrades
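To confirm unattended upgrades would run as expected, you can perform a dry run:
sudo unattended-upgrade --dry-run --debug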
Configure log rotation to prevent log files from consuming disk space. Edit the logrotate configuration:
sudo nano /etc/logrotate.d/your-app
Add:
/var/www/your-app/logs/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 www-data www-data
    sharedscripts
}
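You can check the new rule without rotating anything by running logrotate in debug mode, which only prints what it would do:
sudo logrotate --debug /etc/logrotate.d/your-app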
Set up monitoring to alert you when disk space is low. Install and configure monitoring tools:
sudo apt install htop iotop -y
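htop and iotop are interactive tools; for a simple unattended check, a hypothetical sketch is a cron entry that writes a syslog warning when the root filesystem passes 90% usage (adjust the threshold and filesystem to taste):
# Add with `sudo crontab -e`; runs hourly and logs a warning via syslog when / is over 90% full
0 * * * * df -P / | awk 'NR==2 && $5+0 > 90 {print "Disk usage on / is " $5}' | logger -t disk-alert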
This section addresses common problems you might encounter when setting up your application server.
Check the service status and logs:
sudo systemctl status your-app
sudo journalctl -u your-app -n 50 --no-pager
Common issues include incorrect paths in the ExecStart line, missing dependencies, permission problems on the working directory, and another process already using the application’s port.
A 502 Bad Gateway error usually means Nginx cannot connect to your application. Check that the service is running and listening on the expected port:
sudo systemctl status your-app
sudo ss -tlnp | grep 3000
Also verify that the proxy_pass URL in your Nginx configuration matches your application’s port.
If SSL certificates fail to renew, check the Certbot log and confirm that your domain’s DNS records point to this server:
sudo tail -f /var/log/letsencrypt/letsencrypt.log
dig your_domain
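You can also list the certificates Certbot manages, along with their expiry dates, to confirm what it is trying to renew:
sudo certbot certificates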
Monitor memory usage:
free -h
htop
If your application consumes too much memory, consider adding swap space, upgrading to a Droplet with more RAM, profiling the application for memory leaks, or setting a memory limit on the systemd service.
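For the last option, systemd can cap the service’s memory directly; a minimal sketch (the 512M value is an assumption, tune it for your app):
sudo mkdir -p /etc/systemd/system/your-app.service.d
printf '[Service]\nMemoryMax=512M\n' | sudo tee /etc/systemd/system/your-app.service.d/memory.conf
sudo systemctl daemon-reload
sudo systemctl restart your-app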
An application server is a server environment that runs and manages web applications. On Ubuntu 24.04, this typically involves configuring a runtime environment (like Node.js, Python, or PHP), setting up a reverse proxy (like Nginx), and using systemd to manage application processes. The application server handles incoming requests, executes application code, and returns responses to clients.
To configure systemd for your application, you’ll need to create a service file. Here’s a step-by-step example:
Suppose your application is a Node.js app located at /home/ubuntu/my-app, and you want to run it with npm start using the ubuntu user.
sudo nano /etc/systemd/system/my-app.service
[Unit]
Description=My Node.js App
After=network.target
[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/ubuntu/my-app
ExecStart=/usr/bin/npm start
Restart=on-failure
Environment=NODE_ENV=production
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable my-app
sudo systemctl start my-app
Now, your app will run in the background, restart on failure, and automatically start at boot. You can check its status with:
sudo systemctl status my-app
This method works with most applications. Just adjust the ExecStart, User, and WorkingDirectory as needed for your stack.
Both Nginx and Apache work well as reverse proxies, but Nginx is often preferred for modern application servers due to its lower memory footprint, better performance under high concurrency, and simpler configuration for reverse proxy scenarios.
Nginx also excels at handling static content and proxying requests to application servers, making it ideal for production deployments. Apache remains a solid choice, especially if you need specific modules or are more familiar with its configuration syntax.
Secure your application server by: enabling UFW firewall and allowing only necessary ports, installing SSL certificates with Let’s Encrypt, running applications as non-root users, disabling root SSH login, configuring automatic security updates, keeping software packages updated, and implementing proper file permissions. Additionally, use strong passwords or SSH keys, configure fail2ban to prevent brute-force attacks, and regularly review application and system logs for suspicious activity.
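For the fail2ban suggestion, a minimal sketch; its default configuration already protects SSH, and you can add jails for Nginx later if needed:
sudo apt install fail2ban -y
sudo systemctl enable --now fail2ban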
Yes, you can host multiple applications on a single Ubuntu 24.04 server. Configure separate systemd services for each application, each listening on different ports (e.g., 3000, 3001, 8000). Set up Nginx server blocks (virtual hosts) for each domain or subdomain, with each block proxying to the appropriate application port. This approach allows you to efficiently use server resources while maintaining isolation between applications.
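For example, a second application listening on port 3001 could get its own server block; a minimal sketch (app2.your_domain and the port are assumptions):
sudo tee /etc/nginx/sites-available/app2.your_domain > /dev/null <<'EOF'
server {
    listen 80;
    server_name app2.your_domain;

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/app2.your_domain /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx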
Verify your application is running by checking the systemd service status with sudo systemctl status your-app. Review application logs using sudo journalctl -u your-app -f to see real-time log output. Test the application by accessing it through your domain or IP address. Check that Nginx is routing requests correctly by reviewing Nginx access and error logs at /var/log/nginx/access.log and /var/log/nginx/error.log. Monitor resource usage with htop or free -h to ensure your application isn’t consuming excessive CPU or memory.
You’ve successfully set up and configured a production-ready application server on Ubuntu 24.04. Your DigitalOcean Droplet is now equipped with Nginx as a reverse proxy, SSL/TLS encryption, systemd service management, and security hardening measures. This setup supports deploying Node.js, Python, PHP, and other web applications in a secure, scalable environment.
The configuration follows production best practices, including automatic service restarts, SSL certificate renewal, firewall protection, and proper user permissions. Your application server is ready to handle real-world workloads and can be scaled as your needs grow.
For further learning and advanced configurations, explore the related DigitalOcean tutorials.
Ready to deploy your applications? Get started with DigitalOcean Droplets to create scalable, reliable infrastructure for your application server. With features like automated backups, monitoring, and flexible pricing, DigitalOcean Droplets provide the foundation you need for production deployments.
Thanks for learning with the DigitalOcean Community. Check out our offerings for compute, storage, networking, and managed databases.