When you first create a new CentOS server, there are a few configuration steps that you should take early on as part of the basic setup. This will increase the security and usability of your server and will give you a solid foundation for subsequent actions.
To log into your server, you will need to know your server’s public IP address. You will also need the password or, if you installed an SSH key for authentication, the private key for the root user’s account. If you have not already logged into your server, you may want to follow our documentation on how to connect to your Droplet with SSH, which covers this process in detail.
If you are not already connected to your server, log in as the root user now using the following command (substitute the highlighted portion of the command with your server’s public IP address):
- ssh root@your_server_ip
Accept the warning about host authenticity if it appears. If you are using password authentication, provide your root password to log in. If you are using an SSH key that is passphrase protected, you may be prompted to enter the passphrase the first time you use the key each session. If this is your first time logging into the server with a password, you may also be prompted to change the root password.
The root user is the administrative user in a Linux environment, and it has very broad privileges. Because of the heightened privileges of the root account, you are discouraged from using it on a regular basis: part of the power inherent in the root account is the ability to make very destructive changes, even by accident.
As such, the next step is to set up an alternative user account with a reduced scope of influence for day-to-day work. This account will still be able to gain increased privileges when necessary.
Once you are logged in as root, you can create the new user account that we will use to log in from now on.
This example creates a new user called sammy, but you should replace it with any username that you prefer:
- adduser sammy
Next, set a strong password for the sammy user:
- passwd sammy
You will be prompted to enter the password twice. After doing so, your user will be ready to use, but first we’ll give this user additional privileges to use the sudo command. This will allow us to run commands as root when necessary.
Now, we have a new user account with regular account privileges. However, we may sometimes need to do administrative tasks.
To avoid having to log out of our normal user and log back in as the root account, we can set up what is known as “superuser” or root privileges for our normal account. This will allow our normal user to run commands with administrative privileges by putting the word sudo before each command.
To add these privileges to our new user, we need to add the new user to the wheel group. By default, on CentOS, users who belong to the wheel group are allowed to use the sudo command.
As root, run this command to add your new user to the wheel group (substitute the highlighted word with your new username):
- usermod -aG wheel sammy
Now, when logged in as your regular user, you can type sudo before commands to perform actions with superuser privileges.
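For example, a routine package upgrade that normally requires root could be run from the new account like this (the command is shown only as an illustration):
- sudo dnf upgrade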
Firewalls provide a basic level of security for your server. These applications are responsible for denying traffic to every port on your server, except for those ports/services you have explicitly approved. CentOS has a service called firewalld to perform this function. A tool called firewall-cmd is used to configure firewalld firewall policies.
Note: If your servers are running on DigitalOcean, you can optionally use DigitalOcean Cloud Firewalls instead of firewalld. We recommend using only one firewall at a time to avoid conflicting rules that may be difficult to debug.
First, install firewalld:
- dnf install firewalld -y
The default firewalld configuration allows ssh connections, so we can turn the firewall on immediately:
- systemctl start firewalld
Check the status of the service to make sure it started:
- systemctl status firewalld
Output● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2020-02-06 16:39:40 UTC; 3s ago
Docs: man:firewalld(1)
Main PID: 13180 (firewalld)
Tasks: 2 (limit: 5059)
Memory: 22.4M
CGroup: /system.slice/firewalld.service
└─13180 /usr/libexec/platform-python -s /usr/sbin/firewalld --nofork --nopid
Note that it is both active and enabled, meaning it will start by default if the server is rebooted.
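If the Loaded line in your own output shows disabled instead, you can make the service start at boot with systemctl:
- systemctl enable firewalld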
Now that the service is up and running, we can use the firewall-cmd utility to get and set policy information for the firewall.
First let’s list which services are already allowed:
- firewall-cmd --permanent --list-all
Outputpublic (active)
target: default
icmp-block-inversion: no
interfaces: eth0 eth1
sources:
services: cockpit dhcpv6-client ssh
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
To see the additional services that you can enable by name, type:
- firewall-cmd --get-services
To add a service that should be allowed, use the --add-service flag:
- firewall-cmd --permanent --add-service=http
This would add the http service and allow incoming TCP traffic to port 80. The configuration will update after you reload the firewall:
- firewall-cmd --reload
Remember that you will have to explicitly open the firewall (with services or ports) for any additional services that you may configure later.
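For example, if you later run a service on a port that has no named firewalld service, you could open that port directly with the --add-port flag; port 8080 below is only an illustration:
- firewall-cmd --permanent --add-port=8080/tcp
- firewall-cmd --reload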
Now that we have a regular non-root user for daily use, we need to make sure we can use it to SSH into our server.
Note: Until verifying that you can log in and use sudo with your new user, we recommend staying logged in as root. This way, if you have problems, you can troubleshoot and make any necessary changes as root. If you are using a DigitalOcean Droplet and experience problems with your root SSH connection, you can log into the Droplet using the DigitalOcean Console.
The process for configuring SSH access for your new user depends on whether your server’s root account uses a password or SSH keys for authentication.
If you logged in to your root account using a password, then password authentication is enabled for SSH. You can SSH to your new user account by opening up a new terminal session and using SSH with your new username:
- ssh sammy@your_server_ip
After entering your regular user’s password, you will be logged in. Remember, if you need to run a command with administrative privileges, type sudo before it like this:
- sudo command_to_run
You will be prompted for your regular user password when using sudo for the first time each session (and periodically afterwards).
To enhance your server’s security, we strongly recommend setting up SSH keys instead of using password authentication. Follow our guide on setting up SSH keys on CentOS to learn how to configure key-based authentication.
If you logged in to your root account using SSH keys, then password authentication is disabled for SSH. You will need to add a copy of your public key to the new user’s ~/.ssh/authorized_keys file to log in successfully.
Since your public key is already in the root account’s ~/.ssh/authorized_keys file on the server, we can copy that file and directory structure to our new user account.
The simplest way to copy the files with the correct ownership and permissions is with the rsync command. This will copy the root user’s .ssh directory, preserve the permissions, and modify the file owners, all in a single command. Make sure to change the highlighted portions of the command below to match your regular user’s name:
Note: The rsync command treats sources and destinations that end with a trailing slash differently than those without a trailing slash. When using rsync below, be sure that the source directory (~/.ssh) does not include a trailing slash (check to make sure you are not using ~/.ssh/).
If you accidentally add a trailing slash to the command, rsync will copy the contents of the root account’s ~/.ssh directory to the sudo user’s home directory instead of copying the entire ~/.ssh directory structure. The files will be in the wrong location and SSH will not be able to find and use them.
- rsync --archive --chown=sammy:sammy ~/.ssh /home/sammy
Now, back in a new terminal on your local machine, open up a new SSH session with your non-root user:
- ssh sammy@your_server_ip
You should be logged in to the new user account without using a password. Remember, if you need to run a command with administrative privileges, type sudo before it like this:
- sudo command_to_run
You will be prompted for your regular user password when using sudo for the first time each session (and periodically afterwards).
At this point, you have a solid foundation for your server. You can install any software you need on it now.
Keepalived is open-source software that provides high availability by using the Virtual Router Redundancy Protocol (VRRP) on Linux systems. Its primary use is to ensure service availability by routing network traffic to a backup server if the primary server fails.
In the vast universe of Linux, where commands reign supreme and root powers can shape the destiny of systems, one configuration file stands out as the guardian of control and security: the sudoers file. For the uninitiated, its syntax might appear cryptic, reminiscent of arcane spells. However, for those who venture to understand its depth, it unveils unparalleled power, allowing users to execute commands with elevated privileges while maintaining the sanctity of the system. Whether you’re a seasoned system administrator or a curious enthusiast, join us on a journey to understand the rationale behind the sudoers file and the art of harnessing its potential. Welcome to our comprehensive guide on the sudoers file, a cornerstone of Linux administration.
In any case, I’ve inherited a project / client from another dev / team that is no longer available.
I’ve poked around in /etc/ssh via the console button in the DO CP.
Also, I’ve read:
https://www.digitalocean.com/community/tutorials/how-to-use-ssh-to-connect-to-a-remote-server
https://docs.digitalocean.com/products/droplets/how-to/add-ssh-keys/
Long story short, I presume I should remove anything from /etc/ssh and start that process over. Else, again as I understand it, anyone previous who had SSH access actually still does.
Maybe there’s also a tutorial on this situation? I can’t be the first new dev on a long-time DO account / droplet, eh :) It’s a new project / client so I’m treading lightly. That said, this needs to get sorted out sooner rather than later. Any help is greatly appreciated. Again, please type slowly :)
Could not install packages due to an OSError
Yesterday it deployed the same app with the same requirements.txt just fine. Can anybody help me understand what the problem is?
[2023-03-07 23:34:50] │ -----> Requirements file has been changed, clearing cached dependencies
[2023-03-07 23:34:51] │ -----> Installing python-3.10.6
[2023-03-07 23:34:53] │ -----> Installing pip 22.2.2, setuptools 63.4.3 and wheel 0.37.1
[2023-03-07 23:34:59] │ -----> Installing SQLite3
[2023-03-07 23:35:06] │ -----> Installing requirements with pip
[2023-03-07 23:35:06] │ Processing /opt/concourse/worker/volumes/live/6ca6f098-d773-4461-5c91-a24a17435bda/volume/appnope_1606859448531/work
[2023-03-07 23:35:06] │ ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/opt/concourse/worker/volumes/live/6ca6f098-d773-4461-5c91-a24a17435bda/volume/appnope_1606859448531/work'
[2023-03-07 23:35:06] │
[2023-03-07 23:35:06] │ ERROR: failed to build: exit status 1
The sudo command provides a mechanism for granting administrator privileges — ordinarily only available to the root user — to normal users. This guide will show you how to create a new user with sudo access on Rocky Linux 8, without having to modify your server’s /etc/sudoers file.
Note: If you want to configure sudo for an existing Rocky Linux user, skip to step 3.
SSH in to your server as the root user:
- ssh root@your_server_ip_address
Use your server’s IP address or hostname in place of your_server_ip_address above.
Use the adduser command to add a new user to your system:
- adduser sammy
Be sure to replace sammy with the username you’d like to create.
Use the passwd command to update the new user’s password:
- passwd sammy
Remember to replace sammy with the user that you just created. You will be prompted twice for a new password:
OutputChanging password for user sammy.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Use the usermod command to add the user to the wheel group:
- usermod -aG wheel sammy
Once again, be sure to replace sammy with the username you’d like to give sudo privileges to. By default, on Rocky Linux, all members of the wheel group have full sudo access.
Testing sudo Access
To test that the new sudo permissions are working, first use the su command to switch from the root user to the new user account:
- su - sammy
As the new user, verify that you can use sudo by prepending sudo to the command that you want to run with superuser privileges:
- sudo command_to_run
For example, you can list the contents of the /root directory, which is normally only accessible to the root user:
- sudo ls -la /root
The first time you use sudo in a session, you will be prompted for the password of that user’s account. Enter the password to proceed:
Output[sudo] password for sammy:
Note: This is not asking for the root password! Enter the password of the sudo-enabled user, not the root password.
If your user is in the proper group and you entered the password correctly, the command that you used with sudo will run with root privileges.
In this quickstart tutorial you created a new user account and added it to the wheel group to enable sudo access. For more detailed information on setting up a Rocky Linux 8 server, please read our Initial Server Setup with Rocky Linux 8 tutorial.
When working with big JSON files, it can be hard to find and manipulate the information you need. You could copy and paste all relevant snippets to calculate totals manually, but this is a time-consuming process and could be prone to human error. Another option is to use general-purpose tools for finding and manipulating information. All modern Linux systems come installed with three established text processing utilities: sed, awk, and grep. While these commands are helpful when working with loosely structured data, other options exist for machine-readable data formats like JSON.
jq, a command-line JSON processing tool, is a good solution for dealing with machine-readable data formats and is especially useful in shell scripts. Using jq can aid you when you need to manipulate data. For example, if you run a curl call to a JSON API, jq can extract specific information from the server’s response. You could also incorporate jq into your data ingestion process as a data engineer. If you manage a Kubernetes cluster, you could use the JSON output of kubectl as an input source for jq to extract the number of available replicas for a specific deployment.
In this article, you will use jq to transform a sample JSON file about ocean animals. You’ll apply data transformations using filters and merge pieces of transformed data into a new data structure. By the end of the tutorial, you will be able to use a jq script to answer questions about the data you have manipulated.
To complete this tutorial, you will need the following:
jq, a JSON parsing and transformation tool. It is available from the repositories for all major Linux distributions. If you are using Ubuntu, run sudo apt install jq to install it.
Executing Your First jq Command
In this step, you will set up your sample input file and test the setup by running a jq command to generate an output of the sample file’s data. jq can take input from either a file or a pipe. You will use the former.
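As a quick sketch of the pipe form, you can also feed jq a JSON string through standard input; this one-liner is illustrative and separate from the tutorial’s sample data:
- echo '{"name": "Sammy"}' | jq '.'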
You’ll begin by generating the sample file. Create and open a new file named seaCreatures.json using your preferred editor (this tutorial uses nano):
- nano seaCreatures.json
Copy the following contents into the file:
[
{ "name": "Sammy", "type": "shark", "clams": 5 },
{ "name": "Bubbles", "type": "orca", "clams": 3 },
{ "name": "Splish", "type": "dolphin", "clams": 2 },
{ "name": "Splash", "type": "dolphin", "clams": 2 }
]
You’ll work with this data for the rest of the tutorial. By the end of the tutorial, you will have written a one-line jq command that answers the following questions about this data:
What are the names of the sea creatures?
How many clams do the creatures own in total?
How many of those clams are owned by dolphins?
Save and close the file.
In addition to an input file, you will need a filter that describes the exact transformation you’d like to do. The . (period) filter, also known as the identity operator, passes the JSON input unchanged as output.
You can use the identity operator to test whether your setup works. If you see any parse errors, check that seaCreatures.json contains valid JSON.
Apply the identity operator to the JSON file with the following command:
- jq '.' seaCreatures.json
When using jq with files, you always pass a filter followed by the input file. Since filters may contain spacing and other characters that hold a special meaning to your shell, it is a good practice to wrap your filter in single quotation marks. Doing so tells your shell that the filter is a command parameter. Rest assured that running jq will not modify your original file.
You’ll receive the following output:
Output[
{
"name": "Sammy",
"type": "shark",
"clams": 5
},
{
"name": "Bubbles",
"type": "orca",
"clams": 3
},
{
"name": "Splish",
"type": "dolphin",
"clams": 2
},
{
"name": "Splash",
"type": "dolphin",
"clams": 2
}
]
By default, jq will pretty print its output. It will automatically apply indentation, add new lines after every value, and color its output when possible. Coloring may improve readability, which can help many developers as they examine JSON data produced by other tools. For example, when sending a curl request to a JSON API, you may want to pipe the JSON response into jq '.' to pretty print it.
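That pattern might look like the following, where the URL is just a placeholder for whichever JSON API you are querying:
- curl -s https://api.example.com/data | jq '.'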
You now have jq up and running. With your input file set up, you’ll manipulate the data using a few different filters in order to compute the values of all three attributes: creatures, totalClams, and totalDolphinClams. In the next step, you’ll find the information from the creatures value.
Retrieving the creatures Value
In this step, you will generate a list of all sea creatures, using the creatures value to find their names. At the end of this step, you will have generated the following list of names:
Output[
"Sammy",
"Bubbles",
"Splish",
"Splash"
]
Generating this list requires extracting the names of the creatures and then merging them into an array.
You’ll have to refine your filter to get the names of all creatures and discard everything else. Since you’re working on an array, you’ll need to tell jq you want to operate on the values of that array instead of the array itself. The array value iterator, written as .[], serves this purpose.
Run jq with the modified filter:
- jq '.[]' seaCreatures.json
Every array value is now output separately:
Output{
"name": "Sammy",
"type": "shark",
"clams": 5
}
{
"name": "Bubbles",
"type": "orca",
"clams": 3
}
{
"name": "Splish",
"type": "dolphin",
"clams": 2
}
{
"name": "Splash",
"type": "dolphin",
"clams": 2
}
Instead of outputting every array item in full, you’ll want to output the value of the name attribute and discard the rest. The pipe operator | will allow you to apply a filter to each output. If you have used find | xargs on the command line to apply a command to every search result, this pattern will feel familiar.
A JSON object’s name property can be accessed by writing .name. Combine the pipe with the filter and run this command on seaCreatures.json:
- jq '.[] | .name' seaCreatures.json
You’ll notice that the other attributes have disappeared from the output:
Output"Sammy"
"Bubbles"
"Splish"
"Splash"
By default, jq outputs valid JSON, so strings will appear in double quotation marks (""). If you need the string without double quotes, add the -r flag to enable raw output:
- jq -r '.[] | .name' seaCreatures.json
The quotation marks have disappeared:
OutputSammy
Bubbles
Splish
Splash
You now know how to extract specific information from the JSON input. You’ll use this technique to find other specific information in the next step and then to generate the creatures value in the final step.
Computing the totalClams Value with map and add
In this step, you’ll calculate the total number of clams the creatures own. You can calculate the answer by aggregating a few pieces of data. Once you’re familiar with jq, this will be faster than manual calculations and less prone to human error. The expected value at the end of this step is 12.
In Step 2, you extracted specific bits of information from a list of items. You can reuse this technique to extract the values of the clams attribute. Adjust the filter for this new attribute and run the command:
- jq '.[] | .clams' seaCreatures.json
The individual values of the clams attribute will be output:
Output5
3
2
2
To find the sum of individual values, you will need the add filter. The add filter works on arrays. However, you are currently outputting array values, so you must wrap them in an array first.
Surround your existing filter with [] as follows:
- jq '[.[] | .clams]' seaCreatures.json
The values will appear in a list:
Output[
5,
3,
2,
2
]
Before applying the add filter, you can improve the readability of your command with the map function, which also makes it easier to maintain. Iterating over an array, applying a filter to each of those items, and then wrapping the results in an array can be achieved with one map invocation. Given an array of items, map will apply its argument as a filter to each item. For example, if you apply the filter map(.name) to [{"name": "Sammy"}, {"name": "Bubbles"}], the resulting JSON object will be ["Sammy", "Bubbles"].
Rewrite the filter to use the map function instead, then run it:
- jq 'map(.clams)' seaCreatures.json
You will receive the same output as before:
Output[
5,
3,
2,
2
]
Since you have an array now, you can pipe it into the add filter:
- jq 'map(.clams) | add' seaCreatures.json
You’ll receive a sum of the array:
Output12
With this filter, you have calculated the total number of clams, which you’ll use to generate the totalClams value later. You’ve written filters for two out of three questions. You have one more filter to create, after which you can generate the final output.
Computing the totalDolphinClams Value with the add Filter
Now that you know how many clams the creatures own, you can identify how many of those clams the dolphins have. You can generate the answer by adding only the values of array elements that satisfy a specific condition. The expected value at the end of this step is 4, which is the total number of clams the dolphins have. In the final step, the resulting value will be used by the totalDolphinClams attribute.
Instead of adding all clams values as you did in Step 3, you’ll count only clams held by creatures with the "dolphin" type. You’ll use the select function to select a specific condition: select(condition). Any input for which the condition evaluates to true is passed on. All other input is discarded. If, for example, your JSON input is "dolphin" and your filter is select(. == "dolphin"), the output would be "dolphin". For the input "Sammy", the same filter would output nothing.
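You can try this behavior directly in your shell; both commands below are standalone sketches that pipe a bare JSON string into jq:
- echo '"dolphin"' | jq 'select(. == "dolphin")'
- echo '"Sammy"' | jq 'select(. == "dolphin")'
The first command prints "dolphin" back, while the second prints nothing.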
To apply select to every value in an array, you can pair it with map. In doing so, array values that don’t satisfy the condition will be discarded.
In your case, you only want to retain array values whose type value equals "dolphin". The resulting filter is:
- jq 'map(select(.type == "dolphin"))' seaCreatures.json
Your filter will not match Sammy the shark and Bubbles the orca, but it will match the two dolphins:
Output[
{
"name": "Splish",
"type": "dolphin",
"clams": 2
},
{
"name": "Splash",
"type": "dolphin",
"clams": 2
}
]
This output contains the number of clams per creature, as well as some information that isn’t relevant. To retain only the clams value, you can append the name of the field to the end of map’s parameter:
- jq 'map(select(.type == "dolphin").clams)' seaCreatures.json
The map function receives an array as input and will apply map’s filter (passed as an argument) to each array element. As a result, select gets called four times, once per creature. The select function will produce output for the two dolphins (as they match the condition) and omit the rest.
Your output will be an array containing only the clams values of the two matching creatures:
Output[
2,
2
]
Pipe the array values into add:
- jq 'map(select(.type == "dolphin").clams) | add' seaCreatures.json
Your output will return the sum of the clams values from creatures of the "dolphin" type:
Output4
You’ve successfully combined map and select to access an array, select array items matching a condition, transform them, and sum the result of that transformation. You can use this strategy to calculate totalDolphinClams in the final output, which you will do in the next step.
In the previous steps, you wrote filters to extract and manipulate the sample data. Now, you can combine these filters to generate an output that answers your questions about the data:
To find the names of the sea creatures in list form, you used the map function: map(.name). To find how many clams the creatures own in total, you piped all clams values into the add filter: map(.clams) | add. To find how many of those clams are owned by dolphins, you used the select function with the .type == "dolphin" condition: map(select(.type == "dolphin").clams) | add.
You’ll combine these filters into one jq command that does all of the work. You will create a new JSON object that merges the three filters in order to create a new data structure that displays the information you desire.
As a reminder, your starting JSON file matches the following:
[
{ "name": "Sammy", "type": "shark", "clams": 5 },
{ "name": "Bubbles", "type": "orca", "clams": 3 },
{ "name": "Splish", "type": "dolphin", "clams": 2 },
{ "name": "Splash", "type": "dolphin", "clams": 2 }
]
Your transformed JSON output will generate the following:
Final Output{
"creatures": [
"Sammy",
"Bubbles",
"Splish",
"Splash"
],
"totalClams": 12,
"totalDolphinClams": 4
}
Here is a demonstration of the syntax for the full jq command with empty input values:
- jq '{ creatures: [], totalClams: 0, totalDolphinClams: 0 }' seaCreatures.json
With this filter, you create a JSON object containing three attributes:
Output{
"creatures": [],
"totalClams": 0,
"totalDolphinClams": 0
}
That’s starting to look like the final output, but the input values are not correct because they have not been pulled from your seaCreatures.json file.
Replace the hard-coded attribute values with the filters you created in each prior step:
- jq '{ creatures: map(.name), totalClams: map(.clams) | add, totalDolphinClams: map(select(.type == "dolphin").clams) | add }' seaCreatures.json
The above filter tells jq to create a JSON object containing:
A creatures attribute containing a list of every creature’s name value.
A totalClams attribute containing a sum of every creature’s clams value.
A totalDolphinClams attribute containing a sum of every creature’s clams value for which type equals "dolphin".
Run the command, and the output of this filter should be:
Output{
"creatures": [
"Sammy",
"Bubbles",
"Splish",
"Splash"
],
"totalClams": 12,
"totalDolphinClams": 4
}
You now have a single JSON object providing relevant data for all three questions. Should the dataset change, the jq filter you wrote will allow you to re-apply the transformations at any time.
When working with JSON input, jq can help you perform a wide variety of data transformations that would be difficult with text manipulation tools like sed. In this tutorial, you filtered data with the select function, transformed array elements with map, summed arrays of numbers with the add filter, and learned how to merge transformations into a new data structure.
To learn about jq’s advanced features, dive into the jq reference documentation. If you often work with non-JSON command output, you can explore our guides on sed, awk, or grep for information on text processing techniques that will work on any format.
To save yourself some trouble with your web server, you can configure logging. Logging information on your server gives you access to the data that will help you troubleshoot and assess situations as they arise.
In this tutorial, you will examine Nginx’s logging capabilities and discover how to configure these tools to best serve your needs. You will use an Ubuntu 22.04 virtual private server as an example, but any modern distribution should function similarly.
To follow this tutorial, you will need:
A sudo-enabled user with a firewall. Follow our Initial Server Setup to get started.
With Nginx running on your Ubuntu 22.04 server, you’re ready to begin.
The Error_log Directive
Nginx uses a few different directives to control system logging. The one included in the core module is called error_log.
error_log Syntax
The error_log directive is used to handle logging general error messages. If you are familiar with Apache, this is very similar to Apache’s ErrorLog directive.
The error_log directive applies the following syntax:
error_log log_file log_level
The log_file specifies the file where the logs will be written. The log_level specifies the lowest level of logging that you would like to record.
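For example, a directive like the following would record anything at the warn level or above to a dedicated file; the path here is illustrative:
error_log /var/log/nginx/example_error.log warn;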
The error_log directive can be configured to log more or less information as required. The level of logging can be any one of the following:
emerg: Emergency situations where the system is in an unusable state.
alert: Severe situations where action is needed promptly.
crit: Important problems that need to be addressed.
error: An error has occurred and something was unsuccessful.
warn: Something out of the ordinary happened, but it is not a cause for concern.
notice: Something normal but worth noting has happened.
info: An informational message that might be nice to know.
debug: Debugging information that can be useful to pinpoint where a problem is occurring.
The levels higher on the list are considered a higher priority. If you specify a level, the log captures that level and any level above it.
For example, if you specify error, the log will capture messages labeled error, crit, alert, and emerg.
An example of this directive in use can be found in the main configuration file. Use your preferred text editor to open it. This example uses nano:
- sudo nano /etc/nginx/nginx.conf
Scroll down the file to the # Logging Settings section and notice the following directives:
. . .
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
. . .
If you do not want the error_log to log anything, you must send the output into /dev/null:
. . .
error_log /dev/null crit;
. . .
The other logging directive, access_log, will be discussed in the following section.
HttpLogModule Logging Directives
While the error_log directive is part of the core module, the access_log directive is part of the HttpLogModule. This provides the ability to customize the logs.
There are a few other directives included with this module that assist in configuring custom logs.
The log_format Directive
The log_format directive is used to describe the format of a log entry using plain text and variables.
There is one format that comes predefined with Nginx called combined. This is a common format used by many servers.
The following is an example of the combined format if it was not defined internally and needed to be specified with the log_format directive:
log_format combined '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
This definition spans multiple lines until it finds the semicolon (;). The portions beginning with a dollar sign ($) indicate variables, while characters like -, [, and ] are interpreted literally.
The general syntax of the directive is:
log_format format_name string_describing_formatting;
You can use variables supported by the core module to formulate your logging strings.
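As a sketch, a custom format that also records how long each request took might look like this; timed is an arbitrary format name, and $request_time is a standard Nginx variable:
log_format timed '$remote_addr [$time_local] "$request" $status $request_time';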
The access_log Directive
The access_log directive uses similar syntax to the error_log directive, but is more flexible. It is used to configure custom logging.
The access_log directive uses the following syntax:
access_log /path/to/log/location [ format_of_log buffer_size ];
The default value for access_log is the combined format mentioned in the log_format section. You can use any format defined by a log_format definition.
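For example, to write to a custom location using the predefined combined format, you could use a line like this (the path is illustrative):
access_log /var/log/nginx/example_access.log combined;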
The buffer size is the maximum size of data that Nginx will hold before writing it all to the log. You can also specify compression of the log file by adding gzip into the definition:
access_log /path/to/log/location format_of_log gzip;
Unlike the error_log directive, if you do not want logging, you can turn it off by updating it in the configuration file:
. . .
##
# Logging Settings
##
access_log off;
error_log /var/log/nginx/error.log;
. . .
It is not necessary to write to /dev/null in this case.
As log files grow, it becomes necessary to manage the logging mechanisms to avoid filling up your disk space. Log rotation is the process of switching out log files and possibly archiving old files for a set amount of time.
Nginx does not provide tools to manage log files, but it does include mechanisms to assist with log rotation.
To manually rotate your logs, you can create a script to rotate them. For example, move the current log to a new file for archiving. A common scheme is to name the most recent log file with a suffix of .0, and then name older files with .1, and so on:
- mv /path/to/access.log /path/to/access.log.0
The command that actually rotates the logs is kill -USR1 $(cat /var/run/nginx.pid). This does not kill the Nginx process, but instead sends it a signal causing it to reload its log files. This will cause new requests to be logged to the refreshed log file:
- kill -USR1 `cat /var/run/nginx.pid`
The /var/run/nginx.pid file is where Nginx stores the master process’s PID. It is specified at the top of the /etc/nginx/nginx.conf configuration file with the line that begins with pid:
- sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
...
After the rotation, execute sleep 1 to allow the process to complete the transfer. You can then zip the old files or do whatever post-rotation processes you like:
- sleep 1
- [ post-rotation processing of old log file ]
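Putting the manual steps together, a minimal rotation sequence might look like the following sketch; the paths are illustrative, and compressing with gzip is only one post-rotation option:
- mv /path/to/access.log /path/to/access.log.0
- kill -USR1 `cat /var/run/nginx.pid`
- sleep 1
- gzip /path/to/access.log.0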
Using logrotate
The logrotate application is a program used to rotate logs. It is installed on Ubuntu by default, and Nginx on Ubuntu comes with a custom logrotate script.
Use your preferred text editor to access the rotation script. This example uses nano:
- sudo nano /etc/logrotate.d/nginx
The first line of the file specifies the location that the subsequent lines will apply to. Keep this in mind if you switch the location of logging in the Nginx configuration files.
The rest of the file specifies that the logs will be rotated daily and that 52 older copies will be preserved.
Notice that the postrotate section contains a command similar to the manual rotation mechanisms previously employed:
. . .
postrotate
[ ! -f /var/run/nginx.pid ] || kill -USR1 `cat /var/run/nginx.pid`
endscript
. . .
This section tells Nginx to reload the log files once the rotation is complete.
Proper log configuration and management can save you time and energy in the event of a problem with your server. Having access to the information that will help you diagnose a problem can be the difference between a trivial fix and a persistent headache.
It is important to keep an eye on server logs in order to maintain a functional site and ensure that you are not exposing sensitive information. This guide serves only as an introduction to your experience with logging. You can learn more general tips in our tutorial on How To Troubleshoot Common Nginx Errors.
Logical Volume Management, or LVM, is a storage device management technology that gives users the power to pool and abstract the physical layout of component storage devices for flexible administration. Using the device mapper Linux kernel framework, the current iteration, LVM2, can be used to gather existing storage devices into groups and allocate logical units from the combined space as needed.
In this tutorial, you’ll learn how to manage LVM by displaying information about volumes and potential targets, creating and destroying volumes of various types, and modifying existing volumes through resizing or transformation.
To follow along, you will need to have a non-root user with sudo privileges configured on an Ubuntu 18.04 server. You can follow our Ubuntu 18.04 Initial Server Setup guide to get started.
Also, if you’re not familiar with LVM components and concepts, you can review our Introduction to LVM guide for more information.
When you are ready, log into your server with your sudo user.
Accessing information about the various LVM components on your system is essential for managing your physical and logical volumes. LVM provides a number of tools for displaying information about every layer in the LVM stack.
To display all of the available block storage devices that LVM can potentially manage, use the lvmdiskscan command:
- sudo lvmdiskscan
Output /dev/sda [ 200.00 GiB]
/dev/sdb [ 100.00 GiB]
2 disks
2 partitions
0 LVM physical volume whole disks
0 LVM physical volumes
Notice the devices that can potentially be used as physical volumes for LVM.
This will likely be your first step when adding new storage devices to use with LVM.
A header is written to storage devices to mark them as free to use as LVM components. Devices with these headers are called physical volumes.
You can display all of the physical devices on your system by using lvmdiskscan with the -l option, which will only return physical volumes:
- sudo lvmdiskscan -l
Output WARNING: only considering LVM devices
/dev/sda [ 200.00 GiB] LVM physical volume
/dev/sdb [ 100.00 GiB] LVM physical volume
2 LVM physical volume whole disks
0 LVM physical volumes
The pvscan command is similar in that it searches all available devices for LVM physical volumes. The output format includes a small amount of additional information:
- sudo pvscan
Output PV /dev/sda VG LVMVolGroup lvm2 [200.00 GiB / 0 free]
PV /dev/sdb VG LVMVolGroup lvm2 [100.00 GiB / 10.00 GiB free]
Total: 2 [299.99 GiB] / in use: 2 [299.99 GiB] / in no VG: 0 [0 ]
If you need additional details about your volume, the pvs and pvdisplay commands can do that for you.
The pvs command is highly configurable and can display information in many different formats. Because its output can be tightly controlled, it is frequently used when scripting or automation is needed. Its basic output provides a useful at-a-glance summary similar to the earlier commands:
- sudo pvs
Output PV VG Fmt Attr PSize PFree
/dev/sda LVMVolGroup lvm2 a-- 200.00g 0
/dev/sdb LVMVolGroup lvm2 a-- 100.00g 10.00g
For more verbose, human-readable output, the pvdisplay command is a good option:
- sudo pvdisplay
Output --- Physical volume ---
PV Name /dev/sda
VG Name LVMVolGroup
PV Size 200.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 51199
Free PE 0
Allocated PE 51199
PV UUID kRUOyU-0ib4-ujPh-kAJP-eeQv-ztRL-4EkaDQ
--- Physical volume ---
PV Name /dev/sdb
VG Name LVMVolGroup
PV Size 100.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 25599
Free PE 2560
Allocated PE 23039
PV UUID udcuRJ-jCDC-26nD-ro9u-QQNd-D6VL-GEIlD7
To discover the logical extents that have been mapped to each volume, pass in the -m option to pvdisplay:
- sudo pvdisplay -m
Output --- Physical volume ---
PV Name /dev/sda
VG Name LVMVolGroup
PV Size 200.00 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 51199
Free PE 38395
Allocated PE 12804
PV UUID kRUOyU-0ib4-ujPh-kAJP-eeQv-ztRL-4EkaDQ
--- Physical Segments ---
Physical extent 0 to 0:
Logical volume /dev/LVMVolGroup/db_rmeta_0
Logical extents 0 to 0
Physical extent 1 to 5120:
Logical volume /dev/LVMVolGroup/db_rimage_0
Logical extents 0 to 5119
. . .
This can be very useful when trying to determine which data is held on which physical disk for management purposes.
LVM also has plenty of tools to display information about volume groups.
The vgscan command can be used to scan the system for available volume groups. It also rebuilds the cache file when necessary. It is a good command to use when you are importing a volume group into a new system:
- sudo vgscan
Output Reading all physical volumes. This may take a while...
Found volume group "LVMVolGroup" using metadata type lvm2
This command does not output very much information, but it should be able to find every available volume group on the system. To display more information, the vgs and vgdisplay commands are available.
Like its physical volume counterpart, the vgs command is versatile and can display a large amount of information in a variety of formats. Because its output can be manipulated, it is frequently used when scripting or automation is needed. For example, some helpful output modifications are to show the physical devices and the logical volume path:
- sudo vgs -o +devices,lv_path
Output VG #PV #LV #SN Attr VSize VFree Devices Path
LVMVolGroup 2 4 0 wz--n- 299.99g 10.00g /dev/sda(0) /dev/LVMVolGroup/projects
LVMVolGroup 2 4 0 wz--n- 299.99g 10.00g /dev/sda(2560) /dev/LVMVolGroup/www
LVMVolGroup 2 4 0 wz--n- 299.99g 10.00g /dev/sda(3840) /dev/LVMVolGroup/db
LVMVolGroup 2 4 0 wz--n- 299.99g 10.00g /dev/sda(8960) /dev/LVMVolGroup/workspace
LVMVolGroup 2 4 0 wz--n- 299.99g 10.00g /dev/sdb(0) /dev/LVMVolGroup/workspace
Likewise, for more verbose, human-readable output, use the vgdisplay command. Adding the -v flag provides information about the physical volumes the volume group is built upon, and the logical volumes that were created using the volume group:
- sudo vgdisplay -v
Output Using volume group(s) on command line.
--- Volume group ---
VG Name LVMVolGroup
. . .
--- Logical volume ---
LV Path /dev/LVMVolGroup/projects
. . .
--- Logical volume ---
LV Path /dev/LVMVolGroup/www
. . .
--- Logical volume ---
LV Path /dev/LVMVolGroup/db
. . .
--- Logical volume ---
LV Path /dev/LVMVolGroup/workspace
. . .
--- Physical volumes ---
PV Name /dev/sda
. . .
PV Name /dev/sdb
. . .
The vgdisplay command is useful because it can tie together information about many different elements of the LVM stack.
To display information about logical volumes, LVM has a related set of tools.
As with the other LVM components, the lvscan command scans the system and outputs minimal information about the logical volumes it finds:
- sudo lvscan
Output ACTIVE '/dev/LVMVolGroup/projects' [10.00 GiB] inherit
ACTIVE '/dev/LVMVolGroup/www' [5.00 GiB] inherit
ACTIVE '/dev/LVMVolGroup/db' [20.00 GiB] inherit
ACTIVE '/dev/LVMVolGroup/workspace' [254.99 GiB] inherit
For more complete information, the lvs command is flexible and powerful to use in scripts:
- sudo lvs
Output LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
db LVMVolGroup -wi-ao---- 20.00g
projects LVMVolGroup -wi-ao---- 10.00g
workspace LVMVolGroup -wi-ao---- 254.99g
www LVMVolGroup -wi-ao---- 5.00g
To find the number of stripes and the logical volume type, use the --segments option:
- sudo lvs --segments
Output LV VG Attr #Str Type SSize
db LVMVolGroup rwi-a-r--- 2 raid1 20.00g
mirrored_vol LVMVolGroup rwi-a-r--- 3 raid1 10.00g
test LVMVolGroup rwi-a-r--- 3 raid5 10.00g
test2 LVMVolGroup -wi-a----- 2 striped 10.00g
test3 LVMVolGroup rwi-a-r--- 2 raid1 10.00g
The most human-readable output is produced by the lvdisplay command. When the -m flag is added, the tool will also display information about how the logical volume is broken down and distributed:
- sudo lvdisplay -m
Output --- Logical volume ---
LV Path /dev/LVMVolGroup/projects
LV Name projects
VG Name LVMVolGroup
LV UUID IN4GZm-ePJU-zAAn-DRO3-1f2w-qSN8-ahisNK
LV Write Access read/write
LV Creation host, time lvmtest, 2016-09-09 21:00:03 +0000
LV Status available
# open 1
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:0
--- Segments ---
Logical extents 0 to 2559:
Type linear
Physical volume /dev/sda
Physical extents 0 to 2559
. . .
In this example, the /dev/LVMVolGroup/projects logical volume is contained entirely within the /dev/sda physical volume. This information is useful if you need to remove that underlying device and wish to move the data off to specific locations.
This section discusses how to create and expand physical volumes, volume groups, and logical volumes.
To use storage devices with LVM, they must first be marked as a physical volume. This specifies that LVM can use the device within a volume group.
First, use the lvmdiskscan command to find all block devices that LVM can access and use:
- sudo lvmdiskscan
Output /dev/sda [ 200.00 GiB]
/dev/sdb [ 100.00 GiB]
2 disks
2 partitions
0 LVM physical volume whole disks
0 LVM physical volumes
Here, notice the devices that are suitable to be turned into physical volumes for LVM.
Warning: Make sure that you double-check that the devices you intend to use with LVM do not have any important data already written to them. Using these devices within LVM will overwrite the current contents. If you have important data on your server, make backups before proceeding.
To mark the storage devices as LVM physical volumes, use pvcreate. You can pass in multiple devices at once:
- sudo pvcreate /dev/sda /dev/sdb
This command writes an LVM header on all of the target devices to mark them as LVM physical volumes.
To create a new volume group from LVM physical volumes, use the vgcreate command. You have to provide a volume group name, followed by at least one LVM physical volume:
- sudo vgcreate volume_group_name /dev/sda
This example creates your volume group with a single initial physical volume. You can pass in more than one physical volume at creation if you’d like:
- sudo vgcreate volume_group_name /dev/sda /dev/sdb /dev/sdc
Usually, you only need a single volume group per server. All LVM-managed storage can be added to that pool and then logical volumes can be allocated from that.
One reason you may wish to have more than one volume group is if you feel you need to use different extent sizes for different volumes. You don’t typically have to set the extent size (the default size of 4M is adequate for most uses), but if you need to, you can do so upon volume group creation by passing the -s option:
- sudo vgcreate -s 8M volume_group_name /dev/sda
This will create a new volume group with an 8M extent size.
To expand a volume group by adding additional physical volumes, use the vgextend command. This command takes a volume group followed by the physical volumes to add. You can pass in multiple devices at once if you’d like:
- sudo vgextend volume_group_name /dev/sdb
The physical volume will be added to the volume group, expanding the available capacity of the storage pool.
To create a logical volume from a volume group storage pool, use the lvcreate command. Specify the size of the logical volume with the -L option, specify a name with the -n option, and pass in the volume group to allocate the space from.
For instance, to create a 10G logical volume named test from the LVMVolGroup volume group, write:
- sudo lvcreate -L 10G -n test LVMVolGroup
If the volume group has enough free space to accommodate the volume capacity, the new logical volume will be created.
If you wish to create a volume using the remaining free space within a volume group, use the lvcreate command with the -n option to name it, and pass in the volume group as in the previous step. Instead of passing in a size, use the -l 100%FREE option, which uses the remaining extents within the volume group to form the logical volume:
- sudo lvcreate -l 100%FREE -n test2 LVMVolGroup
This should use up the remaining space in the volume group.
Logical volumes can be created with some advanced options. Some options that you may wish to consider are:
--type: This specifies the type of logical volume, which determines how the logical volume is allocated. Some types will not be available if there are not enough underlying physical volumes to correctly create the chosen topology. Some of the most common types are:
linear: The default type. The underlying physical devices used, if more than one, will be appended to each other, one after the other.
striped: Similar to RAID 0, the striped topology divides data into chunks and spreads them in a round-robin fashion across the underlying physical volumes. This can lead to performance improvements, but might lead to greater data vulnerability. This requires the -i option and a minimum of two physical volumes.
raid1: Creates a mirrored RAID 1 volume. By default, the mirror will have two copies, but more can be specified by the -m option. This requires a minimum of two physical volumes.
raid5: Creates a RAID 5 volume. This requires a minimum of three physical volumes.
raid6: Creates a RAID 6 volume. This requires a minimum of four physical volumes.
-m: Specifies the number of additional copies of data to keep. A value of “1” specifies that one additional copy is maintained, for a total of two sets of data.
-i: Specifies the number of stripes that should be maintained. This is required for the striped type, and can modify the default behavior of some of the other RAID options.
-s: Specifies that the action should create a snapshot from an existing logical volume instead of a new independent logical volume.
To demonstrate, begin by creating a striped volume. You must specify at least two stripes for this method. This topology and stripe count requires a minimum of two physical volumes with available capacity:
- sudo lvcreate --type striped -i 2 -L 10G -n striped_vol LVMVolGroup
To create a mirrored volume, use the raid1 type. If you want more than two sets of data, use the -m option. This example uses -m 2 to create a total of three sets of data. LVM counts this as one original data set with two mirrors. You need at least three physical volumes for this to succeed:
- sudo lvcreate --type raid1 -m 2 -L 20G -n mirrored_vol LVMVolGroup
To create a snapshot of a volume, you must provide the original logical volume to snapshot instead of the volume group. Snapshots do not take up much space initially, but grow in size as changes are made to the logical volume they track. The size used during this procedure is the maximum size that the snapshot can be. Snapshots that grow past this size are broken and cannot be used; however, snapshots approaching their capacity can be extended:
- sudo lvcreate -s -L 10G -n snap_test LVMVolGroup/test
Note: To revert a logical volume to the point-in-time of a snapshot, use the lvconvert --merge command:
- sudo lvconvert --merge LVMVolGroup/snap_test
This will bring the origin of the snapshot back to the state when the snapshot was taken.
There are a number of options that can dramatically alter the way that your logical volumes function.
One of the main advantages of LVM is the flexibility it provides in provisioning logical volumes. You can adjust the number or size of volumes on the fly without stopping the system.
To grow the size of an existing logical volume, use the lvresize command. Use the -L flag to specify a new size. You can also use relative sizes by prefixing the size with a +; in that case, LVM will increase the size of the logical volume by the amount specified. To automatically resize the filesystem being used on the logical volume, pass in the --resizefs flag.
To correctly provide the name of the logical volume to expand, you need to give the volume group, followed by a slash, followed by the logical volume:
- sudo lvresize -L +5G --resizefs LVMVolGroup/test
In this example, the logical volume and the filesystem of the test logical volume on the LVMVolGroup volume group will both be increased by 5G.
If you wish to handle the filesystem expansion manually, take out the --resizefs option and use the filesystem’s native expansion utility afterwards. For example, for an Ext4 filesystem, write:
- sudo lvresize -L +5G LVMVolGroup/test
- sudo resize2fs /dev/LVMVolGroup/test
This returns the same result.
Since capacity reduction can result in data loss, the procedures for shrinking the available capacity, whether by reducing the size of components or by removing them, are typically a bit more involved.
To shrink a logical volume, you should first back up your data. Because this reduces the available capacity, mistakes can lead to data loss.
When you are ready, check on how much space is currently being used:
- df -h
OutputFilesystem Size Used Avail Use% Mounted on
. . .
/dev/mapper/LVMVolGroup-test 4.8G 521M 4.1G 12% /mnt/test
In this example, about 521M of space is currently in use. Use this to help you estimate the size that you can reduce the volume to.
Unlike expansions, filesystem shrinking should be performed when unmounted. First, make sure you’re in the root directory:
- cd ~
Next, unmount the filesystem:
- sudo umount /dev/LVMVolGroup/test
After unmounting, check the filesystem to ensure that everything is in working order. Pass in the filesystem type with the -t option, and use -f to force the check even if the filesystem appears clean:
- sudo fsck -t ext4 -f /dev/LVMVolGroup/test
After checking the filesystem, you can reduce the filesystem size using the filesystem’s native tools. For Ext4 filesystems, this would be the resize2fs command. Pass in the final size for the filesystem:
Warning: The safest option here is to choose a final size that is a fair amount larger than your current usage. Give yourself some buffer room to avoid data loss and ensure that you have backups in place.
- sudo resize2fs -p /dev/LVMVolGroup/test 3G
Once the operation is complete, resize the logical volume by passing the same size to the lvresize command with the -L flag:
- sudo lvresize -L 3G LVMVolGroup/test
You are warned about the possibility of data loss. If you are ready, enter y to proceed.
After the logical volume has been reduced, check the filesystem again:
- sudo fsck -t ext4 -f /dev/LVMVolGroup/test
If everything is functioning correctly, you can remount the filesystem using your usual mount command:
- sudo mount /dev/LVMVolGroup/test /mnt/test
Your logical volume should now be reduced to the appropriate size.
If you no longer need a logical volume, you can remove it with the lvremove command.
First, unmount the logical volume if it is currently mounted:
- cd ~
- sudo umount /dev/LVMVolGroup/test
Afterwards, remove the logical volume by entering this command:
- sudo lvremove LVMVolGroup/test
You are asked to confirm the procedure. If you are certain you want to delete the logical volume, press y.
To remove an entire volume group, including all of the logical volumes within it, use the vgremove command.
Before you remove a volume group, you should remove the logical volumes using the procedure previously discussed. At the very least, you must make sure that you unmount any logical volumes that the volume group contains:
- sudo umount /dev/LVMVolGroup/www
- sudo umount /dev/LVMVolGroup/projects
- sudo umount /dev/LVMVolGroup/db
Afterwards, you can delete the entire volume group by passing the volume group name to the vgremove command:
- sudo vgremove LVMVolGroup
You are then prompted to confirm that you wish to remove the volume group. If you have any logical volumes still present, you are given individual confirmation prompts for those before removing.
To remove a physical volume from LVM management, the procedure you need depends on whether the device is currently being used by LVM.
If the physical volume is in use, you have to move the physical extents located on the device to a different location. This requires the volume group to have enough other physical volumes to handle the physical extents. If you are using more complex logical volume types, you might need additional physical volumes even when you have plenty of free space to accommodate the topology.
When you have enough physical volumes in the volume group to handle the physical extents, move them off of the physical volume you wish to remove by running:
- sudo pvmove /dev/sda
This process can take time depending on the size of the volumes and the amount of data to transfer.
Once the extents have been relocated to peer volumes, you can remove the physical volume from the volume group:
- sudo vgreduce LVMVolGroup /dev/sda
This removes the vacated physical volume from the volume group. After this is complete, you can remove the physical volume marker from the storage device:
- sudo pvremove /dev/sda
You can now use the removed storage device for other purposes or remove it from the system entirely.
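As an optional final check, you can verify that the device no longer carries an LVM marker by listing the remaining physical volumes:
- sudo pvs
The removed device should be absent from the output.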
You now have an understanding of how to manage storage devices on Ubuntu 18.04 with LVM. You also know how to get information about the state of existing LVM components, how to use LVM to compose your storage system, and how to modify volumes to meet your needs. Feel free to test these concepts in a safe environment to get a better grasp of how they fit together.
Emacs is one of the oldest and most versatile text editors. The GNU Emacs version was originally written in 1984 and is well known for its powerful and rich editing features. It can be customized and extended with different modes, enabling it to be used like an Integrated Development Environment (IDE) for programming languages such as Java, C, and Python.
For those who have used both the Vi and the user-friendly nano text editors, Emacs presents itself as an in-between. Its strengths and features resemble those of Vi, while its menus, help files, and command-keys compare with nano.
In this article, you’ll learn how to install Emacs on an Ubuntu 22.04 server and use it for basic text editing.
To follow this tutorial, you’ll need an Ubuntu 22.04 server set up with a non-root user with sudo
privileges and firewall enabled. You can set this up by following our Initial Server Setup with Ubuntu 22.04 guide.
Begin by checking if your system already has Emacs installed:
- emacs
If the program is installed, the editor will start with the default welcome message. If not, you’ll receive this output:
OutputCommand 'emacs' not found, but can be installed with:
sudo apt install e3 # version 1:2.82+dfsg-2
sudo apt install emacs-gtk # version 1:27.1+1-3ubuntu5
sudo apt install emacs-lucid # version 1:27.1+1-3ubuntu5
sudo apt install emacs-nox # version 1:27.1+1-3ubuntu5
sudo apt install jove # version 4.17.3.6-2
See 'snap info emacs' for additional versions.
To install Emacs, use the following command:
- sudo apt install emacs
After installing Emacs on your machine, you’re ready to move on to the next step.
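If you want to confirm which version was installed before launching the editor, you can check from the terminal (the exact version string will depend on your package sources):
- emacs --version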
Start Emacs by issuing the command emacs
in your terminal:
- emacs
Emacs starts with an empty editing buffer and waits for you to start typing. When Emacs is started without a specified file, the program displays a welcome message:
To start a new file, move the cursor over to the link “Visit New File” by pressing the TAB
key and then press ENTER
. You can also press CTRL+X
, then CTRL+F
to create a new file. A prompt appears at the end of your terminal requesting a file name:
Enter a filename to get started with text editing. In the following example, myfile.txt
is used. You can name this file whatever you like. Once you enter your file name, press ENTER
to proceed.
An empty file will be ready for text entry:
At the top of the screen there is a menu. After the menu, there is a large editing space. This is called the main buffer where you type your text or view the contents of a file.
When Emacs edits an existing file on disk, a copy of that document is first loaded into memory and then displayed in the main editing window. This area in memory is called a buffer. As you work through the document, all the changes you make in the editing space are applied to the buffer, while the original file on disk remains unchanged. Occasionally, Emacs will auto-save in the background, but it’s only when you manually save the document that the changes are written to the disk. The same applies for a new file as well. All changes are made on the buffer until you save it. The main editing space in Emacs is your view to the buffer.
After the main buffer, a highlighted bar of text is displayed near the bottom of the screen. This is called the status bar or the mode line. The text revealed here depends on what mode Emacs is currently in. Among other things, the status bar includes:
Name of the current file
Current cursor location
Current editing mode
The status of the file (-- for an unmodified file, ** for a file with unsaved changes and %% for read-only files)
Finally, a single line of space exists after the status bar where the screen ends. In this example, it’s showing the text “(New File)”. This area is called the mini buffer. Emacs is a command driven tool and the mini buffer is your main point of interaction. This is where Emacs prompts you for command inputs and reveals output.
The text-based version of Emacs treats windows differently from its GUI-based version. Unlike GUI-based applications, text-based Emacs windows don’t pop out as they can’t physically do so in a terminal or console session. When Emacs needs to start a new window its main buffer is split into two parts, like having two frames in a browser. The top half shows the main buffer and the bottom half displays the new content. An example of Emacs spawning a new window is when you are accessing its help files or tutorials.
When Emacs starts, it usually takes up the whole screen. Most of its functions are accessible from a menu bar located at the top of the screen.
Unlike GUI-based programs, text-based menus can’t be dropped down with a mouse click; they are driven entirely by the keyboard.
To access the menus, press the F10
key. This opens another window under the main buffer, and displays a list of keys to access the menu items. The mini buffer will prompt you to enter the required key. Once you press that key, the contents of the new window will change, reflecting the next level of options.
To exit the menus, no matter how deep you are in, press the ESC
key three times. This typically closes the menu window and takes you back into the main buffer.
Here are some of the options available from the Tools
menu:
Emacs has an extensive help system along with tutorials. To access it, you can either use the menu by pressing F10
and press the RIGHT
or LEFT
arrow keys to select Help
, or press CTRL+H
then a corresponding key. For example, you can enter one of the following keys after pressing CTRL+H
to review FAQs, tutorials, news, and other topics:
t
to enter an Emacs Tutorial
CTRL+F
for an FAQ
CTRL+P
to learn about known bugs and problems
CTRL+R
to read the Emacs Manual
CTRL+E
to find extra packages
Now that you are familiar with the user interface, you can start familiarizing yourself with Emacs’ command keys. When you open a file, you can start typing and issuing commands at the same time.
Command functions usually involve two or three keys. The most common is the CTRL
key, followed by the ALT
or ESC
key. CTRL
is shown in short form as “C” within the Emacs environment. A notation within Emacs like C-x C-c
means that you press the CTRL+X
keys together, then press CTRL+C
. Similarly, C-h t
means press CTRL+H
together, then release both keys and press t
.
ALT
and ESC
keys are referred to as meta keys in Emacs. On Apple machines, instead of ALT
, use the OPTION
key. Other keyboards use an EDIT
key. Similar to the CTRL
key, Emacs uses multi-key functions with the meta key. For example, a notation like M-x
means that you press ALT
or OPTION
and x
together. Likewise, you could use ESC+X
to accomplish the same command.
The ENTER
key is shown as RET
in Emacs, which is short for return. The ESC
key is often shown as E
.
The ESC
key can be used to back out of a command or prompt. For example, you can press ESC
multiple times to exit out of a specific menu. Another way of canceling an operation is by pressing CTRL+G
.
Once you have made some changes to your document or written some text, you can save it by pressing CTRL+X
, followed by CTRL+S
. The mini buffer will output the following message:
OutputWrote /home/sammy/myfile.txt
You can exit out of Emacs by pressing CTRL+X
, then CTRL+C
.
If you didn’t manually save the file before exiting, you’ll receive this message:
OutputSave file /home/sammy/myfile.txt? (y, n, !, ., q, C-r, C-f, d or C-h)
Press Y
to save the file.
If you press N
for no, you’ll receive this message:
OutputModified buffers exist; exit anyway? (yes or no)
Enter yes
to exit out without saving.
Navigating through a long document or help topic can be a tedious task. Fortunately, in Emacs there are multiple ways to navigate a file.
Here is a list of some common navigation functions:
To perform this function | Use these keys |
---|---|
Moving to the next line | CTRL+N (N for Next) |
Moving to the previous line | CTRL+P (P for Previous) |
Moving one character forward | CTRL+F (F for Forward) |
Moving one character backward | CTRL+B (B for Backward) |
Moving one word forward | META+F (F for Forward) |
Moving one word backward | META+B (B for Backward) |
Moving to the start of a line | CTRL+A |
Moving to the end of a line | CTRL+E (E for End) |
Moving to the start of a sentence | META+A |
Moving to the end of a sentence | META+E (E for End) |
Moving one page down | CTRL+V (or PgDn) |
Moving one page up | META+V (or PgUp) |
Moving to the beginning of the file | META+< (Alt + Shift + “<”) |
Moving to the end of the file | META+> (Alt + Shift + “>”) |
Remember that META
means you could use any of the following keys: ALT
, ESC
, OPTION
, or EDIT
.
If you need to perform more specialized tasks common to popular word processors, like selecting or highlighting a specific section of a text file, you can do that in Emacs.
To mark a text region follow these steps:
Move the cursor to the position where you would like the selection to start. You can use any of the methods described previously to move the cursor.
Press CTRL+SPACEBAR
or CTRL+@
to set a mark to begin your text highlighting. The mini buffer will show a status message of Mark Activated
.
Move the cursor to the position where you want the region to end, using any of the key combinations described before.
The text will be highlighted up to the point where your cursor is now located.
Press CTRL+SPACEBAR
or CTRL+@
twice to unmark the highlighted text. The mini buffer will show a status message of Mark Deactivated
.
Alternatively, like a word processor, you can hold the SHIFT
key and move your cursor with the UP
or DOWN
arrow keys on your keyboard to make your selection.
If you want to select the paragraph your cursor is currently on, press META+H
. Pressing META+h
continuously thereafter will select the next paragraphs in your text file.
If you want to select all the contents of the main buffer (i.e. “select all”), press CTRL+X
then h
.
Similar to a word processor, you can copy, cut, and paste text:
To copy the text you’ve selected, press META+W
.
To cut the text selection, press CTRL-W
.
To paste a text selection, press CTRL-Y
.
Deleting text by using the Backspace
and Delete
keys work the way you would expect them to.
To delete a whole word quickly, move the cursor to the beginning of a word and press META+D
. To delete multiple words, press and hold the META
key and continuously press D
. Words will be deleted one by one.
To delete a whole line, position the cursor where you want it, then press CTRL+K
. This deletes the text right up to the end of the line.
To delete a sentence, press META+K
. Please note, however, that Emacs will delete a whole line or more if there aren’t two spaces after the full stop. The two spaces after a full stop is how Emacs determines when a sentence has broken across multiple lines.
You can undo the last operation by pressing CTRL+X
then u
. An alternative key combination is CTRL+_
(The key press here would be CTRL
, SHIFT
, and -
to perform an underscore).
To redo your last undo, press CTRL+G
, followed by CTRL+_
.
There are two search directions in Emacs: forward and backward. In forward search, the word you specify will be searched forward from the current cursor position. For backward search, it’s the other way round.
Press CTRL+S
for forward search. Then input the text you’re searching for in the mini-buffer prompt.
Press CTRL+R
for backward search.
Immediately after you input your search term, Emacs will search for it and highlight any matches it finds in the main buffer.
For example, searching for the word “cat” in a text file will reveal every occurrence in the main buffer as a highlighted text:
To replace text, follow these steps:
Press META+%
. The mini buffer will prompt for the text to be searched with Query replace:
.
Input the text that you’re replacing and press ENTER
.
The mini buffer will display Query replace your_search_term with:
.
Enter the word or phrase you want to replace the your_search_term with and press ENTER
.
Each match will be highlighted, and you will be given a prompt to make a replacement. The mini buffer will ask Query replacing your_search_word with your_replacement_word: (C-h for help)
.
Press y
to replace the current match found.
Press n
to skip to the next match.
Press q
to exit without any replacements.
Press !
to do a global replacement without any prompts. The mini buffer will output this message: replaced number occurrences
.
To center a line, move the cursor to the beginning of that line and press META+O
, then META+S
.
To justify a selected text region do the following:
Highlight the text you wish to justify.
Press META+X
. The mini buffer will await a response.
Input set-justification-
and press the TAB
key.
You will be given the following completion options: set-justification-center
, set-justification-left
, set-justification-right
, set-justification-none
and set-justification-full
.
Complete the justification command by selecting set-justification-right
or one of your choice, then press ENTER
.
The selected text will be justified to the direction of your choosing.
Here is an example of the text assigned to the different justification settings:
You can convert casing with a few different commands. Here’s a list of some command keys:
To perform this function | Use these keys |
---|---|
Capitalizing a word after the cursor | META+C (C for capitalize) |
Converting a word to lowercase | META+L (L for lowercase) |
Converting a word to uppercase | META+U (U for uppercase) |
Converting a paragraph to uppercase | Block select, then CTRL+X CTRL+U |
Converting a paragraph to lowercase | Block select, then CTRL+X CTRL+L |
If you’re converting a full paragraph or more to uppercase or lowercase, you’ll be given a new window and message:
WindowYou have typed C-x C-l, invoking disabled command downcase-region. It is disabled because new users often find it confusing. Here’s the first part of its description: Convert the region to lower case. In programs, it wants two arguments. These arguments specify the starting and ending character numbers of the region to operate on. When used as a command, the text between point and the mark is operated on. Do you want to use this command anyway? You can now type 'y' to try it and enable it (no questions if you use it again). 'n' to cancel--don’t try the command, and it remains disabled. 'SPC' to try the command just this once, but leave it disabled. '!' to try it, and enable all disabled commands for this session only.
Proceed by pressing the mentioned keys.
Managing windows within Emacs can help you work more efficiently with your files.
For example, from your main buffer, switch into the Emacs tutorial by pressing CTRL+h
then t
. Your main buffer window is now the Emacs tutorial. If you wanted to switch back to the myfile.txt
buffer, press CTRL+X
, then b
. This is the switch buffer command. Emacs will prompt you for a buffer name to switch into. Start typing the buffer name, myfile.txt
, and press ENTER
. This will take you from the Emacs tutorial, to the file you specified.
One of the reasons Emacs has been adopted so widely in the UNIX community is due to its ability to assume different modes. A mode can enhance the functionality of Emacs.
Depending on the mode selected, Emacs can be used as a word processor for writing text files, or it can be adapted for advanced tasks like writing Python, C, or Java code. For example, you can change Emacs’ mode to make it work with version control systems, run shell commands, or read man pages.
There are two different types of Emacs modes. One is called the major mode. In major mode, Emacs can be used as an integrated development environment (IDE) for programming or scripting languages. In this mode, the program offers specialized features like color syntax-highlighting, indentation and formatting, language specific menu options, or automatic interfacing with debuggers and compilers.
To demonstrate, you can write a “Hello World” app in Python using Emacs.
Inside your terminal and in your home directory, enter the following commands:
- cd ~
- emacs hello.py
Emacs recognizes the file extension and will start in Python mode. In the main buffer, enter the following Python code:
print("hello world!\n")
The keywords are now indicated with color syntax-highlighting. Also notice that the status line above the mini buffer reveals the mode that you’re currently in. The main menu also has a separate entry specifically for Python:
Save the buffer with CTRL+X
CTRL+S
.
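If you want to run the script outside of Emacs, you can exit the editor and invoke the Python 3 interpreter, which ships with Ubuntu by default:
- python3 hello.py
Outputhello world!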
To change the major mode from within Emacs, press META+X
. The mini buffer will wait for your response. You can then enter a different mode. Here are some examples of major modes:
Compared to major modes, minor modes offer more specific features. These features can be tied to a specific major mode, or have a system-wide effect irrespective of the major mode. Also, unlike major modes, there can be multiple minor modes in effect at any one time. Minor modes are like switches: some are enabled by default, some are not. If a minor mode is already on, calling it will switch it off. If it is off, it will be switched back on.
An example of a minor mode is the option for setting justification used in the previous examples.
Another example of a minor mode is the auto-fill-mode
. To enter this mode in your Emacs editor, press the META+X
key, then enter auto-fill-mode
.
This mode enables a line of text to break and wrap to the next line when its length becomes more than 70 characters. Remember that when you invoke a minor mode, it’s very much like a toggle switch. Invoking the same command again will disable the line wrap.
Here are some more examples of minor modes:
auto-save-mode
: This toggles auto saving, which periodically saves the contents of the main buffer behind the scenes.
line-number-mode
: This toggles the display of the current line number in the status bar.
linum-mode
: Toggles the display of line numbers along the left edge of the window.
column-number-mode
: Shows the current position of the cursor in the status bar.
overwrite-mode
: This is like pressing the INS
key on your keyboard. When switched on, it will overwrite text on the right side of the cursor as you type.
menu-bar-mode
: This can switch the main menu on or off.
In this tutorial, you’ve learned about the various commands, editing features, and modes in Emacs.
To further your understanding of the Emacs editor, the GNU Emacs web page has a wealth of information including links to other resources like Emacs Wiki. You can also read the GNU Emacs manual.
Linux system administrators often need to look at log files for troubleshooting purposes. This is one of the first things a sysadmin would do.
Linux and the applications that run on it can generate all different types of messages, which are recorded in various log files. Linux uses a set of configuration files, directories, programs, commands and daemons to create, store and recycle these log messages. Knowing where the system keeps its log files and how to make use of related commands can therefore help save valuable time during troubleshooting.
In this tutorial, we will have a look at different parts of the Linux logging mechanism.
Disclaimer
The commands in this tutorial were tested in plain vanilla installations of CentOS 9, Ubuntu 22.10, and Debian 11.
The default location for log files in Linux is /var/log
. You can view the list of log files in this directory with the following command:
- ls -l /var/log
You’ll see something similar to this on your CentOS system:
Output[root@centos-9-trim ~]# ls -l /var/log
total 49316
drwxr-xr-x. 2 root root 6 Sep 27 19:17 anaconda
drwx------. 2 root root 99 Jan 3 08:23 audit
-rw-rw----. 1 root utmp 1234560 Jan 3 16:16 btmp
-rw-rw----. 1 root utmp 17305344 Jan 1 00:00 btmp-20230101
drwxr-x---. 2 chrony chrony 6 Aug 10 2021 chrony
-rw-r--r--. 1 root root 130466 Dec 8 22:12 cloud-init.log
-rw-r-----. 1 root adm 10306 Dec 8 22:12 cloud-init-output.log
-rw-------. 1 root root 36979 Jan 3 16:03 cron
-rw-------. 1 root root 27360 Dec 10 23:15 cron-20221211
-rw-------. 1 root root 94140 Dec 17 23:07 cron-20221218
-rw-------. 1 root root 95126 Dec 24 23:14 cron-20221225
-rw-------. 1 root root 95309 Dec 31 23:04 cron-20230101
…
Here are some common log files you will find under /var/log
:
wtmp
utmp
dmesg
messages
maillog
or mail.log
spooler
auth.log
or secure
The wtmp
and utmp
files keep track of users logging in and out of the system. You cannot directly read the contents of these files using cat
commands in the terminal; instead, there are purpose-built commands for reading them, and you will use some of those commands next.
To see who is currently logged in to the Linux server, use the who
command. This command gets its values from the /var/run/utmp
file (for CentOS and Debian) or /run/utmp
(for Ubuntu).
Here is an example from Ubuntu:
Outputroot@ubuntu-22:~# who
root pts/0 2023-01-03 16:23 (198.211.111.194)
In this particular case, we are the sole user of the system.
The last
command tells you the login history of users:
Outputroot@ubuntu-22:~# last
root pts/0 198.211.111.194 Tue Jan 3 16:23 still logged in
reboot system boot 5.19.0-23-generi Thu Dec 8 21:48 still running
wtmp begins Thu Dec 8 21:48:51 2022
You can also use the last
command with a pipe (|
) to add a grep
search for specific users.
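For example, to filter the login history down to a single user named sammy (substitute your own username), you could run:
- last | grep sammy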
To find out when the system was last rebooted, you can run the following command:
- last reboot
The result may look like this in Debian:
Outputroot@debian-11-trim:~# last reboot
reboot system boot 5.10.0-11-amd64 Thu Dec 8 21:49 still running
wtmp begins Thu Dec 8 21:49:39 2022
To see when each user last logged in to the system, use lastlog
:
- lastlog
On a Debian server, you may see output like this:
Outputroot@debian-11-trim:~# lastlog
Username Port From Latest
root pts/0 162.243.188.66 Tue Jan 3 16:23:03 +0000 2023
daemon **Never logged in**
bin **Never logged in**
sys **Never logged in**
sync **Never logged in**
games **Never logged in**
man **Never logged in**
lp **Never logged in**
mail **Never logged in**
news **Never logged in**
uucp **Never logged in**
proxy **Never logged in**
www-data **Never logged in**
backup **Never logged in**
list **Never logged in**
irc **Never logged in**
gnats **Never logged in**
nobody **Never logged in**
_apt **Never logged in**
messagebus **Never logged in**
uuidd **Never logged in**
…
For other text-based log files, you can use cat
, head
or tail
commands to read the contents.
In the example below, you’re trying to look at the last ten lines of the /var/log/messages
file on a Debian server:
- sudo tail /var/log/messages
You’ll receive an output similar to this:
Outputroot@debian-11-trim:~# tail /var/log/messages
Jan 1 00:10:14 debian-11-trim rsyslogd: [origin software="rsyslogd" swVersion="8.2102.0" x-pid="30025" x-info="https://www.rsyslog.com"] rsyslogd was HUPed
Jan 3 16:23:01 debian-11-trim DropletAgent[808]: INFO:2023/01/03 16:23:01 ssh_watcher.go:65: [SSH Watcher] Port knocking detected.
Jan 3 16:23:01 debian-11-trim DropletAgent[808]: INFO:2023/01/03 16:23:01 do_managed_keys_actioner.go:43: [DO-Managed Keys Actioner] Metadata contains 1 ssh keys and 1 dotty keys
Jan 3 16:23:01 debian-11-trim DropletAgent[808]: INFO:2023/01/03 16:23:01 do_managed_keys_actioner.go:49: [DO-Managed Keys Actioner] Attempting to update 1 dotty keys
Jan 3 16:23:01 debian-11-trim DropletAgent[808]: INFO:2023/01/03 16:23:01 do_managed_keys_actioner.go:70: [DO-Managed Keys Actioner] Updating 2 keys
Jan 3 16:23:01 debian-11-trim DropletAgent[808]: INFO:2023/01/03 16:23:01 do_managed_keys_actioner.go:75: [DO-Managed Keys Actioner] Keys updated
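If you want to watch a log file as new entries arrive, the -f flag of tail follows the file in real time (press CTRL+C to stop following):
- sudo tail -f /var/log/messages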
rsyslog
Daemon
At the heart of the logging mechanism is the rsyslog
daemon. This service is responsible for listening to log messages from different parts of a Linux system and routing the message to an appropriate log file in the /var/log
directory. It can also forward log messages to another Linux server.
rsyslog
Configuration File
The rsyslog
daemon gets its configuration information from the rsyslog.conf
file. The file is located under the /etc
directory.
The rsyslog.conf
file tells the rsyslog
daemon where to save its log messages. This instruction comes from a series of two-part lines within the file.
On Ubuntu, these instructions can be found in the /etc/rsyslog.d/50-default.conf
file.
The two-part instruction is made up of a selector and an action. The two parts are separated by white space.
The selector part specifies what the source and importance of the log message is and the action part says what to do with the message.
The selector itself is again divided into two parts separated by a dot (.
). The first part before the dot is called facility (the origin of the message) and the second part after the dot is called priority (the severity of the message).
Together, the facility/priority and the action pair tell rsyslog
what to do when a log message matching the criteria is generated.
You can see an excerpt from a CentOS /etc/rsyslog.conf
file with this command:
- cat /etc/rsyslog.conf
You should see something like this as the output:
Output# rsyslog configuration file
# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# or latest version online at http://www.rsyslog.com/doc/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html
#### GLOBAL DIRECTIVES ####
# Where to place auxiliary files
global(workDirectory="/var/lib/rsyslog")
# Use default timestamp format
module(load="builtin:omfile" Template="RSYSLOG_TraditionalFileFormat")
# Include all config files in /etc/rsyslog.d/
include(file="/etc/rsyslog.d/*.conf" mode="optional")
#### MODULES ####
module(load="imuxsock" # provides support for local system logging (e.g. via logger command)
SysSock.Use="off") # Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
module(load="imjournal" # provides access to the systemd journal
StateFile="imjournal.state") # File to store the position in the journal
#module(load="imklog") # reads kernel messages (the same are read from journald)
#module(load="immark") # provides --MARK-- message capability
# Provides UDP syslog reception
# for parameters see http://www.rsyslog.com/doc/imudp.html
#module(load="imudp") # needs to be done just once
#input(type="imudp" port="514")
…
To understand what this all means, let’s consider the different types of facilities recognized by Linux. Here is a list:
And here is a list of priorities in ascending order:
So now let’s consider the following line from the file:
Output…
# Log cron stuff
cron.* /var/log/cron
…
This just tells the rsyslog
daemon to save all messages coming from the cron
daemon in a file called /var/log/cron
. The asterisk (*
) after the dot means messages of all priorities will be logged. Similarly, if the facility was specified as an asterisk, it would mean all sources.
Facilities and priorities can be related in a number of ways.
In its default form, when there is only one priority specified after the dot, it means all events equal to or greater than that priority will be trapped. So the following directive causes any messages coming from the mail subsystem with a priority of warning or higher to be logged in a specific file under /var/log
:
Outputmail.warn /var/log/mail.warn
This will log every message equal to or greater than the warn priority, but leave everything below it. So messages with err
, crit
, alert
, or emerg
will also be recorded in this file.
Using an equal sign (=
) after the dot will cause only the specified priority to be logged. So if we wanted to trap only the info messages coming from the mail subsystem, the specification would be something like the following:
Outputmail.=info /var/log/mail.info
Again, if we wanted to trap everything from the mail subsystem except info messages, the specification would be something like the following:
Outputmail.!info /var/log/mail.info
or
Outputmail.!=info /var/log/mail.info
In the first case, the mail.info
file will contain everything with a priority lower than info. In the second case, the file will contain all messages except those with the info priority.
Multiple facilities in the same line can be separated by commas.
Multiple sources (facility.priority
) in the same line are separated by a semicolon.
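As an illustration, a hypothetical pair of rules like the following uses both conventions. The first line logs informational messages from every facility except mail and cron to one file, and the second sends critical messages from two comma-separated facilities to another (the file paths here are only examples):
Output*.info;mail.none;cron.none /var/log/example.log
uucp,news.crit /var/log/spooler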
When an action is marked with an asterisk, it means all users. An entry in your CentOS rsyslog.conf
file says exactly that:
Output# Everybody gets emergency messages
*.emerg :omusrmsg:*
Try to see what the rsyslog.conf
file says on your own Linux system. Here is an excerpt from a Debian server as another example:
Output# /etc/rsyslog.conf configuration file for rsyslog
#
# For more information install rsyslog-doc and see
# /usr/share/doc/rsyslog-doc/html/configuration/index.html
#################
#### MODULES ####
#################
module(load="imuxsock") # provides support for local system logging
module(load="imklog") # provides kernel logging support
#module(load="immark") # provides --MARK-- message capability
# provides UDP syslog reception
#module(load="imudp")
#input(type="imudp" port="514")
# provides TCP syslog reception
#module(load="imtcp")
#input(type="imtcp" port="514")
###########################
#### GLOBAL DIRECTIVES ####
###########################
#
# Use traditional timestamp format.
# To enable high precision timestamps, comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
#
# Set the default permissions for all log files.
#
$FileOwner root
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
…
As you can see, Debian saves all security/authorization level messages in /var/log/auth.log
whereas CentOS saves them under /var/log/secure
.
The configurations for rsyslog
can come from other custom files as well. These custom configuration files are usually located in the directory /etc/rsyslog.d
. The rsyslog.conf
file includes these directories using the $IncludeConfig
directive.
Here is what it looks like in Ubuntu:
Output…
#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf
…
Check out the contents under the /etc/rsyslog.d
directory with:
- ls -l /etc/rsyslog.d
You’ll see something similar to this in your terminal:
Output-rw-r--r-- 1 root root 314 Sep 19 2021 20-ufw.conf
-rw-r--r-- 1 root root 255 Sep 30 22:07 21-cloudinit.conf
-rw-r--r-- 1 root root 1124 Nov 16 2021 50-default.conf
The destination for a log message does not necessarily have to be a log file; the message can be sent to a user’s console. In this case, the action field will contain the username. If more than one user needs to receive the message, their usernames are separated by commas. If the message needs to be broadcast to every user, it’s specified by an asterisk (*) in the action field.
Because it is part of a network operating system, the rsyslog
daemon can not only save log messages locally, but also forward them to another Linux server in the network or act as a repository for other systems. The daemon listens for log messages on UDP port 514. The example below will forward kernel critical messages to a server called “texas”.
Outputkern.crit @texas
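A single @ forwards messages over UDP. rsyslog can also forward over TCP, which is indicated with a double @@ and an optional port number, as in this hypothetical variant of the same rule:
Outputkern.crit @@texas:514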
So now it’s time for you to create your own log files. To test this, you will do the following:
Add new lines to the /etc/rsyslog.conf
file
Restart the rsyslog
daemon
Generate test log messages with the logger application
In the following example, you’ll add two new lines in your CentOS Linux system’s rsyslog.conf
file. As you can see with the following command, each of them comes from a facility called local4, and they have different priorities.
- vi /etc/rsyslog.conf
Here’s the output:
Output…
# New lines added for testing log message generation
local4.crit /var/log/local4crit.log
local4.=info /var/log/local4info.log
Next, restart the rsyslog service so that the configuration changes are reloaded:
- /etc/init.d/rsyslog restart
Now call the logger application to generate a test log message:
- logger -p local4.info " This is a info message from local 4"
Looking under the /var/log
directory now shows two new files:
Output…
-rw------- 1 root root 0 Jan 3 11:21 local4crit.log
-rw------- 1 root root 72 Jan 3 11:22 local4info.log
…
The size of the local4info.log
is non-zero. So when you open it, you’ll see the message has been recorded:
- cat /var/log/local4info.log
OutputJan 3 11:22:32 TestLinux root: This is a info message from local 4
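You can run a similar test against the higher-priority rule. Because the crit line uses the default filter form, a message of critical or higher priority should land in local4crit.log:
- logger -p local4.crit "This is a critical message from local 4"
- cat /var/log/local4crit.log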
As more and more information is written to log files, they get bigger and bigger. This obviously poses a potential performance problem. Also, the management of the files becomes cumbersome.
Linux uses the concept of rotating log files instead of purging or deleting them. When a log is rotated, a new log file is created and the old log file is renamed and optionally compressed. A log file can thus have multiple old versions remaining online. These files will go back over a period of time and will represent the backlog. Once a certain number of backlogs have been generated, a new log rotation will cause the oldest log file to be deleted.
The rotation is initiated through the logrotate
utility.
logrotate
Configuration File
Like rsyslog
, logrotate
also depends on a configuration file and the name of this file is logrotate.conf
. It’s located under /etc
.
Here is what you see in the logrotate.conf
file of your Debian server:
- cat /etc/logrotate.conf
Output# see "man logrotate" for details
# global options do not affect preceding include directives
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
#dateext
# uncomment this if you want your log files compressed
#compress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# system-specific logs may also be configured here.
By default, log files are to be rotated weekly with four backlogs remaining online at any one time. When the program runs, a new, empty log file will be generated and optionally the old ones will be compressed.
The only exception is for wtmp
and btmp
files. wtmp
keeps track of system logins and btmp
keeps track of bad login attempts. Both of these log files are rotated every month, and no error is returned if a previous wtmp
or btmp
file cannot be found.
Custom log rotation configurations are kept under the /etc/logrotate.d
directory. These are also included in the logrotate.conf
with the include
directive. The Debian installation shows you the content of this directory:
- ls -l /etc/logrotate.d
Outputtotal 32
-rw-r--r-- 1 root root 120 Jan 30 2021 alternatives
-rw-r--r-- 1 root root 173 Jun 10 2021 apt
-rw-r--r-- 1 root root 130 Oct 14 2019 btmp
-rw-r--r-- 1 root root 160 Oct 19 2021 chrony
-rw-r--r-- 1 root root 112 Jan 30 2021 dpkg
-rw-r--r-- 1 root root 374 Feb 17 2021 rsyslog
-rw-r--r-- 1 root root 235 Feb 19 2021 unattended-upgrades
-rw-r--r-- 1 root root 145 Oct 14 2019 wtmp
The contents of the rsyslog
shows how to recycle a number of log files:
- cat /etc/logrotate.d/rsyslog
Output/var/log/syslog
/var/log/mail.info
/var/log/mail.warn
/var/log/mail.err
/var/log/mail.log
/var/log/daemon.log
/var/log/kern.log
/var/log/auth.log
/var/log/user.log
/var/log/lpr.log
/var/log/cron.log
/var/log/debug
/var/log/messages
{
rotate 4
weekly
missingok
notifempty
compress
delaycompress
sharedscripts
postrotate
/usr/lib/rsyslog/rsyslog-rotate
endscript
}
As you can see, the messages
file, along with the other log files listed, is rotated weekly, with four weeks’ worth of backlogs kept online.
Also worth noting is the postrotate
directive. This specifies the action that happens after the whole log rotation has completed.
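If you were setting up rotation for your own application, a minimal custom file in this directory might look like the following sketch (the application name and log path are hypothetical; adjust them to match your own logs):
/var/log/myapp/*.log {
weekly
rotate 4
compress
missingok
notifempty
}
Placing this in a file such as /etc/logrotate.d/myapp is enough for it to be picked up by the include directive.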
logrotate
can be manually run to recycle one or more files. And to do that, you specify the relevant configuration file as an argument to the command.
To see how this works, here is a partial list of log files under /var/log
directory in a test CentOS server:
- ls -l /var/log
Outputtotal 49324
…
-rw-------. 1 root root 84103 Jan 3 17:20 messages
-rw-------. 1 root root 165534 Dec 10 23:12 messages-20221211
-rw-------. 1 root root 254743 Dec 18 00:00 messages-20221218
-rw-------. 1 root root 217810 Dec 25 00:00 messages-20221225
-rw-------. 1 root root 237726 Dec 31 23:45 messages-20230101
drwx------. 2 root root 6 Mar 2 2022 private
drwxr-xr-x. 2 root root 6 Feb 24 2022 qemu-ga
lrwxrwxrwx. 1 root root 39 Mar 2 2022 README -> ../../usr/share/doc/systemd/README.logs
-rw-------. 1 root root 2514753 Jan 3 17:25 secure
-rw-------. 1 root root 2281107 Dec 10 23:59 secure-20221211
-rw-------. 1 root root 9402839 Dec 17 23:59 secure-20221218
-rw-------. 1 root root 8208657 Dec 25 00:00 secure-20221225
-rw-------. 1 root root 7081010 Dec 31 23:59 secure-20230101
drwxr-x---. 2 sssd sssd 6 Jan 17 2022 sssd
-rw-------. 1 root root 0 Dec 8 22:11 tallylog
-rw-rw-r--. 1 root utmp 2688 Jan 3 16:22 wtmp
The partial contents of the logrotate.conf
file looks like this:
- cat /etc/logrotate.conf
Output# see "man logrotate" for details
# global options do not affect preceding include directives
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
dateext
# uncomment this if you want your log files compressed
#compress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# system-specific logs may also be configured here.
Next you run the logrotate
command:
- logrotate -fv /etc/logrotate.conf
Messages scroll over as new files are generated, errors are encountered, and so on. When the dust settles, you can check for new mail
, secure
, or messages
files:
- ls -l /var/log/mail*
Output-rw------- 1 root root 0 Dec 17 18:34 /var/log/maillog
-rw-------. 1 root root 1830 Dec 16 16:35 /var/log/maillog-20131216
-rw------- 1 root root 359 Dec 17 18:25 /var/log/maillog-20131217
- ls -l /var/log/messages*
Output-rw------- 1 root root 148 Dec 17 18:34 /var/log/messages
-rw-------. 1 root root 180429 Dec 16 16:35 /var/log/messages-20131216
-rw------- 1 root root 30554 Dec 17 18:25 /var/log/messages-20131217
- ls -l /var/log/secure*
Output-rw------- 1 root root 0 Jan 3 12:34 /var/log/secure
-rw-------. 1 root root 4187 Jan 3 16:41 /var/log/secure-20230103
-rw------- 1 root root 591 Jan 3 18:28 /var/log/secure-20230103
As we can see, all three new log files have been created. The maillog
and secure
files are still empty, but the new messages
file already has some data in it.
Hopefully this tutorial has given you some ideas about Linux logging. You can try to look into your own development or test systems to have a better idea. Once you are familiar with the location of the log files and their configuration settings, use that knowledge for supporting your production systems. Then you can create some aliases to point to these files to save some typing time as well.
LVM, or Logical Volume Management, is a storage device management technology that gives users the power to pool and abstract the physical layout of component storage devices for flexible administration. Utilizing the device mapper Linux kernel framework, the current iteration, LVM2, can be used to gather existing storage devices into groups and allocate logical units from the combined space as needed.
The main advantages of LVM are increased abstraction, flexibility, and control. Logical volumes can have meaningful names like “databases” or “root-backup”. Volumes can also be resized dynamically as space requirements change, and migrated between physical devices within the pool on a running system or exported. LVM also offers advanced features like snapshotting, striping, and mirroring.
In this guide, you’ll learn how LVM works and practice basic commands to get up and running quickly on a bare metal machine.
Before diving into LVM administrative commands, it is important to have a basic understanding of how LVM organizes storage devices and some of the terminology it employs.
LVM functions by layering abstractions on top of physical storage devices. The basic layers that LVM uses, starting with the most primitive, are:
Physical Volumes: The LVM utility prefix for physical volumes is pv...
. These are physical block devices or other disk-like devices (for example, other devices created by device mapper, like RAID arrays) that are used by LVM as the raw building material for higher levels of abstraction. Physical volumes are regular storage devices. LVM writes a header to the device to allocate it for management.
Volume Groups: The LVM utility prefix for volume groups is vg...
.
LVM combines physical volumes into storage pools known as volume groups. Volume groups abstract the characteristics of the underlying devices and function as a unified logical device with combined storage capacity of the component physical volumes.
Logical Volumes: The LVM utility prefix for logical volumes is lv...
, generic LVM utilities might begin with lvm...
. A volume group can be sliced up into any number of logical volumes. Logical volumes are functionally equivalent to partitions on a physical disk, but with much more flexibility. Logical volumes are the primary component that users and applications will interact with.
LVM can be used to combine physical volumes into volume groups to unify the storage space available on a system. Afterwards, administrators can segment the volume group into arbitrary logical volumes, which act as flexible partitions.
Each volume within a volume group is segmented into small, fixed-size chunks called extents. The size of the extents is determined by the volume group. All volumes within the group conform to the same extent size.
The extents on a physical volume are called physical extents, while the extents of a logical volume are called logical extents. A logical volume is a mapping that LVM maintains between logical and physical extents. Because of this relationship, the extent size represents the smallest amount of space that can be allocated by LVM.
Extents are behind much of the flexibility and power of LVM. The logical extents that are presented as a unified device by LVM do not have to map to continuous physical extents. LVM can copy and reorganize the physical extents that compose a logical volume without any interruption to users. Logical volumes can also be expanded or shrunk by adding extents to, or removing extents from, the volume.
Now that you are familiar with some of the terminology and structures LVM uses, you can explore some common ways to use LVM. You’ll start with a procedure that will use two physical disks to form four logical volumes.
Begin by scanning the system for block devices that LVM can access and manage. You can do this with the following command:
- sudo lvmdiskscan
The output will return all available block devices that LVM can interact with:
Output /dev/ram0 [ 64.00 MiB]
/dev/sda [ 200.00 GiB]
/dev/ram1 [ 64.00 MiB]
. . .
/dev/ram15 [ 64.00 MiB]
/dev/sdb [ 100.00 GiB]
2 disks
17 partitions
0 LVM physical volume whole disks
0 LVM physical volumes
In this example, notice that there are currently two disks and 17 partitions. The partitions are mostly /dev/ram*
partitions that are used in the system as a RAM disk for performance enhancements. The disks in this example are /dev/sda
, which has 200G of space, and /dev/sdb
, which has 100G.
Warning: Make sure to double-check that the devices you intend to use with LVM do not have any important data already written to them. Using these devices within LVM will overwrite the current contents. If you have important data on your server, make backups before proceeding.
Now that you know the physical devices you want to use, mark them as physical volumes within LVM using the pvcreate
command:
- sudo pvcreate /dev/sda /dev/sdb
Output Physical volume "/dev/sda" successfully created
Physical volume "/dev/sdb" successfully created
This will write an LVM header to the devices to indicate that they are ready to be added to a volume group.
Verify that LVM has registered the physical volumes by running pvs
:
- sudo pvs
Output PV VG Fmt Attr PSize PFree
/dev/sda lvm2 --- 200.00g 200.00g
/dev/sdb lvm2 --- 100.00g 100.00g
Note that both of the devices are present under the PV
column, which stands for physical volume.
Now that you have created physical volumes from your devices, you can create a volume group. Most of the time, you only have a single volume group per system for maximum flexibility in allocation. The following volume group example is named LVMVolGroup
. You can name your volume group whatever you’d like.
To create the volume group and add both of your physical volumes to it, run:
- sudo vgcreate LVMVolGroup /dev/sda /dev/sdb
Output Volume group "LVMVolGroup" successfully created
Checking the pvs
output again will indicate that your physical volumes are now associated with the new volume group:
- sudo pvs
Output PV VG Fmt Attr PSize PFree
/dev/sda LVMVolGroup lvm2 a-- 200.00g 200.00g
/dev/sdb LVMVolGroup lvm2 a-- 100.00g 100.00g
List a short summary of the volume group with vgs
:
- sudo vgs
Output VG #PV #LV #SN Attr VSize VFree
LVMVolGroup 2 0 0 wz--n- 299.99g 299.99g
Your volume group currently has two physical volumes, zero logical volumes, and has the combined capacity of the underlying devices.
Now that you have a volume group available, you can use it as a pool to allocate logical volumes from. Unlike conventional partitioning, when working with logical volumes, you do not need to know the layout of the volume since LVM maps and handles this for you. You only need to supply the size of the volume and a name.
In the following example, you’ll create four separate logical volumes out of your volume group:
To create logical volumes, use the lvcreate
command. You must pass in the volume group to pull from, and can name the logical volume with the -n
option. To specify the size directly, you can use the -L
option. If, instead, you wish to specify the size in terms of the number of extents, you can use the -l
option.
Create the first three logical volumes with the -L
option:
- sudo lvcreate -L 10G -n projects LVMVolGroup
- sudo lvcreate -L 5G -n www LVMVolGroup
- sudo lvcreate -L 20G -n db LVMVolGroup
Output Logical volume "projects" created.
Logical volume "www" created.
Logical volume "db" created.
You can view the logical volumes and their relationship to the volume group by selecting a custom output from the vgs
command:
- sudo vgs -o +lv_size,lv_name
Output VG #PV #LV #SN Attr VSize VFree LSize LV
LVMVolGroup 2 3 0 wz--n- 299.99g 264.99g 10.00g projects
LVMVolGroup 2 3 0 wz--n- 299.99g 264.99g 5.00g www
LVMVolGroup 2 3 0 wz--n- 299.99g 264.99g 20.00g db
In this example, you added the last two columns of the output. It indicates how much space is allocated to your logical volumes.
Now, you can allocate the rest of the space in the volume group to the "workspace"
volume using the -l
flag, which works in extents. You can also provide a percentage and a unit to better communicate your intentions. In this example, allocate the remaining free space, so you can pass in 100%FREE
:
- sudo lvcreate -l 100%FREE -n workspace LVMVolGroup
Output Logical volume "workspace" created.
Checking the volume group information with the custom vgs
command, notice that you have used up all of the available space:
- sudo vgs -o +lv_size,lv_name
Output VG #PV #LV #SN Attr VSize VFree LSize LV
LVMVolGroup 2 4 0 wz--n- 299.99g 0 10.00g projects
LVMVolGroup 2 4 0 wz--n- 299.99g 0 5.00g www
LVMVolGroup 2 4 0 wz--n- 299.99g 0 20.00g db
LVMVolGroup 2 4 0 wz--n- 299.99g 0 264.99g workspace
The workspace
volume has been created and the LVMVolGroup
volume group is completely allocated.
Now that you have logical volumes, you can use them as normal block devices.
The logical devices are available within the /dev
directory like other storage devices. You can access them in two places:
/dev/volume_group_name/logical_volume_name
/dev/mapper/volume_group_name-logical_volume_name
To format your four logical volumes with the Ext4 filesystem, run the following commands:
- sudo mkfs.ext4 /dev/LVMVolGroup/projects
- sudo mkfs.ext4 /dev/LVMVolGroup/www
- sudo mkfs.ext4 /dev/LVMVolGroup/db
- sudo mkfs.ext4 /dev/LVMVolGroup/workspace
Alternatively, you can run the following:
- sudo mkfs.ext4 /dev/mapper/LVMVolGroup-projects
- sudo mkfs.ext4 /dev/mapper/LVMVolGroup-www
- sudo mkfs.ext4 /dev/mapper/LVMVolGroup-db
- sudo mkfs.ext4 /dev/mapper/LVMVolGroup-workspace
After formatting, create mount points:
- sudo mkdir -p /mnt/{projects,www,db,workspace}
Then mount the logical volumes to the appropriate location:
- sudo mount /dev/LVMVolGroup/projects /mnt/projects
- sudo mount /dev/LVMVolGroup/www /mnt/www
- sudo mount /dev/LVMVolGroup/db /mnt/db
- sudo mount /dev/LVMVolGroup/workspace /mnt/workspace
To make the mounts persistent, use your preferred text editor to add them to the /etc/fstab
file. The following example uses nano
:
- sudo nano /etc/fstab
. . .
/dev/LVMVolGroup/projects /mnt/projects ext4 defaults,nofail 0 0
/dev/LVMVolGroup/www /mnt/www ext4 defaults,nofail 0 0
/dev/LVMVolGroup/db /mnt/db ext4 defaults,nofail 0 0
/dev/LVMVolGroup/workspace /mnt/workspace ext4 defaults,nofail 0 0
After editing your file, save and exit. If you’re using nano
, press CTRL+c
, then y
, then ENTER
.
The operating system should now mount the LVM logical volumes automatically at boot.
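If you want to verify the entries without rebooting, you can ask the system to mount everything listed in /etc/fstab and then review the mounted filesystems (an optional sanity check):
- sudo mount -a
- df -h /mnt/projects /mnt/www /mnt/db /mnt/workspace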
You now have an understanding of the various components that LVM manages to create a flexible storage system, and how to get storage devices up and running in an LVM setup.
To learn more about working with LVM, check out our guide to using LVM with Ubuntu 18.04.
Along with JPG and PNG, GIFs are one of the most common image formats that have been circulating since the 1990s. Unlike JPG and PNG, GIFs can contain multiple frames of animation, and the humble “animated GIF” is a ubiquitous building block of the internet.
GIFs are actually an old technology, and they are now less efficient than embedding web videos in many contexts. This is because most web video uses modern video compression technologies and more popular modern codecs than GIF. Codecs are used to encode and decode videos, and most platforms have dedicated hardware to play those codecs. GIFs, on the other hand, are always decoded directly with the CPU. The CPU overhead of a low resolution GIF with only a few frames of animation is negligible, but you could technically create a GIF with a comparable resolution and framerate to a YouTube video, and you would be surprised by how many of your system resources it consumes.
However, GIFs are still useful because they are considered images and not videos. Because of the way the web and other applications work, that means they will render and animate automatically in many more contexts, and do not need to be embedded or linked separately. This can be handy for everything from reaction images to interactive fiction development or other presentation formats.
In this tutorial, you will try out several tools for creating GIFs from video clips, optimizing them for size and quality, and ensuring you can use them in many contexts. You can also combine these tools to integrate into another application stack.
This tutorial will provide installation instructions for a Ubuntu 22.04 server. You can set this up by following our guide on Initial Server Setup with Ubuntu 22.04.
You will also need to have installed the Homebrew package manager to install one of the tools in this tutorial.
In this tutorial, you will need three tools to follow along with the examples: ffmpeg for cutting and manipulating videos, Gifski for creating GIFs, and Gifsicle for optimizing and further manipulating your GIFs. These tools are available on most platforms.
Both ffmpeg
and gifsicle
are available in Ubuntu’s default repositories, and can be installed with the apt
package manager. Begin by updating your package sources with apt update
:
- sudo apt update
Then, install the ffmpeg
and gifsicle
packages with apt install
:
- sudo apt install ffmpeg gifsicle
The last tool, gifski
, is available via Homebrew. Install it with brew install
(this will take a few minutes as Homebrew installs other dependencies):
- brew install gifski
You now have all the necessary tools installed on your machine. Next, you’ll start by acquiring a sample video to create a GIF from.
You can make a GIF from any existing video clip. If you don’t already have one that you want to use, you can use our video on Introducing App Platform by DigitalOcean as a starting point.
You can download a copy of this video from elsewhere on our servers using curl
:
- curl -O https://deved-images.nyc3.cdn.digitaloceanspaces.com/gif-cli/app-platform.webm
curl
is a command line tool for making all kinds of web requests. Using the -O
flag with a URL directs curl
to download a remote file and store it with the same filename locally.
Now that you have a copy of the video locally, you can check some of its metadata. This will be relevant when trying to create a high-quality GIF. When you installed ffmpeg
earlier, it also came with a command called ffprobe
, which can be used to check resolution, framerate, and other information in media files. Review these details by running ffprobe
on the app-platform.webm
video you downloaded:
- ffprobe app-platform.webm
Output…
Input #0, matroska,webm, from 'app-platform.webm':
Metadata:
ENCODER : Lavf59.27.100
Duration: 00:01:59.04, start: -0.007000, bitrate: 1362 kb/s
Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv, bt709), 1920x1080, SAR 1:1 DAR 16:9, 25 fps, 25 tbr, 1k tbn (default)
Metadata:
DURATION : 00:01:59.000000000
Stream #0:1(eng): Audio: opus, 48000 Hz, stereo, fltp (default)
Metadata:
DURATION : 00:01:59.041000000
The output lists any streams contained in the file (usually one video and at least one audio stream), as well as the sample rate, codecs, and other properties of the streams. From the highlighted information in the output, you learn that this video is encoded to 1080p resolution, and is played at 25 frames per second. It is also almost two minutes long, which you may have learned from watching it on YouTube, and is probably too long for one GIF!
This is enough information to move on to the next step where you’ll cut a clip out of this video to make a GIF from it.
You now have a two minute long video file that you know the properties of. The only thing you need to do before cutting it to a GIF is extract a shorter clip from it.
It isn’t very convenient to play a video in a terminal shell, so you can watch along with the video on YouTube to find an ideal place to cut. In this tutorial, you’ll cut from 00:00:09 to 00:00:12, which produces a pretty smooth animation:
You can make that cut by passing the app-platform.webm
video to ffmpeg
:
- ffmpeg -ss 00:00:09 -to 00:00:12 -i app-platform.webm -c copy clip.webm
This command is broken down by:
-ss 00:00:09 -to 00:00:12
is how ffmpeg
understands timecodes. In this case, cutting from a starting position to an ending position in the clip. You can also clip based on duration, or to fractions of a second.
-i app-platform.webm
is the path to your input file, preceded by -i
.
-c copy
is where you would normally specify an output audio or video codec to ffmpeg
. Using copy
in lieu of a codec skips reencoding the video at all, which can be much quicker and avoid any loss in quality, as long as you don’t need to target a different output format. Because you’re making this clip into a GIF later anyway, preserving the native input format is fine, and saves time.
clip.webm
is the path to your output file.
This creates a new three-second video called clip.webm
. You can verify that it exists and check its size using ls -lh
:
- ls -lh clip.webm
Output-rw-r--r-- 1 sammy sammy 600K Nov 16 14:27 clip.webm
It turns out that three seconds of video are only 600K large. This will make a good point of comparison when creating your GIF in the next step.
Note: If you are working on a local machine, you can use an open-source GUI tool called Lossless Cut to perform this same operation. Lossless Cut is particularly useful because it runs the same ffmpeg
commands to quickly extract clips from a video based on timecodes, without needing to re-encode the video. Unlike running ffmpeg
on its own on the command line, Lossless Cut includes a built-in video player and navigation interface.
Now that you have a three-second long video and an upper limit in mind for its frame rate and resolution, you can make a GIF from it. If you were developing an automated conversion pipeline for uploading videos to GIFs, it could be helpful to automatically extract the video resolution and framerate from ffprobe
, to pass directly to these next few commands. In this tutorial, you’ll only be hardcoding some sensible resolution and framerate values for your output.
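As a rough sketch of how such a pipeline might query those values, ffprobe can print just the width, height, and frame rate of the first video stream in machine-readable CSV form — for the sample video, this reflects its 1920x1080, 25 fps stream:
- ffprobe -v error -select_streams v:0 -show_entries stream=width,height,r_frame_rate -of csv=p=0 app-platform.webm
Output1920,1080,25/1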
You have a few options for making a GIF on the command line. You can do it with ffmpeg
by itself, but the syntax can be very hard to change or understand:
- ffmpeg -i clip.webm -filter_complex "[0:v] fps=12,scale=w=540:h=-1,split [a][b];[a] palettegen [p];[b][p] paletteuse" ffmpeg-sample.gif
Note that in this example, you’ve cut both the resolution and the framerate of the clip approximately in half, to 12fps at 540p. This is usually a good starting point for a GIF. Because GIFs are treated like images, they are downloaded in full when a web page loads, and unlike videos, they have no concept of gradually streaming in at a lower resolution. Using CDNs can help optimize the delivery of static site assets like these, but you should still avoid making your GIFs larger than necessary; staying under 3M is usually good practice for images. You can check the file size of your new GIF using ls -lh
:
- ls -lh ffmpeg-sample.gif
Output-rw-r--r-- 1 sammy sammy 2.0M Nov 16 14:28 ffmpeg-sample.gif
You’ve created a 2M GIF this way. While this is good, you can produce an even better result using less complicated syntax with gifski
. Try this gifski
command:
- gifski --fps 12 --width 540 -o gifski-sample.gif clip.webm
Notice how you only need to preserve the important details — framerate and resolution — along with your input and output file names. Check the output file afterward:
- ls -lh gifski-sample.gif
Output-rw-r--r-- 1 sammy sammy 1.3M Nov 16 14:33 gifski-sample.gif
This one is only 1.3M, a significant improvement at the same quality level. At this point, you might be tempted to try making a full-framerate, higher-resolution version of the GIF. Here’s a point of comparison:
- gifski --fps 25 --width 1080 -o gifski-high.gif clip.webm
Check the size of this last test file:
- ls -lh gifski-high.gif
Output-rw-r--r-- 1 sammy sammy 6.9M Nov 16 14:37 gifski-high.gif
6.9M is definitely too large, considering your original video clip was only 0.6M! Remember, GIFs are not particularly efficient compared to modern video codecs. You need to make some minor sacrifices when encoding them down to a reasonable file size. In the final step of this tutorial, you’ll further optimize your GIFs.
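If you want to experiment before settling on final values, you can generate a few candidates in one shell loop and compare their sizes; the frame rates here are arbitrary examples:
- for fps in 8 12 25; do gifski --fps $fps --width 540 -o sample-${fps}fps.gif clip.webm; done
- ls -lh sample-*fps.gif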
Note: At any point in this tutorial, if you are working on a remote server, you can download and inspect your GIFs locally, or even move them into a web-accessible directory so you can view them in a web browser. This will allow you to get a visual reference for the animation quality.
For the last step of this tutorial, you’ll use gifsicle
to refine your GIFs. gifsicle
is to GIFs what ffmpeg
is to audio and video: it can do almost anything, but can be quite complicated as a result. For that reason, you can stick to gifski
for actually creating GIFs, and focus on a few gifsicle
commands to improve or manipulate them.
Start by running a standard gifsicle
optimization command:
- gifsicle -O3 --lossy=80 --colors 256 gifski-sample.gif -o optimized.gif
In this command, you provided the -O3
option for the most aggressive optimization, --lossy=80
to allow up to a 20% loss in image quality from the source input, and --colors 256
to use a maximum of 256 colors in your output image. This will produce a higher quality image than you may expect, with almost no visible loss in image quality, because GIFs do not use modern inter-frame video compression the way that video codecs do, nor do they use JPEG-style image compression techniques by default. Also, in this context, 256 colors refers to any 256 color-palette based on what’s already in your GIF, rather than a restricted palette of only the most common 256 colors, as you may otherwise associate with small color palettes. In general, GIF compression is not very perceptible.
As with the last step, check the size of optimized.gif
:
- ls -lh optimized.gif
Output-rw-r--r-- 1 sammy sammy 935K Nov 16 14:44 optimized.gif
This last step has successfully reduced the file size to only slightly larger than the original video, a very reasonable 935K for an animated image. This is the same optimized GIF that was displayed earlier in this tutorial.
You can refer to the Gifsicle manual to learn about other ways of manipulating your GIFs. For example, you can “explode” the GIF into multiple image files, one for each frame of animation:
- gifsicle --explode optimized.gif
This creates multiple files named optimized.gif.000
, optimized.gif.001
, and so on, for every individual image:
- ls -lh optimized*
Output-rw-r--r-- 1 sammy sammy 935K Nov 16 14:46 optimized.gif
-rw-r--r-- 1 sammy sammy 20K Nov 16 14:54 optimized.gif.000
-rw-r--r-- 1 sammy sammy 17K Nov 16 14:54 optimized.gif.001
-rw-r--r-- 1 sammy sammy 22K Nov 16 14:54 optimized.gif.002
-rw-r--r-- 1 sammy sammy 22K Nov 16 14:54 optimized.gif.003
…
You can also rotate your GIF using the --rotate-90
or --rotate-180
options:
- gifsicle --rotate-90 optimized.gif -o rotated.gif
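gifsicle can also resize GIFs directly. For example, assuming you want a smaller, thumbnail-sized copy, you could scale the optimized GIF down to a 320-pixel width:
- gifsicle --resize-width 320 optimized.gif -o thumbnail.gif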
Despite their inefficiency, GIFs remain useful because they can be used nearly anywhere. When you need a short clip to animate automatically, or you specifically need an image and not a video format, sometimes there’s no substitute for a good old GIF.
In this tutorial, you used multiple tools to create a well-optimized GIF from an existing video. You also reviewed the ecosystem of open-source video manipulation and GIF manipulation tools, as well as some other options for further editing your GIFs. GIFs are an interesting, anachronistic technology. In many ways, they are not modern, but they still have no real replacement in some contexts, and the tools for working with GIFs are robust and well-maintained. With that being said, go forth and use your GIFs wisely.
Next, you may want to learn how to build a media processing API in Node.js.
One way to guard against out-of-memory errors in applications is to add some swap space to your server. In this guide, we will cover how to add a swap file to a Debian 11 server.
Swap is a portion of hard drive storage that has been set aside for the operating system to temporarily store data that it can no longer hold in RAM. This lets you increase the amount of information that your server can keep in its working memory, with some caveats. The swap space on the hard drive will be used mainly when there is no longer sufficient space in RAM to hold in-use application data.
Accessing information written to disk is significantly slower than accessing information kept in RAM, but the operating system will prefer to keep running application data in memory and use swap for older data. Overall, having swap space as a fallback for when your system’s RAM is depleted can be a good safety net against out-of-memory exceptions on systems with non-SSD storage available.
Before we begin, we can check if the system already has some swap space available. It is possible to have multiple swap files or swap partitions, but generally one should be enough.
We can see if the system has any configured swap by typing:
- sudo swapon --show
If you don’t get back any output, this means your system does not have swap space available currently.
You can verify that there is no active swap using the free
utility:
- free -h
Output total used free shared buff/cache available
Mem: 976Mi 75Mi 623Mi 0.0Ki 276Mi 765Mi
Swap: 0B 0B 0B
As you can see in the Swap row of the output, no swap is active on the system.
Before we create our swap file, we’ll check our current disk usage to make sure we have enough space. Do this by entering:
- df -h
OutputFilesystem Size Used Avail Use% Mounted on
udev 472M 0 472M 0% /dev
tmpfs 98M 500K 98M 1% /run
/dev/vda1 25G 1.1G 23G 5% /
tmpfs 489M 0 489M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/vda15 124M 5.9M 118M 5% /boot/efi
tmpfs 98M 0 98M 0% /run/user/0
The device with /
in the Mounted on
column is our disk in this case. We have plenty of space available in this example (only 1.1G used). Your usage will probably be different.
Although there are many opinions about the appropriate size of a swap space, it really depends on your personal preferences and your application requirements. Generally, an amount equal to or double the amount of RAM on your system is a good starting point. Another good rule of thumb is that anything over 4G of swap is probably unnecessary if you are just using it as a RAM fallback.
Now that we know our available hard drive space, we can create a swap file on our filesystem. We will allocate a file of the size that we want called swapfile
in our root (/
) directory.
The best way of creating a swap file is with the fallocate
program. This command instantly creates a file of the specified size.
Since the server in our example has 1G of RAM, we will create a 1G file in this guide. Adjust this to meet the needs of your own server:
- sudo fallocate -l 1G /swapfile
We can verify that the correct amount of space was reserved by typing:
- ls -lh /swapfile
Output-rw-r--r-- 1 root root 1.0G Aug 23 11:14 /swapfile
Our file has been created with the correct amount of space set aside.
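Note: If the fallocate utility is not available, or if the file it creates is not usable by your filesystem, you can create the swap file with dd instead. This equivalent command writes a 1G file in 1M blocks:
- sudo dd if=/dev/zero of=/swapfile bs=1M count=1024 status=progress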
Now that we have a file of the correct size available, we need to actually turn this into swap space.
First, we need to lock down the permissions of the file so that only users with root privileges can read the contents. This prevents normal users from being able to access the file, which would have significant security implications.
Make the file only accessible to root by typing:
- sudo chmod 600 /swapfile
Verify the permissions change by typing:
- ls -lh /swapfile
Output-rw------- 1 root root 1.0G Aug 23 11:14 /swapfile
As you can see, only the root user has the read and write flags enabled.
We can now mark the file as swap space by typing:
- sudo mkswap /swapfile
OutputSetting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=6e965805-2ab9-450f-aed6-577e74089dbf
After marking the file, we can enable the swap file, allowing our system to start using it:
- sudo swapon /swapfile
Verify that the swap is available by typing:
- sudo swapon --show
OutputNAME TYPE SIZE USED PRIO
/swapfile file 1024M 0B -2
We can check the output of the free
utility again to corroborate our findings:
- free -h
Output total used free shared buff/cache available
Mem: 976Mi 85Mi 612Mi 0.0Ki 279Mi 756Mi
Swap: 1.0Gi 0B 1.0Gi
Our swap has been set up successfully and our operating system will begin to use it as necessary.
Our recent changes have enabled the swap file for the current session. However, if we reboot, the server will not retain the swap settings automatically. We can change this by adding the swap file to our /etc/fstab
file.
Back up the /etc/fstab
file in case anything goes wrong:
- sudo cp /etc/fstab /etc/fstab.bak
Add the swap file information to the end of your /etc/fstab
file by typing:
- echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
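You can verify that the entry was appended correctly by printing the last line of the file:
- tail -1 /etc/fstab
Output/swapfile none swap sw 0 0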
Next we’ll review some settings we can update to tune our swap space.
There are a few options that you can configure that will have an impact on your system’s performance when dealing with swap.
The swappiness
parameter configures how often your system swaps data out of RAM to the swap space. This is a value between 0 and 100 that represents a percentage.
With values close to zero, the kernel will not swap data to the disk unless absolutely necessary. Remember, interactions with the swap file are “expensive” in that they take a lot longer than interactions with RAM and they can cause a significant reduction in performance. Telling the system not to rely on the swap much will generally make your system faster.
Values that are closer to 100 will try to put more data into swap in an effort to keep more RAM space free. Depending on your applications’ memory profile or what you are using your server for, this might be better in some cases.
We can see the current swappiness value by typing:
- cat /proc/sys/vm/swappiness
Output60
For a desktop, a swappiness setting of 60 is not a bad value. For a server, you might want to move it closer to 0.
We can set the swappiness to a different value by using the sysctl
command.
For instance, to set the swappiness to 10, we could type:
- sudo sysctl vm.swappiness=10
Outputvm.swappiness = 10
This setting will persist until the next reboot. We can set this value automatically at restart by adding the line to our /etc/sysctl.conf
file:
- sudo nano /etc/sysctl.conf
At the bottom, you can add:
vm.swappiness=10
Save and close the file when you are finished.
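Rather than waiting for a reboot, you can apply everything in /etc/sysctl.conf immediately with the -p flag; the output should include your new value:
- sudo sysctl -p
Outputvm.swappiness = 10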
Another related value that you might want to modify is the vfs_cache_pressure
. This setting configures how much the system will choose to cache inode and dentry information over other data.
Basically, this is access data about the filesystem. This is generally very costly to look up and very frequently requested, so it’s an excellent thing for your system to cache. You can see the current value by querying the proc
filesystem again:
- cat /proc/sys/vm/vfs_cache_pressure
Output100
As it is currently configured, our system removes inode information from the cache too quickly. We can set this to a more conservative setting like 50 by typing:
- sudo sysctl vm.vfs_cache_pressure=50
Outputvm.vfs_cache_pressure = 50
Again, this is only valid for our current session. We can change that by adding it to our configuration file like we did with our swappiness setting:
- sudo nano /etc/sysctl.conf
At the bottom, add the line that specifies your new value:
vm.vfs_cache_pressure=50
Save and close the file when you are finished.
Following the steps in this guide will give you some breathing room in cases that would otherwise lead to out-of-memory exceptions. Swap space can be incredibly useful in avoiding some of these common problems.
If you are running into out-of-memory errors, or if you find that your system is unable to use the applications you need, the best solution is to optimize your application configurations or upgrade your server.
In popular usage, “Linux” often refers to a group of operating system distributions built around the Linux kernel. In the strictest sense, though, Linux refers only to the presence of the kernel itself. To build out a full operating system, Linux distributions often include tooling and libraries from the GNU project and other sources. More developers have been using Linux recently to build and run mobile applications; it has also played a key role in the development of affordable devices such as Chromebooks, which run operating systems built on the Linux kernel. Within cloud computing and server environments in general, Linux is a popular choice for some practical reasons:
Linux also traces its origins to the free and open-source software movement, and as a consequence some developers choose it for a combination of ethical and practical reasons:
To understand Linux’s role within the developer community (and beyond), this article will outline a brief history of Linux by way of Unix, and discuss some popular Linux distributions.
Linux has its roots in Unix and Multics, two projects that shared the goal of developing a robust multi-user operating system.
Unix developed out of the Multics project iteration at the Bell Laboratories’ Computer Sciences Research Center. The developers working on Multics at Bell Labs and elsewhere were interested in building a multi-user operating system with single-level storage, dynamic linking (in which a running process can request that another segment be added to its address space, enabling it to execute that segment’s code), and a hierarchical file system.
Bell Labs stopped funding the Multics project in 1969, but a group of researchers, including Ken Thompson and Dennis Ritchie, continued working with the project’s core principles. In 1972-3 they made the decision to rewrite the system in C, which made Unix uniquely portable: unlike other contemporary operating systems, it could both move from and outlive its hardware.
Research and development at Bell Labs (later AT&T) continued, with Unix System Laboratories developing versions of Unix, in partnership with Sun Microsystems, that would be widely adopted by commercial Unix vendors. Meanwhile, research continued in academic circles, most notably the Computer Systems Research Group at the University of California Berkeley. This group produced the Berkeley Software Distribution (BSD), which inspired a range of operating systems, many of which are still in use today. Two BSD distributions of historical note are NeXTStep, the operating system pioneered by NeXT, which became the basis for macOS, among other products, and MINIX, an educational operating system that formed a comparative basis for Linus Torvalds as he developed Linux.
Unix is oriented around principles of clarity, portability, and simultaneity.
Unix raised important questions for developers, but it also remained proprietary in its earliest iterations. The next chapter of its history is thus the story of how developers worked within and against it to create free and open-source alternatives.
Richard Stallman was a central figure among the developers who were inspired to create non-proprietary alternatives to Unix. While working at MIT’s Artificial Intelligence Laboratory, he initiated work on the GNU project (a recursive acronym for “GNU’s Not Unix!”), eventually leaving the Lab in 1984 so he could distribute GNU components as free software. The GNU kernel, known as GNU HURD, became the focus of the Free Software Foundation (FSF), founded in 1985 and currently headed by Stallman.
Meanwhile, another developer was at work on a free alternative to Unix: Finnish undergraduate Linus Torvalds. After becoming frustrated with licensure for MINIX, Torvalds announced to a MINIX user group on August 25, 1991 that he was developing his own operating system, which resembled MINIX. Though initially developed on MINIX using the GNU C compiler, the Linux kernel quickly became a unique project with a core of developers who released version 1.0 of the kernel with Torvalds in 1994.
Torvalds had been using GNU code, including the GNU C Compiler, with his kernel, and it remains true that many Linux distributions draw on GNU components. Stallman has lobbied to expand the term “Linux” to “GNU/Linux,” which he argues would capture both the role of the GNU project in Linux’s development and the underlying ideals that fostered the GNU project and the Linux kernel. Today, “Linux” is often used to indicate both the presence of the Linux kernel and GNU elements. At the same time, embedded systems on many handheld devices and smartphones often use the Linux kernel with few to no GNU components.
Though the Linux kernel inherited many goals and properties from Unix, it differs from the earlier system in the following ways:
Developers maintain many popular Linux distributions today. Among the longest-standing is Debian, a free and open-source distribution with over 50,000 software packages. Debian inspired another popular distribution, Ubuntu, funded by Canonical Ltd. Ubuntu uses Debian’s deb package format and package management tools, and Ubuntu’s developers push changes back upstream to Debian.
A similar relationship exists between Red Hat, Fedora, and CentOS. Red Hat created a Linux distribution in 1993, and ten years later split its efforts into Red Hat Enterprise Linux and Fedora, a community-based operating system that utilizes the Linux kernel and elements from the GNU Project. Red Hat also has a relationship with the CentOS Project, another popular Linux distribution for web servers. This relationship does not include paid maintenance, however. Like Debian, CentOS is maintained by a community of developers.
In this article, we have covered Linux’s roots in Unix and some of its defining features. If you are interested in learning more about the history of Linux and Unix variations (including FreeBSD), a good step might be our series on FreeBSD. Another option might be to consider our introductory series on getting started with Linux. You can also check out this introduction to the filesystem layout in Linux, this discussion of how to use find
and locate
to search for files on a Linux VPS, or this introduction to regular expressions on the command line.
SSH is the de facto method of connecting to a cloud server. It is durable, and it is extensible — as new encryption standards are developed, they can be used to generate new SSH keys, ensuring that the core protocol remains secure. However, no protocol or software stack is totally foolproof, and SSH being so widely deployed across the internet means that it represents a very predictable attack surface or attack vector through which people can try to gain access.
Any service that is exposed to the network is a potential target in this way. If you review the logs for your SSH service running on any widely trafficked server, you will often see repeated, systematic login attempts that represent brute force attacks by users and bots alike. Although you can make some optimizations to your SSH service to reduce the chance of these attacks succeeding to near-zero, such as disabling password authentication in favor of SSH keys, they can still pose a minor, ongoing liability.
Large-scale production deployments for which this liability is completely unacceptable will usually implement a VPN such as WireGuard in front of their SSH service, making it impossible to connect directly to the default SSH port 22 from the outside internet without additional software abstraction or gateways. These VPN solutions are widely trusted, but they add complexity and can break some automations or other small software hooks.
Prior to or in addition to committing to a full VPN setup, you can implement a tool called Fail2ban. Fail2ban can significantly mitigate brute force attacks by creating rules that automatically alter your firewall configuration to ban specific IPs after a certain number of unsuccessful login attempts. This will allow your server to harden itself against these access attempts without intervention from you.
In this guide, you’ll see how to install and use Fail2ban on a Rocky Linux 9 server.
To complete this guide, you will need:
A Rocky Linux 9 server and a non-root user with sudo privileges. You can learn more about how to set up a user with these privileges in our Initial Server Setup with Rocky Linux 9 guide. You should also have firewalld
running on the server, which is covered in our initial server setup guide.
Optionally, a second server that you can connect to your first server from, which you will use to test getting deliberately banned.
Fail2ban is not available in Rocky’s default software repositories. However, it is available in EPEL, or Extra Packages for Enterprise Linux, a repository commonly used for third-party packages on Red Hat and Rocky Linux. If you have not already added EPEL to your system package sources, you can add the repository using dnf
, like you would install any other package:
- sudo dnf install epel-release -y
The dnf
package manager will now check EPEL in addition to your default package sources when installing new software. Proceed to install Fail2ban:
- sudo dnf install fail2ban -y
Fail2ban will automatically set up a background service after being installed. However, it is disabled by default, because some of its default settings may cause undesired effects. You can verify this by using the systemctl
command:
- systemctl status fail2ban.service
Output○ fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: man:fail2ban(1)
You could enable Fail2ban right away, but first, you’ll review some of its features.
The fail2ban service keeps its configuration files in the /etc/fail2ban
directory. There is a file with defaults called jail.conf
. Go to that directory and print the first 20 lines of that file using head -20
:
- cd /etc/fail2ban
- head -20 jail.conf
Output#
# WARNING: heavily refactored in 0.9.0 release. Please review and
# customize settings for your setup.
#
# Changes: in most of the cases you should not modify this
# file, but provide customizations in jail.local file,
# or separate .conf files under jail.d/ directory, e.g.:
#
# HOW TO ACTIVATE JAILS:
#
# YOU SHOULD NOT MODIFY THIS FILE.
#
# It will probably be overwritten or improved in a distribution update.
#
# Provide customizations in a jail.local file or a jail.d/customisation.local.
# For example to change the default bantime for all jails and to enable the
# ssh-iptables jail the following (uncommented) would appear in the .local file.
# See man 5 jail.conf for details.
#
# [DEFAULT]
As you’ll see, the first several lines of this file are commented out – they begin with #
characters indicating that they are to be read as documentation rather than as settings. As you’ll also see, these comments are directing you not to modify this file directly. Instead, you have two options: either create individual profiles for Fail2ban in multiple files within the jail.d/
directory, or create and collect all of your local settings in a jail.local
file. The jail.conf
file will be periodically updated as Fail2ban itself is updated, and will be used as a source of default settings for which you have not created any overrides.
In this tutorial, you’ll create jail.local
. You can do that by copying jail.conf
:
- sudo cp jail.conf jail.local
Now you can begin making configuration changes. Open the file in vi
or your favorite text editor:
- sudo vi jail.local
While you are scrolling through the file, this tutorial will review some options that you may want to update. The settings located under the [DEFAULT]
section near the top of the file will be applied to all of the services supported by Fail2ban. Elsewhere in the file, there are headers for [sshd]
and for other services, which contain service-specific settings that will apply over top of the defaults.
[DEFAULT]
. . .
bantime = 10m
. . .
The bantime
parameter sets the length of time that a client will be banned when they have failed to authenticate correctly. This is measured in seconds. By default, this is set to 10 minutes.
[DEFAULT]
. . .
findtime = 10m
maxretry = 5
. . .
The next two parameters are findtime
and maxretry
. These work together to establish the conditions under which a client is found to be an illegitimate user that should be banned.
The maxretry
variable sets the number of tries a client has to authenticate within a window of time defined by findtime
, before being banned. With the default settings, the fail2ban service will ban a client that unsuccessfully attempts to log in 5 times within a 10-minute window.
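For example, a stricter policy in your jail.local might combine these settings as follows — these values are only illustrative, banning a client for one hour after 3 failures within 10 minutes:
[DEFAULT]
. . .
bantime = 1h
findtime = 10m
maxretry = 3
. . .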
[DEFAULT]
. . .
destemail = root@localhost
sender = root@<fq-hostname>
mta = sendmail
. . .
If you need to receive email alerts when Fail2ban takes action, you should evaluate the destemail, sender, and mta settings. The destemail parameter sets the email address that should receive ban messages. The sender parameter sets the value of the “From” field in the email, and the mta parameter configures what mail service will be used to send mail. By default, this is sendmail, but you may want to use Postfix or another mail solution.
[DEFAULT]
. . .
action = %(action_)s
. . .
This parameter configures the action that Fail2ban takes when it wants to institute a ban. The value action_
is defined in the file shortly before this parameter. The default action is to update your firewall configuration to reject traffic from the offending host until the ban time elapses.
There are other action_ scripts provided by default which you can substitute for %(action_)s above:
…
# ban & send an e-mail with whois report to the destemail.
action_mw = %(action_)s
%(mta)s-whois[sender="%(sender)s", dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s"]
# ban & send an e-mail with whois report and relevant log lines
# to the destemail.
action_mwl = %(action_)s
%(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
# See the IMPORTANT note in action.d/xarf-login-attack for when to use this action
#
# ban & send a xarf e-mail to abuse contact of IP address and include relevant log lines
# to the destemail.
action_xarf = %(action_)s
xarf-login-attack[service=%(__name__)s, sender="%(sender)s", logpath="%(logpath)s", port="%(port)s"]
# ban IP on CloudFlare & send an e-mail with whois report and relevant log lines
# to the destemail.
action_cf_mwl = cloudflare[cfuser="%(cfemail)s", cftoken="%(cfapikey)s"]
%(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
…
For example, action_mw
takes action and sends an email, action_mwl
takes action, sends an email, and includes logging, and action_cf_mwl
does all of the above in addition to sending an update to the Cloudflare API associated with your account to ban the offender there, too.
Next is the portion of the configuration file that deals with individual services. These are specified by section headers, like [sshd]
.
Each of these sections needs to be enabled individually by adding an enabled = true
line under the header, with their other settings.
[jail_to_enable]
. . .
enabled = true
. . .
For this tutorial, you’ll enable the SSH jail. It should be at the top of the individual jail settings. The default parameters will work otherwise, but you’ll need to add a configuration line that says enabled = true
under the [sshd]
header.
#
# JAILS
#
#
# SSH servers
#
[sshd]
# To use more aggressive sshd modes set filter parameter "mode" in jail.local:
# normal (default), ddos, extra or aggressive (combines all).
# See "tests/files/logs/sshd" or "filter.d/sshd.conf" for usage example and details.
#mode = normal
enabled = true
port = ssh
logpath = %(sshd_log)s
backend = %(sshd_backend)s
Some other settings defined here are the filter, which will be used to decide whether a line in a log indicates a failed authentication, and the logpath, which tells fail2ban where the logs for that particular service are located.
The filter
value is actually a reference to a file located in the /etc/fail2ban/filter.d
directory, with its .conf
extension removed. These files contain regular expressions (a common shorthand for text parsing) that determine whether a line in the log is a failed authentication attempt. We won’t be covering these files in-depth in this guide, because they are fairly complex and the predefined settings match appropriate lines well.
However, you can see what kind of filters are available by looking into that directory:
- ls /etc/fail2ban/filter.d
If you see a file that looks related to a service you are using, you should open it with a text editor. Most of the files are fairly well commented and you should be able to at least tell what type of condition the script was designed to guard against. Most of these filters have appropriate (disabled) sections in the jail.conf
file that we can enable in the jail.local
file if desired.
For instance, imagine that you are serving a website using Nginx and realize that a password-protected portion of your site is getting slammed with login attempts. You can tell fail2ban to use the nginx-http-auth.conf
file to check for this condition within the /var/log/nginx/error.log
file.
This is actually already set up in a section called [nginx-http-auth]
in your /etc/fail2ban/jail.conf
file. You would just need to add the enabled
parameter:
. . .
[nginx-http-auth]
enabled = true
. . .
When you are finished editing, save and close the file. If you are using vi
, use :x
to save and quit. At this point, you can enable your Fail2ban service so that it will run automatically from now on. First, run systemctl enable
:
- sudo systemctl enable fail2ban
Then, start it manually for the first time with systemctl start
:
- sudo systemctl start fail2ban
You can verify that it’s running with systemctl status
:
- sudo systemctl status fail2ban
Output● fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-09-14 20:48:40 UTC; 2s ago
Docs: man:fail2ban(1)
Main PID: 39396 (fail2ban-server)
Tasks: 5 (limit: 1119)
Memory: 12.9M
CPU: 278ms
CGroup: /system.slice/fail2ban.service
└─39396 /usr/bin/python3.6 -s /usr/bin/fail2ban-server -xf start
Sep 14 20:48:40 rocky9-tester systemd[1]: Starting Fail2Ban Service...
Sep 14 20:48:40 rocky9-tester systemd[1]: Started Fail2Ban Service.
Sep 14 20:48:41 rocky9-tester fail2ban-server[39396]: Server ready
In the next step, you’ll demonstrate Fail2ban in action.
From another server, one that won’t need to log into your Fail2ban server in the future, you can test the rules by getting that second server banned. After logging into your second server, try to SSH into the Fail2ban server. You can try to connect using a nonexistent name:
- ssh blah@your_server
Enter random characters into the password prompt. Repeat this a few times. At some point, the error you’re receiving should change from Permission denied
to Connection refused
. This signals that your second server has been banned from the Fail2ban server.
On your Fail2ban server, you can see the new rule by checking the output of fail2ban-client
. fail2ban-client
is an additional command provided by Fail2ban for checking its running configuration.
- sudo fail2ban-client status
OutputStatus
|- Number of jail: 1
`- Jail list: sshd
If you run fail2ban-client status sshd
, you can see the list of IPs that have been banned from SSH:
- sudo fail2ban-client status sshd
OutputStatus for the jail: sshd
|- Filter
| |- Currently failed: 2
| |- Total failed: 7
| `- Journal matches: _SYSTEMD_UNIT=sshd.service + _COMM=sshd
`- Actions
|- Currently banned: 1
|- Total banned: 1
`- Banned IP list: 134.209.165.184
The Banned IP list
contents should reflect the IP address of your second server.
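If you banned your second server deliberately and want to restore its access before the ban time elapses, you can lift the ban manually with fail2ban-client, substituting the banned IP address from your own output:
- sudo fail2ban-client set sshd unbanip 134.209.165.184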
You should now be able to configure some banning policies for your services. Fail2ban is a useful way to protect any kind of service that uses authentication. If you want to learn more about how fail2ban works, you can check out our tutorial on how fail2ban rules and files work.
For information about how to use fail2ban to protect other services, you can read about How To Protect an Nginx Server with Fail2Ban.
Apt is a command line frontend for the dpkg packaging system and is the preferred way of managing software from the command line for many distributions. It is the main package management system in Debian and Debian-based Linux distributions like Ubuntu.
While a tool called “dpkg” forms the underlying packaging layer, apt
and apt-cache
provide user-friendly interfaces and implement dependency handling. This allows users to manage large amounts of software efficiently.
In this guide, we will discuss the basic usage of apt
and apt-cache
and how they can manage your software. We will be practicing on an Ubuntu 22.04 cloud server, but the same steps and techniques should apply on any other Ubuntu or Debian-based distribution.
Apt operates on a database of known, available software. It performs installations, package searches, and many other operations by referencing this database.
Because of this, before beginning any packaging operations with apt
, we need to ensure that our local copy of the database is up-to-date.
Update the local database with apt update
. Apt requires administrative privileges for most operations:
- sudo apt update
You will see a list of servers we are retrieving information from. After this, your database should be up-to-date.
You can upgrade the packages on your system by using apt upgrade
. You will be prompted to confirm the upgrades, and restart any updated system services:
- sudo apt upgrade
If you know the name of a package you need to install, you can install it by using apt install
:
- sudo apt install package1 package2 …
You can see that it is possible to install multiple packages at one time, which is useful for acquiring all of the necessary software for a project in one step.
Apt installs not only the requested software, but also any software needed to install or run it.
You can install a program called sl
by typing:
- sudo apt install sl
After that, you’ll be able to run sl
on the command line.
To remove a package from your system, run apt remove
:
- sudo apt remove package_name
This command removes the package, but retains any configuration files in case you install the package again later. This way, your settings will remain intact, even though the program is not installed.
If you need to clean out the configuration files as well as the program, use apt purge
:
- sudo apt purge package_name
This uninstalls the package and removes any configuration files associated with the package.
To remove any packages that were installed automatically to support another program and are no longer needed, type the following command:
- sudo apt autoremove
You can also specify a package name after the autoremove
command to uninstall a package and its dependencies.
There are a number of additional options that can be specified by the use of flags. We will go over some common ones.
To do a “dry run” of a procedure in order to get an idea of what an action will do, you can pass the -s
flag for “simulate”:
- sudo apt install -s htop
OutputReading package lists... Done
Building dependency tree... Done
Reading state information... Done
Suggested packages:
lm-sensors
The following NEW packages will be installed:
htop
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Inst htop (3.0.5-7build2 Ubuntu:22.04/jammy [amd64])
Conf htop (3.0.5-7build2 Ubuntu:22.04/jammy [amd64])
In place of actual actions, you can see an Inst
and Conf
section specifying how the package would be installed and configured if the -s flag were removed.
If you do not want to be prompted to confirm your choices, you can also pass the -y
flag to automatically assume “yes” to questions.
- sudo apt remove -y htop
If you would like to download a package, but not install it, you can issue the following command:
- sudo apt install -d packagename
The files will be retained in /var/cache/apt/archives
.
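You can list the contents of that directory to confirm that the package file was downloaded:
- ls /var/cache/apt/archives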
If you would like to suppress output, you can pass the -qq
flag to the command:
- sudo apt remove -qq packagename
The apt packaging tool is actually a suite of related, complementary tools that are used to manage your system software.
While apt
is used to upgrade, install, and remove packages, apt-cache
is used to query the package database for package information.
You can use apt-cache search
to search for a package that suits your needs. Note that apt-cache doesn’t usually require administrative privileges:
- apt-cache search what_you_are_looking_for
For instance, to find htop
, an improved version of the top
system monitor, you can use:
- apt-cache search htop
Outputhtop - interactive processes viewer
aha - ANSI color to HTML converter
bashtop - Resource monitor that shows usage and stats
bpytop - Resource monitor that shows usage and stats
btop - Modern and colorful command line resource monitor that shows usage and stats
libauthen-oath-perl - Perl module for OATH One Time Passwords
pftools - build and search protein and DNA generalized profiles
You can also search for more generic terms. In this example, we’ll look for mp3 conversion software:
- apt-cache search mp3 convert
Outputabcde - A Better CD Encoder
cue2toc - converts CUE files to cdrdao's TOC format
dir2ogg - audio file converter into ogg-vorbis format
easytag - GTK+ editor for audio file tags
ebook2cw - convert ebooks to Morse MP3s/OGGs
ebook2cwgui - GUI for ebook2cw
ffcvt - ffmpeg convert wrapper tool
. . .
To view information about a package, including an extended description, use the following syntax:
- apt-cache show package_name
This will also provide the size of the download and the dependencies needed for the package.
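For example, to read the full record for the htop package you searched for earlier:
- apt-cache show htop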
To see if a package is installed and to check which repository it belongs to, you can use apt-cache policy
:
- apt-cache policy package_name
You should now know enough about apt and apt-cache to manage most of the software on your server.
While it is sometimes necessary to go beyond these tools and the software available in the repositories, most software operations can be managed by these tools.
Next, you can read about Ubuntu and Debian package management in detail.
Optical Character Recognition, or OCR, is primarily used to turn the text from scanned images into selectable, copyable, encoded, embedded text. Many modern desktop and mobile applications and scanner software stacks have some OCR functionality built in, and most circulating PDFs have text embedded. However, you may still encounter documents or images that contain significant amounts of non-embedded text which cannot be automatically extracted.
In this case, you can use a pipeline of open source tools to automatically perform OCR. This is especially useful if you are ingesting documents or images to a web application that needs to extract text, or if you are working with a large corpus of documents that need to have their full text indexed.
This tutorial will cover setting up an OCR pipeline using Ghostscript, Tesseract, and PDFtk. You will also review other tools that can be used instead of or in addition to this baseline functionality.
These tools are available on most platforms. This tutorial will provide installation instructions for an Ubuntu 22.04 server, following our guide to Initial Server Setup with Ubuntu 22.04.
OCR can be performed on both PDFs (which contain, and are sometimes rendered as, images) and standalone images. Working with PDFs adds some extra steps, which you can skip if you are working with images by themselves.
You will need three tools for the end-to-end pipeline: Ghostscript, which handles all kinds of PDF-to-image conversion and vice versa (it was originally created as an interpreter for PostScript, the predecessor technology to PDF); Tesseract, an open source OCR engine which, like Ghostscript, has been developed continuously since the 1980s; and PDFtk, a smaller utility for slicing or reconstructing PDFs from individual pages.
All three applications are available in Ubuntu’s default repositories, and can be installed with the apt
package manager. Update your package sources with apt update
and then use apt install
to install them:
- sudo apt update
- sudo apt install pdftk ghostscript tesseract-ocr x11-utils
You should now have three new commands present, one for each application, which you can verify by using which
:
- which pdftk
Output/usr/bin/pdftk
- which gs
Output/usr/bin/gs
- which tesseract
Output/usr/bin/tesseract
You’ll use these commands to perform OCR in the next step.
If you don’t already have a PDF that you want to perform OCR on, you can follow along with this tutorial by downloading this sample PDF, which was scanned without any embedded text. To download the PDF onto your server, you can use curl
with the -O
flag to save it to your current directory under the same file name:
- curl -O https://deved-images.nyc3.cdn.digitaloceanspaces.com/server-ocr/OCR-sample-paper.pdf
If you’re working with one or more PDFs, you’ll need to convert them to individual images before they can be used as OCR sources. This can be done using a Ghostscript command. You’ll need to include additional parameters to maintain consistency around DPI, color space, and dimensions. First, create a working output
directory for the files created during this process, then run gs
:
- mkdir output
- gs -o output/%05d.png -sDEVICE=png16m -r300 -dPDFFitPage=true OCR-sample-paper.pdf
This gs
command specifies the output path before the rest of the command, using the -o
flag. %05d is a printf-style format pattern that Ghostscript understands natively — in this case, it means to name the output PNG files from the input PDF using automatically incremented, 5-digit numbers. You may see this used in other, older, command line applications. After adding some PNG formatting syntax and a DPI of -r300
, provide the path to OCR-sample-paper.pdf
or your chosen input file.
Ghostscript will output every page in the PDF individually:
OutputProcessing pages 1 through 14.
Page 1
Page 2
Page 3
Page 4
Page 5
…
After it finishes, you can verify the contents of the output
directory.
- ls output
Output00001.png 00003.png 00005.png 00007.png 00009.png 00011.png 00013.png
00002.png 00004.png 00006.png 00008.png 00010.png 00012.png 00014.png
Next, you’ll use a shell for
loop around a tesseract
command to turn the images you created back into individual PDF pages, this time with embedded text. Shell loops behave similarly to loops in other programming languages, and you can format them all into a single command by separating each part with semicolons, ending with done
:
- for png in $(ls output); do tesseract -l eng output/$png output/$(echo $png | sed -e "s/\.png//g") pdf; done
Some text will be output to your shell while the command is looping:
OutputTesseract Open Source OCR Engine v4.1.1 with Leptonica
Tesseract Open Source OCR Engine v4.1.1 with Leptonica
Tesseract Open Source OCR Engine v4.1.1 with Leptonica
Tesseract Open Source OCR Engine v4.1.1 with Leptonica
...
The Tesseract syntax itself is this component: tesseract -l language input_filename output_base_filename [pdf]
. If the -l language
component is omitted, Tesseract defaults to the English language model, and if pdf
is omitted, Tesseract will output identified text separately from the input image, rather than as a PDF. The additional sed syntax added to this command ensures that the correct paths are provided to Tesseract, and the .png
file extensions are removed when renaming the output files to .pdf
.
Note: On Ubuntu, Tesseract does not install every language model by default. If you need to perform non-English OCR, you should install the tesseract-ocr-all package with sudo apt install tesseract-ocr-all
.
You can find more Tesseract command line examples in the official documentation.
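For example, if you only want to preview the text Tesseract recognizes on a single page, without producing a PDF, you can name stdout as the output to print the text to your terminal:
- tesseract -l eng output/00001.png stdout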
Check the output directory again after running Tesseract:
- ls output
You’ll see all your newly created PDF pages.
Output00001.pdf 00003.pdf 00005.pdf 00007.pdf 00009.pdf 00011.pdf 00013.pdf
00001.png 00003.png 00005.png 00007.png 00009.png 00011.png 00013.png
00002.pdf 00004.pdf 00006.pdf 00008.pdf 00010.pdf 00012.pdf 00014.pdf
00002.png 00004.png 00006.png 00008.png 00010.png 00012.png 00014.png
If you only need an output image, you can skip ahead to the final steps of this tutorial to learn more about bulk text extraction options. If you’re using a PDF, you’ll reconstruct and finalize it in the next step.
If you used a PDF as input in the last step, you’ll now need to use PDFtk and Ghostscript again to put it back together from the individual pages produced by Tesseract. Because they are numbered sequentially, you can use shell syntax to pass an ordered list of files to the pdftk cat
command, to concatenate them:
- pdftk output/*.pdf cat output joined.pdf
You now have a single PDF reconstructed from your Tesseract output named joined.pdf
. The only remaining step is to reformat the PDF using Ghostscript. This is important because Tesseract isn’t always faithful to precise PDF dimensions. Your new PDF is currently much larger than your input because it hasn’t been optimized, and Ghostscript is a much more powerful tool for re-rendering PDFs to exact specifications. Run one last gs
command on your joined.pdf
:
- gs -sDEVICE=pdfwrite -sPAPERSIZE=letter -dFIXEDMEDIA -dPDFFitPage -o final.pdf joined.pdf
You may receive some warning output from this command about PDF spec compliance, which is normal. Ghostscript is much more exacting about PDF standards than other tools, and most PDFs will render in most viewers.
The parameters -sDEVICE=pdfwrite -sPAPERSIZE=letter -dFIXEDMEDIA -dPDFFitPage
are all used to enforce PDF dimensions. You may need to change sPAPERSIZE=letter
if you are working with a different page format. The -o final.pdf
filename provided to the gs
command will be the name of your finished output.
To test that your OCR was successful, you can open the PDF locally in a desktop application, or you can use a command line application like pdftotext
to dump out the now-embedded text from the document.
You can install pdftotext
on Ubuntu via a package called poppler-utils
, which contains several tools for working with PDFs on the command line:
- sudo apt install poppler-utils
Next, run pdftotext
on your new PDF:
- pdftotext final.pdf
A new file, final.txt
, will be created. Preview this file’s contents with a tool like head
:
- head final.txt
OutputPakistan Journal of Applied Economics (1983) vol. II, no. 2 (167—180)
THE MEASUREMENT OF FARM-SPECIFIC
TECHNICAL EFFICIENCY
K. P. KALIRAJAN and J. C, FLINN*
Measures of technical efficiency were estimated using a stochastic translog
production frontier for a sample of rainfed rice farmers in Bicol, Philippines.
These estimates were farm specific as opposed to being based on deviations
from an average sample efficiency. A wide variation in the level of technical
You should receive a stream of text from your input file. It may be out of order or include some odd formatting characters, but that’s natural when dumping out the text all at once – the important thing is that the document now contains embedded text. At this point, you can remove the output
directory you created that contains the work-in-progress images and PDF pages. Those images are no longer needed.
You now have an end-to-end PDF OCR pipeline using three tools and four commands. These can be combined into a standalone script, integrated into another application, or run interactively as needed. This is a complete solution for individual PDF documents. In the next steps, you’ll review some additional options for formatting data tables and for bulk text extraction.
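As a minimal sketch of such a script — assuming the same flags, page size, and file layout used in this tutorial — you could wrap the four commands in a short shell script that takes an input PDF as its first argument:
#!/bin/bash
# Hypothetical wrapper: OCR a single PDF passed as the first argument.
set -e
mkdir -p output
# Render each PDF page to a 300 DPI PNG
gs -o output/%05d.png -sDEVICE=png16m -r300 -dPDFFitPage=true "$1"
# OCR each page image into a single-page PDF with embedded text
for png in output/*.png; do tesseract -l eng "$png" "${png%.png}" pdf; done
# Reassemble the pages and normalize the final PDF dimensions
pdftk output/*.pdf cat output joined.pdf
gs -sDEVICE=pdfwrite -sPAPERSIZE=letter -dFIXEDMEDIA -dPDFFitPage -o final.pdf joined.pdf
rm -r output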
After performing OCR on images or PDFs, you can optionally also extract any tabular or spreadsheet-formatted data into a CSV file. This can be especially useful when working with older data sources or scientific papers.
There are two tools which provide this functionality, and they both perform similarly: Tabula, written in Java, and Camelot, written in Python.
Tabula can be installed as a snap package on Ubuntu by using snap install
:
- sudo snap install tabula
If you visually inspect the sample PDF used in this tutorial, you will find a table in the middle of page 6.
Therefore, you’ll run tabula
on your PDF by specifying that you want to extract a table from -p 6
in your final.pdf
, and redirect the output to a new file called test.csv
:
- tabula -p 6 final.pdf > test.csv
Check the quality of the table detection in test.csv
. You should now be able to use it as input to a spreadsheet program like Excel, or to another data analysis script.
Camelot is a Python library, and requires you to have installed Python and pip
, the Python package manager. If you haven’t already installed Python, you can refer to the first step of How To Install Python 3 and Set Up a Programming Environment on an Ubuntu 22.04 Server.
Next, install Camelot with pip install
, along with its opencv
dependency:
- sudo pip install camelot-py opencv-python-headless ghostscript
After that, you can run camelot
on your PDF, again specifying -p 6
, the output path and file type, and the input final.pdf
:
- camelot -p 6 -f csv -o test.csv stream final.pdf
You can refer to the Camelot documentation to fine-tune the extraction if needed.
In the final, optional step of this tutorial, you’ll review some other OCR solutions.
While Tesseract is the longest-developed open source OCR tool and provides support for the broadest set of output formats, a few other options also exist for performing server-side OCR. EasyOCR is a newer open source OCR engine that is more actively developed and can provide faster or more accurate results by running on a GPU. However, EasyOCR does not support PDF output, making it challenging to reconstruct input documents, and is primarily useful for outputting large amounts of raw text.
EasyOCR is a Python library, and requires you to have installed Python and pip
, the Python package manager. If you haven’t already installed Python, you can refer to the first step of How To Install Python 3 and Set Up a Programming Environment on an Ubuntu 22.04 Server.
Next, install EasyOCR with pip install
:
- sudo pip install easyocr
After installing EasyOCR, you can use it as a library within a Python script, or you can call it directly from the command line using the easyocr
command. A sample EasyOCR command looks like this:
- easyocr -l ch_sim en -f image.jpg --detail=1 --gpu=True
EasyOCR supports loading multiple language models at once for performing multilingual OCR. You can specify multiple languages following the -l
flag, in this case ch_sim
for simplified Chinese and en
for English. -f image.jpg
is the path to your input file. --detail=1
will provide bounding box coordinates along with your output if you need to reference the location of the extracted text in your file. You can also omit this information by running with --detail=0
.
The --gpu=True
flag is optional, and will try to use a CUDA code path for more efficient extraction if a GPU environment has been configured.
In this tutorial, you created an OCR pipeline using several mature open-source tools that can be implemented into other application stacks or exposed via a web service. You also reviewed some of the syntax and options available to these tools for fine-tuning, and considered some other OCR options, for extracting CSV tables and for running large-scale bulk extraction of text.
OCR is a well-understood and widely used technology: you know when you need it. Despite this, turnkey OCR implementations are often limited to paid desktop software. Being able to deploy OCR tools wherever you need them can be very useful. Next, you may want to read An Introduction to Machine Learning to review some related topic areas.
This checkpoint is intended to help you assess what you have learned from our introductory articles to Cloud Servers, where we introduced cloud computing, cloud servers, and the Linux command line. You can use this checkpoint to assess your knowledge on these topics, review key terms and commands, and find resources for continued learning.
Cloud computing typically uses virtualization for hosting needs. This abstraction from physical hardware (often on-premises) means that projects can be built and maintained at scale without making large financial and time investments to maintain your own hardware. Once you have learned the fundamentals of cloud servers, you will be ready to begin exploring other key concepts and technologies of cloud computing, such as databases, containers, web servers, and security.
In this checkpoint, you’ll find three sections that synthesize the central topics across the articles in the Cloud Servers section: defining the cloud and its delivery models, using the Linux command line, and using SSH with remote servers. You can test your knowledge with interactive components. At the end of this checkpoint, you will find opportunities for continued learning and Linux server management.
Cloud computing is the delivery of computing resources as a service, meaning that the resources are owned and managed by the cloud provider rather than the end user. You’ve likely used the cloud to watch streaming media, store your personal data such as photos and files, or even create your own web apps or other projects.
In A General Introduction to Cloud Computing, you learned about cloud computing as it is defined by the National Institute of Standards and Technology (NIST), a non-regulatory agency of the United States Department of Commerce.
Check Yourself
What are the five essential characteristics of cloud computing?
NIST defines the following as the five key principles of cloud computing:
These characteristics are relevant across all types of cloud environments: public cloud, private cloud, hybrid cloud, and multicloud.
Through each of the articles, you have developed a vocabulary with common terms related to cloud computing.
Check Yourself
Define each of the following terms, then use the dropdown feature to check your work.
A server is the computer hardware or software that provides services to other computers, known as clients.
See the glossary entry for server for a longer definition.
A virtual private server, or VPS, is a virtual server that emulates a real computer with its own operating system. The software on a virtual machine runs on resources allocated by a host and is abstracted from the computer’s physical hardware.
They are sometimes called virtual machines, or VMs. When hosted in the cloud, they are sometimes called cloud servers or remote servers.
Virtualization is a process that abstracts computer environments from physical hardware, making it possible to host in the cloud. This process facilitates the relationship between the virtual servers that apps and websites can be hosted on and the physical hosts that manage the virtual servers.
A hypervisor is software that deploys, manages, and grants resources to virtual servers under its control. The physical hardware that a hypervisor is running on is referred to as a host. The hypervisor shares the host’s resources among various guest VMs.
An Introduction to Cloud Hosting specifies four common hypervisors available today. Can you name them?
See the glossary entry for hypervisor for a longer definition.
The kernel is the foundation of a computer’s operating system. The kernel facilitates memory allocation as well as device and resource management.
See the glossary entry for kernel for a longer definition.
You can also identify how cloud resources are provided through delivery models such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
IaaS provides complete control over your infrastructure without maintaining your own hardware. Benefits include flexible hosting, scaling with demand, and building across multiple datacenters.
With PaaS, you use deployment platforms on your cloud provider’s back-end infrastructure. Benefits include predictable scaling, pre-configured runtime environments, and a simplified experience with API integrations.
SaaS provides software applications in cloud environments. You can access the software but not its production, maintenance, or modification. As a result, users can use the platform directly without needing to install or maintain software on their device.
Check Yourself
Match the following products to their delivery model:
| Delivery Model | Product |
| --- | --- |
| IaaS | Managed Kubernetes on DigitalOcean, Managed Databases (like MongoDB and MySQL), Microsoft Azure, and more |
| PaaS | AWS Elastic Beanstalk, DigitalOcean App Platform, Heroku, and more |
| SaaS | Adobe Creative Cloud, Google Workspace, Netflix, Slack, Spotify, Zoom, and more |
You can now explain what the cloud is and describe why it has become ubiquitous in the modern day. You know the benefits and considerations necessary when building projects in the cloud, as well as which types of projects are available in which cloud delivery model. To build projects in the cloud, many developers use Linux-based virtual machines.
In A Linux Command Line Primer, you began the journey to love your terminal. With the initial server setup, you configured a Linux environment with SSH access, a firewall managed by ufw
, a package manager, and a non-root user with sudo
privileges.
You can now navigate the command line interface (CLI) on both your local machine and a remote server using commands such as:
- cat to review file contents.
- cd to move between directories.
- curl to transfer data using URL syntax.
- echo to display strings of text.
- ls to list files.
- mkdir to make new folders.
- mv to move or rename files.
- nano to create and edit text files.
- pwd to review your current working directory path.
- rm and rmdir to delete files and folders.
- sudo to run commands as a superuser.
- usermod to change user permissions.
And options (also known as flags or switches) like:
- -a to list all files, including hidden files.
- -h or --human-readable to print memory sizes in a human-readable format.
- -l to print extra details about files.
- -o to output text to a file.
- -r to run commands recursively.
You can review all the commands you have run in your terminal with the history command. You can also use the man command in Linux to display user manuals or the --help flag to review additional information about any command.
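For example, either of the following will show you more about the ls command and its options:
- man ls
- ls --help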
Once you’ve chosen your Linux distribution, you can explore tutorials in the Linux Basics series, manage processes on your Linux server, and otherwise monitor your server resources. If you are running a Linux-based remote server, you will use SSH to access and perform operations on the remote server from your local terminal.
The Secure Shell protocol (SSH) allows you to log in to a remote server and run commands securely over an unsecured network.
In SSH Essentials, you generated an SSH key pair to connect to your remote server. The SSH key provides a secure access credential when using SSH login. Your public keys are stored in the authorized_keys file, usually inside the .ssh directory in each user’s home directory.
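As a quick refresher, a typical key setup generates a key pair on your local machine and then copies the public key to the server; the username and address here are placeholders:
- ssh-keygen
- ssh-copy-id sammy@your_server_ip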
Alongside ssh
and ssh-keygen
, you can use the rsync
(remote sync) and scp
(secure copy program) commands for file transfer between systems. During your initial server setup, you used rsync
to copy files between users, but you can also use it to copy files between systems.
Check Yourself
What is the difference between scp
and rsync
?
Both scp
and rsync
copy files: scp
between hosts on a network using SSH; rsync
on the local host or bidirectionally between the local host and a remote host. Both encrypt file transfer when used with SSH, and rsync
is known for its delta-transfer algorithm, which results in optimized transfer speed.
With scp
, you select which files and directories are to be transferred, whereas rsync
transfers all files and directories initially and then only the changed files and directories. You can use additional options with rsync
, such as the --archive
, --verbose
, and --compress
flags.
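For instance, both of the following copy a local directory to a remote server; the paths and address here are placeholders, and the rsync flags are shorthand for the archive, verbose, and compress options mentioned above:
- scp -r ~/project sammy@your_server_ip:~/
- rsync -avz ~/project sammy@your_server_ip:~/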
The Secure File Transfer Protocol (sftp
) is another option for file transfer, but it is not used frequently these days as scp
and rsync
are more flexible.
You can host a cloud server on a DigitalOcean Droplet. Once you’re familiar with the fundamentals of Linux, you can try securing your VPS and setting up Fail2ban to protect your server. You’ll also want to decide which package management system to use.
If you’d like to develop your Linux skills further, follow these tutorials:
If you run into issues as you work, you can also troubleshoot common site issues.
With your newfound cloud knowledge, you are well-equipped to continue your cloud journey with web servers, databases, containers, and security.
There are many tools available to manage storage in Linux. However, only a handful are used for day-to-day maintenance and administration. In this guide, you will review some of the most commonly used utilities for managing mount points, storage devices, and filesystems.
This guide will not cover how to prepare storage devices for their initial use on a Linux system. This guide on partitioning and formatting block devices in Linux will help you prepare your raw storage device if you have not set up your storage yet.
For more information about some of the terminology used to discuss storage, try reading this article on storage terminology.
Often, the most important information you will need about the storage on your system is the capacity and current utilization of the connected storage devices.
To check how much storage space is available in total and to see the current utilization of your drives, use the df utility. By default, this outputs the measurements in 1K blocks, which isn’t always useful. Add the -h
flag to output in human-readable units:
- df -h
OutputFilesystem Size Used Avail Use% Mounted on
udev 238M 0 238M 0% /dev
tmpfs 49M 624K 49M 2% /run
/dev/vda1 20G 1.1G 18G 6% /
tmpfs 245M 0 245M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 245M 0 245M 0% /sys/fs/cgroup
tmpfs 49M 0 49M 0% /run/user/1000
/dev/sda1 99G 60M 94G 1% /mnt/data
The /dev/vda1
partition, which is mounted at /
, is 6% full and has 18G of available space, while the /dev/sda1
partition, which is mounted at /mnt/data
is empty and has 94G of available space. The other entries use tmpfs or devtmpfs filesystems, which are virtual filesystems backed by volatile memory rather than persistent storage. You can exclude these entries by typing:
- df -h -x tmpfs -x devtmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 20G 1.1G 18G 6% /
/dev/sda1 99G 60M 94G 1% /mnt/data
This output offers a more focused display of current disk utilization by removing some pseudo-devices and special devices.
A block device is a generic term for a storage device that reads or writes in blocks of a specific size. This term applies to almost every type of non-volatile storage, including hard disk drives (HDDs), solid state drives (SSDs), and so on. The block device is the physical device where the filesystem is written. The filesystem, in turn, dictates how data and files are stored.
The lsblk utility can be used to display information about block devices. The specific capabilities of the utility depend on the version installed, but in general, the lsblk
command can be used to display information about the drive itself, as well as the partitioning information and the filesystem that has been written to it.
Without any arguments, lsblk
will show device names, the major and minor numbers associated with a device (used by the Linux kernel to keep track of drivers and devices), whether the drive is removable, its size, whether it is mounted read-only, its type (disk or partition), and its mount point. Some systems require sudo
for this to display correctly:
- sudo lsblk
OutputNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
vda 253:0 0 20G 0 disk
└─vda1 253:1 0 20G 0 part /
Of the output displayed, the most important parts will usually be the name, which refers to the device name under /dev
, the size, the type, and the mountpoint. Here, you can see that you have one disk (/dev/vda
) with a single partition (/dev/vda1
) being used as the /
partition and another disk (/dev/sda
) that has not been partitioned.
To get information more relevant to disk and partition management, you can pass the --fs
flag on some versions:
- sudo lsblk --fs
OutputNAME FSTYPE LABEL UUID MOUNTPOINT
sda
vda
└─vda1 ext4 DOROOT c154916c-06ea-4268-819d-c0e36750c1cd /
If the --fs
flag is unavailable on your system, you can manually replicate the output by using the -o
flag to request specific output. You can use -o NAME,FSTYPE,LABEL,UUID,MOUNTPOINT
to get this same information.
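For example:
- sudo lsblk -o NAME,FSTYPE,LABEL,UUID,MOUNTPOINT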
To get information about the disk topology, type:
- sudo lsblk -t
OutputNAME ALIGNMENT MIN-IO OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE RA WSAME
sda 0 512 0 512 512 1 deadline 128 128 2G
vda 0 512 0 512 512 1 128 128 0B
└─vda1 0 512 0 512 512 1 128 128 0B
There are many other shortcuts available to display related traits about your disks and partitions. You can output all available columns with the -O
flag or you can customize the fields to display by specifying the column names with the -o
flag. The -h
flag can be used to list the available columns:
- lsblk -h
Output. . .
Available columns (for --output):
NAME device name
KNAME internal kernel device name
. . .
SUBSYSTEMS de-duplicated chain of subsystems
REV device revision
VENDOR device vendor
For more details see lsblk(8).
Before you can use a new disk, you typically have to partition it, format it with a filesystem, and then mount the drive or partitions. Partitioning and formatting are usually one-time procedures. You can find out more information on how to partition and format a drive with Linux in How To Partition and Format Storage Devices in Linux.
Mounting is something you may do more frequently. Mounting the filesystem makes it available to the server at the selected mount point. A mount point is a directory under which the new filesystem can be accessed.
Two complementary commands are primarily used to manage mounting: mount
and umount
. The mount
command is used to attach a filesystem to the current file tree. In a Linux system, a single unified file hierarchy is used for the entire system, regardless of how many physical devices it is composed of. The umount
command (Note: this is umount
, not unmount
) is used to unmount a filesystem. Additionally, the findmnt
command is helpful for gathering information about the current state of mounted filesystems.
The most straightforward way to use mount
is to pass in a formatted device or partition and the mount point where it is to be attached:
- sudo mount /dev/sda1 /mnt
The mount point, the final parameter which specifies where in the file hierarchy the new filesystem should be attached, should almost always be an empty directory.
Usually, you will want to select more specific options when mounting. Although mount
can attempt to guess the filesystem type, it’s almost always a better idea to pass in the filesystem type with the -t
option. For an Ext4 filesystem, this would be:
- sudo mount -t ext4 /dev/sda1 /mnt
There are many other options that will impact the way that the filesystem is mounted. There are generic mount options, which can be found in the FILESYSTEM INDEPENDENT MOUNT OPTIONS section of the mount manual.
Pass in other options with the -o
flag. For instance, to mount a partition with the default options (which stand for rw,suid,dev,exec,auto,nouser,async
), you can pass in -o defaults
. If you need to override the read-write permissions and mount as read-only, you can add ro
as a later option, which will override the rw
from the defaults
option:
- sudo mount -t ext4 -o defaults,ro /dev/sda1 /mnt
To mount all of the filesystems outlined in the /etc/fstab
file, you can pass the -a
option:
- sudo mount -a
To display the mount options used for a specific mount, use the findmnt
command. For instance, if you viewed the read-only mount from the example above with findmnt
, it would look something like this:
- findmnt /mnt
OutputTARGET SOURCE FSTYPE OPTIONS
/mnt /dev/sda1 ext4 ro,relatime,data=ordered
This can be useful if you have been experimenting with multiple options and have finally discovered a set that you like. You can find the options it is using with findmnt
so that you know what is appropriate to add to the /etc/fstab
file for future mounting.
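For example, based on the findmnt output above, an /etc/fstab entry for this read-only mount might look like the following (the last two fields control dump backups and boot-time filesystem checks):
/dev/sda1 /mnt ext4 ro,relatime,data=ordered 0 0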
The umount
command is used to unmount a given filesystem. Again, this is umount
not unmount
.
The general form of the command is to name the mount point or device of a currently mounted filesystem. Make sure that you are not using any files on the mount point and that you do not have any applications (including your current shell) operating inside of the mount point:
- cd ~
- sudo umount /mnt
There are usually no options to add to the default unmounting behavior.
While this list is in no way exhaustive, these utilities should cover most of what you need for daily system administration tasks. By learning a few tools, you can handle storage devices on your server.
Linux has robust systems and tooling to manage hardware devices, including storage drives. In this article we’ll cover, at a high level, how Linux represents these devices and how raw storage is made into usable space on the server.
Block storage is another name for what the Linux kernel calls a block device. A block device is a piece of hardware that can be used to store data, like a traditional spinning hard disk drive (HDD), solid state drive (SSD), flash memory stick, and so on. It is called a block device because the kernel interfaces with the hardware by referencing fixed-size blocks, or chunks of space.
In other words, block storage is what you think of as regular disk storage on a computer. Once it is set up, it acts as an extension of the current filesystem tree, and you should be able to write to or read information from each drive interchangeably.
Disk partitions are a way of breaking up a storage drive into smaller usable units. A partition is a section of a storage drive that can be treated in much the same way as a drive itself.
Partitioning allows you to segment the available space and use each partition for a different purpose. This gives a user more flexibility, allowing them to potentially segment a single disk for multiple operating systems, swap space, or specialized filesystems.
While disks can be formatted and used without partitioning, operating systems usually expect to find a partition table, even if there is only a single partition written to the disk. It is generally recommended to partition new drives for greater flexibility.
When partitioning a disk, it is important to know what partitioning format will be used. This generally comes down to a choice between MBR (Master Boot Record) and GPT (GUID Partition Table).
MBR is over 30 years old. Because of its age, it has some serious limitations. For instance, it cannot be used for disks over 2TB in size, and can only have a maximum of four primary partitions.
GPT is a more modern partitioning scheme that resolves some of the issues inherent with MBR. Systems running GPT can have many more partitions per disk. This is usually only limited by the restrictions imposed by the operating system itself. Additionally, the disk size limitation does not exist with GPT and the partition table information is available in multiple locations to guard against corruption. GPT can also write a “protective MBR” for compatibility with MBR-only tools.
In most cases, GPT is the better choice unless your operating system prevents you from using it.
While the Linux kernel can recognize a raw disk, it must be formatted to be used. Formatting is the process of writing a filesystem to the disk and preparing it for file operations. A filesystem is the system that structures data and controls how information is written to and retrieved from the underlying disk. Without a filesystem, you could not use the storage device for any standard filesystem operations.
There are many different filesystem formats, each with trade-offs, including operating system support. They all present the user with a similar representation of the disk, but the features and the platforms that they support can be very different.
Some of the more popular filesystems for Linux are:
- Ext4: the most common default filesystem on Linux distributions; it is journaled and backwards-compatible with earlier ext versions.
- XFS: a journaling filesystem known for good performance with large files and large filesystems.
- Btrfs: a copy-on-write filesystem with built-in snapshot and multi-device features.
- ZFS: a copy-on-write filesystem and volume manager with strong data integrity guarantees.
Additionally, Windows primarily uses NTFS and ExFAT, and macOS primarily uses HFS+ and APFS. It is usually possible to read and sometimes write these filesystem formats on different platforms, but doing so may require additional compatibility tools.
In Linux, almost everything is represented by a file somewhere in the filesystem hierarchy. This includes hardware like storage drives, which are represented on the system as files in the /dev
directory. Typically, files representing storage devices start with sd
or hd
followed by a letter. For instance, the first drive on a server is usually something like /dev/sda
.
Partitions on these drives also have files within /dev
, represented by appending the partition number to the end of the drive name. For example, the first partition on the drive from the previous example would be /dev/sda1
.
While the /dev/sd*
and /dev/hd*
device files represent the traditional way to refer to drives and partitions, there is a significant disadvantage to using these values alone. The Linux kernel decides which device gets which name on each boot, so this can lead to confusing scenarios where your devices change device nodes.
To work around this issue, the /dev/disk
directory contains subdirectories corresponding with different, more persistent ways to identify disks and partitions on the system. These contain symbolic links that are created at boot back to the correct /dev/[sh]da*
files. The links are named according to the directory’s identifying trait (for example, by partition label for the /dev/disk/by-partlabel
directory). These links will always point to the correct devices, so they can be used as static identifiers for storage spaces.
Some or all of the following subdirectories may exist under /dev/disk
:
- by-label: Most filesystems have a labeling mechanism that allows the assignment of arbitrary user-specified names for a disk or partition. This directory consists of links named after these user-supplied labels.
- by-uuid: UUIDs, or universally unique identifiers, are long, unique strings of letters and numbers that can be used as an ID for a storage resource. These are generally not very human-readable, but are almost always unique, even across systems. As such, it might be a good idea to use UUIDs to reference storage that may migrate between systems, since naming collisions are less likely.
- by-partlabel and by-partuuid: GPT tables offer their own set of labels and UUIDs, which can also be used for identification. This functions in much the same way as the previous two directories, but uses GPT-specific identifiers.
- by-id: This directory contains links generated by the hardware’s own serial numbers and the hardware they are attached to. This is not entirely persistent, because the way that the device is connected to the system may change its by-id name.
- by-path: Like by-id, this directory relies on a storage device’s connection to the system itself. The links here are constructed using the system’s interpretation of the hardware used to access the device. This has the same drawbacks as by-id, as connecting a device to a different port can alter this value.
Usually, by-label or by-uuid are the best options for persistent identification of specific devices.
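You can inspect these links on your own system by listing one of the subdirectories:
- ls -l /dev/disk/by-uuid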
Note: DigitalOcean block storage volumes control the device serial numbers reported to the operating system. This allows for the by-id
categorization to be reliably persistent on this platform. This is the preferred method of referring to DigitalOcean volumes as it is both persistent and predictable on first boot.
In Linux and other Unix-like operating systems, the entire system, regardless of how many physical devices are involved, is represented by a single unified file tree. When a filesystem on a drive or partition is to be used, it must be hooked into the existing tree. Mounting is the process of attaching a formatted partition or drive to a directory within the Linux filesystem. The drive’s contents can then be accessed from that directory.
Drives are almost always mounted on dedicated empty directories (mounting on a non-empty directory means that the directory’s usual contents will be inaccessible until the drive is unmounted). There are many different mounting options that can be set to alter the behavior of a mounted device. For example, the drive can be mounted in read-only mode to ensure that its contents won’t be altered.
The Filesystem Hierarchy Standard recommends using /mnt
or a subdirectory under it for temporarily mounted filesystems. It makes no recommendations on where to mount more permanent storage, so you can choose whichever scheme you’d like. In many cases, /mnt or its subdirectories are used for more permanent storage as well.
Linux systems use a file called /etc/fstab
(filesystem table) to determine which filesystems to mount during the boot process. Filesystems that do not have an entry in this file will not be automatically mounted unless scripted by some other software.
Each line of the /etc/fstab
file represents a different filesystem that should be mounted. This line specifies the block device, the mount point to attach it to, the format of the drive, and the mount options, as well as a few other pieces of information.
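For example, a hypothetical /etc/fstab entry that mounts a partition by UUID at /mnt/data might look like this:
UUID=c154916c-06ea-4268-819d-c0e36750c1cd /mnt/data ext4 defaults 0 2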
While many use cases will be accommodated by these core features, there are more complex management paradigms available for joining together multiple disks, notably RAID.
RAID stands for redundant array of independent disks. RAID is a storage management and virtualization technology that allows you to group drives together and manage them as a single unit with additional capabilities.
The characteristics of a RAID array depend on its RAID level, which defines how the disks in the array relate to each other. Some of the more common levels are:
- RAID 0: striping, which splits data across all of the drives in the array for increased performance, but provides no redundancy.
- RAID 1: mirroring, which writes the same data to two or more drives so that the array survives the loss of a disk.
- RAID 5: striping with distributed parity, which can survive the loss of any one drive in the array.
- RAID 6: striping with double distributed parity, which can survive the loss of any two drives.
- RAID 10: a combination of mirroring and striping for both redundancy and performance.
If you have a new storage device that you wish to use in your Linux system, this article will guide you through the process of partitioning, formatting, and mounting your new filesystem. This should be sufficient for most use cases where you are mainly concerned with adding additional capacity. To learn how to perform storage administration tasks, check out How To Perform Basic Administration Tasks for Storage Devices in Linux.
SSH, or Secure Shell, is the most common way of connecting to Linux servers for remote administration. Although connecting to a single server via the command line is relatively straightforward, there are many workflow optimizations for connecting to multiple remote systems.
OpenSSH, the most commonly used command-line SSH client on most systems, allows you to provide customized connection options. These can be saved to a configuration file that contains different options per server. This can help keep the different connection options you use for each host separated and organized, and avoids having to provide extensive options on the command line whenever you need to connect.
In this guide, we’ll cover the structure of the SSH client configuration file, and go over some common options.
To complete this guide, you will need a working knowledge of SSH and some of the options that you can provide when connecting. You should also configure SSH key-based authentication for some of your users or servers, at least for testing purposes.
Each user on your system can maintain their own SSH configuration file within their home directory. These can contain any options that you would use on the command line to specify connection parameters. It is always possible to override the values defined in the configuration file at the time of the connection by adding additional flags to the ssh
command.
The client-side configuration file is located at ~/.ssh/config
– the ~
is a universal shortcut to your home directory. Often, this file is not created by default, so you may need to create it yourself. The touch
command will create it if it does not exist (and update the last modified timestamp if it does).
- touch ~/.ssh/config
The config
file is organized by hosts, i.e., by remote servers. Each host definition can define connection options for the specific matching host. Wildcards are also supported for options that should have a broader scope.
Each of the sections starts with a header defining the hosts that should match the configuration options that follow. The specific configuration items for that matching host are then defined below. Only items that differ from the default values need to be specified, as each entry will inherit the defaults for any undefined items. Each section spans from one Host
header to the following Host
header.
Typically, for organizational purposes and readability, the options being set for each host are indented. This is not a hard requirement, but a useful convention that allows for interpretation at a glance.
The general format will look something like this:
Host firsthost
Hostname your-server.com
User username-to-connect-as
IdentityFile /path/to/non/default/keys.pem
Host secondhost
ANOTHER_OPTION custom_value
Host *host
ANOTHER_OPTION custom_value
Host *
CHANGE_DEFAULT custom_value
Here, we have four sections that will be applied on each connection attempt depending on whether the host in question matches.
It is important to understand the way that SSH will interpret the file to apply the configuration values. This has implications when using wildcards and the Host *
generic host definition.
SSH will match the host name provided on the command line with each of the Host
headers that define configuration sections.
For example, consider this definition:
Host devel
HostName devel.example.com
User tom
This host allows us to connect as tom@devel.example.com
by typing this on the command line:
- ssh devel
SSH starts at the top of the config file and checks each Host
definition to see if it matches the value given on the command line. When the first matching Host
definition is found, each of the associated SSH options are applied to the upcoming connection.
SSH then moves down the file, checking to see if other Host
definitions also match. If another definition is found that matches the current hostname given on the command line, it will consider the SSH options associated with the new section. It will then apply any SSH options defined for the new section that have not already been defined by previous sections.
This is an important point. SSH will interpret each of the Host
sections that match the hostname given on the command line, in order. During this process, it will always use the first value given for each option. There is no way to override a value that has already been given by a previously matched section.
This means that your config
file should follow the rule of having the most specific configurations at the top. More general definitions should come later on in order to apply options that were not defined by the previous matching sections.
Let’s look again at the example from the previous section:
Host firsthost
Hostname your-server.com
User username-to-connect-as
IdentityFile /path/to/non/default/keys.pem
Host secondhost
ANOTHER_OPTION custom_value
Host *host
ANOTHER_OPTION custom_value
Host *
CHANGE_DEFAULT custom_value
Here, we can see that the first two sections are defined by literal hostnames (or aliases), meaning that they do not use any wildcards. If we connect using ssh firsthost
, the very first section will be the first to be applied. This will set Hostname
, User
, and IdentityFile
for this connection.
It will check the second section and find that it does not match and move on. It will then find the third section and find that it matches. It will check ANOTHER_OPTION
to see if it already has a value for that from previous sections. Finding that it doesn’t, it will apply the value from this section. It will then match the last section since the Host *
definition matches every connection. Since it doesn’t have a value for the mock CHANGE_DEFAULT
option from other sections, it will take the value from this section. The connection is then made with the options collected from this process.
Let’s try this again, pretending to call ssh secondhost
from the command line.
Again, it will start at the first section and check whether it matches. Since this matches only a connection to firsthost
, it will skip this section. It will move on to the second section. Upon finding that this section matches the request, it will collect the value of ANOTHER_OPTION
for this connection.
SSH then looks at the third definition and finds that the wildcard matches the current connection. It will then check whether it already has a value for ANOTHER_OPTION
. Since this option was defined in the second section, which was already matched, the value from the third section is dropped and has no effect.
SSH then checks the fourth section and applies the options within that have not been defined by previously matched sections. It then attempts the connection using the values it has gathered.
Now that you have an idea about how to write your configuration file, let’s discuss some common options and the format to use to specify them on the command line.
The first ones we will cover are the minimum settings necessary to connect to a remote host. Namely, the hostname, username, and port that the SSH server is running on.
To connect as a user named apollo
to a host called example.com
that runs its SSH daemon on port 4567
from the command line, you could run ssh
like this:
- ssh -p 4567 apollo@example.com
However, you could also use the full option names with the -o
flag, like this:
ssh -o "User=apollo" -o "Port=4567" -o "HostName=example.com"
You can find a full list of available options in the SSH manual page.
To set these in your config
file, you have to choose a Host
header name, like home
:
Host home
HostName example.com
User apollo
Port 4567
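With this section in place, you can establish the same connection by typing only the alias:
- ssh home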
So far, we have discussed some of the options necessary to establish a connection. We have covered these options:
- HostName: The actual address to connect to. This replaces any alias defined in the Host header. This option is not necessary if the Host definition specifies the actual valid address to connect to.
- User: The username to use for the connection.
- Port: The port that the remote SSH daemon is running on. This option is only necessary if the remote SSH instance is not running on the default port 22.
There are many other useful options worth exploring. We will discuss some of the more common options, separated according to function.
ServerAliveInterval
: This option can be configured to let SSH know when to send a packet to test for a response from the server. This can be useful if your connection is unreliable and you want to know if it is still available.
LogLevel
: This configures the level of detail in which SSH will log on the client-side. This can be used for turning off logging in certain situations or increasing the verbosity when trying to debug. From least to most verbose, the levels are QUIET, FATAL, ERROR, INFO, VERBOSE, DEBUG1, DEBUG2, and DEBUG3.
StrictHostKeyChecking
: This option configures whether SSH will ever automatically add hosts to the ~/.ssh/known_hosts
file. By default, this will be set to “ask” meaning that it will warn you if the Host Key received from the remote server does not match the one found in the known_hosts
file. If you are constantly connecting to a large number of ephemeral hosts (such as testing servers), you may want to turn this to “no”. SSH will then automatically add any hosts to the file. This can have security implications if your known hosts ever do change addresses when they shouldn’t, so think carefully before enabling it.
UserKnownHostsFile
: This option specifies the location where SSH will store the information about hosts it has connected to. Usually you do not have to worry about this setting, but you may wish to set this to /dev/null
so that these host records are discarded if you have turned off strict host key checking above.
VisualHostKey
: This option can tell SSH to display an ASCII representation of the remote host’s key upon connection. Turning this on can be a useful way to get familiar with your host’s key, allowing you to recognize it if you have to connect from a different computer sometime in the future.
Compression
: Turning compression on can be helpful for very slow connections. Most users will not need this.
With the above configuration items in mind, we could make a number of useful configuration tweaks.
For instance, if we are creating and destroying hosts very quickly at a cloud provider, something like this may be useful:
Host home
VisualHostKey yes
Host cloud*
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
LogLevel QUIET
Host *
StrictHostKeyChecking ask
UserKnownHostsFile ~/.ssh/known_hosts
LogLevel INFO
ServerAliveInterval 120
This will turn on your visual host key for your home connection, allowing you to become familiar with it so you can recognize if it changes or when connecting from a different machine. We have also set up any host whose name matches cloud* to skip strict host key checking and suppress logging. For other hosts, we have sane fallback values.
One common use of SSH is forwarding connections, either allowing a local connection to tunnel through the remote host, or allowing the remote machine access to tunnel through the local machine. This is sometimes necessary when you need to connect to a remote machine behind a firewall through a separate, designated “gateway” server. SSH can also do dynamic forwarding using protocols like SOCKS5 which include the forwarding information for the remote host.
The options that control this behavior are:
LocalForward
: This option is used to specify a connection that will forward a local port’s traffic to the remote machine, tunneling it out into the remote network. The first argument should be the local port you wish to direct traffic to and the second argument should be the address and port that you wish to direct that traffic to on the remote end.
RemoteForward
: This option is used to define a remote port where traffic can be directed to in order to tunnel out of the local machine. The first argument should be the remote port where traffic will be directed on the remote system. The second argument should be the address and port to point the traffic to when it arrives on the local system.
DynamicForward
: This is used to configure a local port that can be used with a dynamic forwarding protocol like SOCKS5. Traffic using the dynamic forwarding protocol can then be directed at this port on the local machine and on the remote end, it will be routed according to the included values.
These options can be used to forward ports in both directions, as you can see here:
# This will allow us to use port 8080 on the local machine
# in order to access example.com at port 80 from the remote machine
Host local_to_remote
LocalForward 8080 example.com:80
# This will allow us to offer access to internal.com at port 443
# to the remote machine through port 7777 on the other side
Host remote_to_local
RemoteForward 7777 internal.com:443
This is especially useful when you need to open a browser window to a private dashboard or another web application running on a server that is not directly accessible other than over SSH.
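DynamicForward, described above but not shown in the example, follows a similar pattern. A minimal sketch, using a hypothetical host alias and the conventional SOCKS port, might look like this:
Host socks_proxy
    DynamicForward 1080
While connected with ssh socks_proxy, applications configured to use the local SOCKS proxy on port 1080 will have their traffic routed out through the remote end.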
Along with connection forwarding, SSH allows other types of forwarding as well.
You can forward any SSH keys stored in an agent on your local machine, allowing you to connect from the remote system using credentials stored on your local system. You can also start applications on a remote system and forward the graphical display to your local system using X11 forwarding. X11 is a Linux display server and is not very intuitive to use without a Linux desktop system, but it can be very useful if you are using both a remote and a local Linux environment.
These are the directives that are associated with these capabilities:
ForwardAgent
: This option allows authentication keys stored on your local machine to be forwarded onto the system you are connecting to. This can allow you to hop from host to host using your home keys.
ForwardX11
: If you want to be able to forward a graphical screen of an application running on the remote system, you can turn this option on.
If you have SSH keys configured for your hosts, these options can help you manage which keys to use for each host.
IdentityFile
: This option can be used to specify the location of the key to use for each host. SSH will use keys located in ~/.ssh
by default, but if you have keys assigned per-server, this can be used to specify the exact path where they can be found.
IdentitiesOnly
: This option can be used to force SSH to only rely on the identities provided in the config
file. This may be necessary if an SSH agent has alternative keys in memory that are not valid for the host in question.
These options are especially useful if you have to keep track of a large number of keys for different hosts and use one or more SSH agents to assist.
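For instance, a hypothetical entry that pins a specific key for one server might look like this:
Host workserver
    HostName work.example.com
    IdentityFile ~/.ssh/work_key
    IdentitiesOnly yes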
As long as you keep in mind the way that SSH will interpret the values, you can establish rich sets of specific values with reasonable fallbacks.
If you ever have to use SSH over a very poor or intermittent connection such as airplane Wi-Fi, you can also try using mosh, which is designed to make SSH work under adverse circumstances.
An understanding of networking is important for anyone managing a server. Not only is it essential for getting your services online and running smoothly, it also gives you the insight to diagnose problems.
This article will provide an overview of some common networking concepts. We will discuss terminology, common protocols, and the responsibilities and characteristics of the different layers of networking.
This guide is operating system agnostic, but should be very helpful when implementing features and services that utilize networking on your server.
First, we will define some common terms that you will see throughout this guide, and in other guides and documentation regarding networking.
These terms will be expanded upon in the appropriate sections that follow:
Connection: In networking, a connection refers to pieces of related information that are transferred through a network. Generally speaking, a connection is established before data transfer (by following the procedures laid out in a protocol) and may be deconstructed at the end of the data transfer.
Packet: A packet is the smallest unit that is intentionally transferred over a network. When communicating over a network, packets are the envelopes that carry your data (in pieces) from one end point to the other.
Packets have a header portion that contains information about the packet including the source and destination, timestamps, network hops, etc. The main portion of a packet contains the actual data being transferred. It is sometimes called the body or the payload.
Network Interface: A network interface can refer to any kind of software interface to networking hardware. A network interface may be associated with a physical device, or it may be a representation of a virtual interface. The “loopback” device, which is a virtual interface available in most Linux environments to connect back to the same machine, is an example of this.
LAN: LAN stands for “local area network”. It refers to a network or a portion of a network that is not publicly accessible to the greater internet. A home or office network is an example of a LAN.
WAN: WAN stands for “wide area network”. It means a network that is much more extensive than a LAN. While WAN is the relevant term to use to describe large, dispersed networks in general, it is usually meant to mean the internet, as a whole.
If an interface is said to be connected to the WAN, it is generally assumed that it is reachable through the internet.
Protocol: A protocol is a set of rules and standards that define a language that devices can use to communicate. Some low-level protocols are TCP, UDP, IP, and ICMP. Some familiar examples of application layer protocols, built on these lower protocols, are HTTP (for accessing web content), SSH, and TLS/SSL.
Port: A port is an address on a single machine that can be tied to a specific piece of software. It is not a physical interface or location, but it allows your server to be able to communicate using more than one application.
Firewall: A firewall is a program that decides whether traffic coming or going from a server should be allowed. A firewall usually works by creating rules for which type of traffic is acceptable on which ports. Generally, firewalls block ports that are not used by a specific application on a server.
NAT: NAT stands for network address translation. It is a way to repackage incoming requests that arrive at a routing server and forward them to the relevant devices or servers on a LAN. This is usually implemented in physical LANs as a way to route requests through one IP address to the necessary backend servers.
VPN: VPN stands for virtual private network. It is a means of connecting separate LANs through the internet, while maintaining privacy. This is used to connect remote systems as if they were on a local network, often for security reasons.
There are many other terms that you will come across, and this list is not exhaustive. We will explain other terms as we need them. At this point, you should understand some high-level concepts that will enable us to better discuss the topics to come.
While networking is often discussed in terms of topology in a horizontal way, between hosts, its implementation is layered in a vertical fashion within any given computer or network.
What this means is that there are multiple technologies and protocols that are built on top of each other in order for communication to function. Each successive, higher layer abstracts the raw data a little bit more.
It also allows you to leverage lower layers in new ways without having to invest the time and energy to develop the protocols and applications that handle those types of traffic.
The language that we use to talk about each of the layering schemes varies significantly depending on which model you use. Regardless of the model used to discuss the layers, the path of data is the same.
As data is sent out of one machine, it begins at the top of the stack and filters downwards. At the lowest level, actual transmission to another machine takes place. At this point, the data travels back up through the layers of the other computer.
Each layer has the ability to add its own “wrapper” around the data that it receives from the adjacent layer, which will help the layers that come after decide what to do with the data when it is handed off.
The TCP/IP model, more commonly known as the Internet protocol suite, is a widely adopted layering model. It defines four separate layers:
Application: In this model, the application layer is responsible for creating and transmitting user data between applications. The applications can be on remote systems, but to the end user they should appear to operate as if they were local. This communication is said to take place between peers.
Transport: The transport layer is responsible for communication between processes. This level of networking utilizes ports to address different services.
Internet: The internet layer is used to transport data from node to node in a network. This layer is aware of the endpoints of the connections, but is not concerned with the actual connection needed to get from one place to another. IP addresses are defined in this layer as a way of reaching remote systems in an addressable manner.
Link: The link layer implements the actual topology of the local network that allows the internet layer to present an addressable interface. It establishes connections between neighboring nodes to send data.
As you can see, the TCP/IP model is abstract and fluid. This made it popular to implement and allowed it to become the dominant way that networking layers are categorized.
Interfaces are networking communication points for your computer. Each interface is associated with a physical or virtual networking device.
Typically, your server will have one configurable network interface for each Ethernet or wireless internet card you have.
In addition, it will define a virtual network interface called the “loopback” or localhost interface. This is used as an interface to connect applications and processes on a single computer to other applications and processes. You can see this referenced as the “lo” interface in many tools.
Many times, administrators configure one interface to service traffic to the internet and another interface for a LAN or private network.
In datacenters with private networking enabled (including DigitalOcean Droplets), your VPS will have two networking interfaces. The “eth0” interface will be configured to handle traffic from the internet, while the “eth1” interface will operate to communicate with a private network.
Networking works by piggybacking a number of different protocols on top of each other. In this way, one piece of data can be transmitted using multiple protocols encapsulated within one another.
We will start with protocols implemented on the lower networking layers and work our way up to protocols with higher abstraction.
Media access control is a communications protocol that is used to distinguish specific devices. Each device is supposed to get a unique, hardcoded media access control address (MAC address) when it is manufactured that differentiates it from every other device on the internet.
Addressing hardware by the MAC address allows you to reference a device by a unique value even when the software on top may change the name for that specific device during operation.
MAC addressing is one of the only protocols from the low-level link layer that you are likely to interact with on a regular basis.
The IP protocol is one of the fundamental protocols that allow the internet to work. IP addresses are unique on each network and they allow machines to address each other across a network. It is implemented on the internet layer in the TCP/IP model.
Networks can be linked together, but traffic must be routed when crossing network boundaries. This protocol assumes an unreliable network and multiple paths to the same destination that it can dynamically change between.
There are a number of different implementations of the protocol. The most common implementation today is IPv4 addresses, which follow the pattern 123.123.123.123
, although IPv6 addresses, which follow the pattern 2001:0db8:0000:0000:0000:ff00:0042:8329
, are growing in popularity due to the limited number of available IPv4 addresses.
ICMP stands for internet control message protocol. It is used to send messages between devices to indicate their availability or error conditions. These packets are used in a variety of network diagnostic tools, such as ping
and traceroute
.
Usually ICMP packets are transmitted when a different kind of packet encounters a problem. They are used as a feedback mechanism for network communications.
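For example, on a Linux system you can send a few ICMP echo requests to check whether a host is reachable (substitute the address you want to test):
- ping -c 3 your_server_ip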
TCP stands for transmission control protocol. It is implemented in the transport layer of the TCP/IP model and is used to establish reliable connections.
TCP is one of the protocols that encapsulates data into packets. It then transfers these to the remote end of the connection using the methods available on the lower layers. On the other end, it can check for errors, request certain pieces to be resent, and reassemble the information into one logical piece to send to the application layer.
The protocol builds up a connection prior to data transfer using a system called a three-way handshake. This is a way for the two ends of the communication to acknowledge the request and agree upon a method of ensuring data reliability.
After the data has been sent, the connection is torn down using a similar four-way handshake.
TCP is the protocol of choice for many of the most popular uses for the internet, including WWW, SSH, and email.
UDP stands for user datagram protocol. It is a popular companion protocol to TCP and is also implemented in the transport layer.
The fundamental difference between UDP and TCP is that UDP offers unreliable data transfer. It does not verify that data has been received on the other end of the connection. This might sound like a bad thing, and for many purposes, it is. However, it is also extremely important for some functions.
Because it does not have to wait for confirmation that the data was received or resend lost data, UDP is much faster than TCP. It does not establish a connection with the remote host; it just sends data without confirmation.
Because it is a straightforward transaction, it is useful for communications like querying for network resources. It also doesn’t maintain a state, which makes it great for transmitting data from one machine to many real-time clients. This makes it ideal for VOIP, games, and other applications that cannot afford delays.
HTTP stands for hypertext transfer protocol. It is a protocol defined in the application layer that forms the basis for communication on the web.
HTTP defines a number of verbs that tell the remote system what you are requesting. For instance, GET, POST, and DELETE all interact with the requested data in a different way. To see an example of the different HTTP requests in action, refer to How To Define Routes and HTTP Request Methods in Express.
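As a quick command-line illustration, assuming the curl utility is available, you can issue requests with different verbs against a server you control (the URL and resource path here are placeholders):
- curl -X GET http://your_server_ip
- curl -X DELETE http://your_server_ip/some_resource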
DNS stands for domain name system. It is an application layer protocol used to provide a human-friendly naming mechanism for internet resources. It is what ties a domain name to an IP address and allows you to access sites by name in your browser.
SSH stands for secure shell. It is an encrypted protocol implemented in the application layer that can be used to communicate with a remote server in a secure way. Many additional technologies are built around this protocol because of its end-to-end encryption and ubiquity.
There are many other protocols that we haven’t covered that are equally important. However, this should give you a good overview of some of the fundamental technologies that make the internet and networking possible.
At this point, you should be familiar with some networking terminology and be able to understand how different components are able to communicate with each other. This should assist you in understanding other articles and the documentation of your system.
Next, for a high-level, real-world example, you may want to read How To Make HTTP Requests in Go.
One way to guard against out-of-memory errors in applications is to add some swap space to your server. In this guide, we will cover how to add a swap file to a Rocky Linux 9 server.
Swap is a portion of hard drive storage that has been set aside for the operating system to temporarily store data that it can no longer hold in RAM. This lets you increase the amount of information that your server can keep in its working memory, with some caveats. The swap space on the hard drive will be used mainly when there is no longer sufficient space in RAM to hold in-use application data.
The information written to disk will be significantly slower than information kept in RAM, but the operating system will prefer to keep running application data in memory and use swap for the older data. Overall, having swap space as a fallback for when your system’s RAM is depleted can be a good safety net against out-of-memory exceptions on systems with non-SSD storage available.
Before we begin, we can check if the system already has some swap space available. It is possible to have multiple swap files or swap partitions, but generally one should be enough.
We can see if the system has any configured swap by typing:
- sudo swapon --show
If you don’t get back any output, this means your system does not have swap space available currently.
You can verify that there is no active swap using the free
utility:
- free -h
Output total used free shared buff/cache available
Mem: 1.7Gi 173Mi 1.2Gi 9.0Mi 336Mi 1.4Gi
Swap: 0B 0B 0B
As you can see in the Swap row of the output, no swap is active on the system.
Before we create our swap file, we’ll check our current disk usage to make sure we have enough space. Do this by entering:
- df -h
OutputFilesystem Size Used Avail Use% Mounted on
devtmpfs 855M 0 855M 0% /dev
tmpfs 888M 0 888M 0% /dev/shm
tmpfs 355M 9.4M 346M 3% /run
/dev/vda1 59G 1.4G 58G 3% /
/dev/vda2 994M 155M 840M 16% /boot
/dev/vda15 100M 7.0M 93M 7% /boot/efi
tmpfs 178M 0 178M 0% /run/user/0
The device with /
in the Mounted on
column is our disk in this case. We have plenty of space available in this example (only 1.4G used). Your usage will probably be different.
Although there are many opinions about the appropriate size of a swap space, it really depends on your personal preferences and your application requirements. Generally, an amount equal to or double the amount of RAM on your system is a good starting point. Another good rule of thumb is that anything over 4G of swap is probably unnecessary if you are just using it as a RAM fallback.
Now that we know our available hard drive space, we can create a swap file on our filesystem. We will allocate a file of the size that we want called swapfile
in our root (/
) directory.
The best way of creating a swap file is with the fallocate
program. This command instantly creates a file of the specified size.
Since the server in our example has 2G of RAM, we will create a 2G file in this guide. Adjust this to meet the needs of your own server:
- sudo fallocate -l 2G /swapfile
We can verify that the correct amount of space was reserved by typing:
- ls -lh /swapfile
Output-rw-r--r--. 1 root root 2.0G Sep 13 17:52 /swapfile
Our file has been created with the correct amount of space set aside.
Now that we have a file of the correct size available, we need to actually turn this into swap space.
First, we need to lock down the permissions of the file so that only users with root privileges can read the contents. This prevents normal users from being able to access the file, which would have significant security implications.
Make the file only accessible to root by typing:
- sudo chmod 600 /swapfile
Verify the permissions change by typing:
- ls -lh /swapfile
Output-rw------- 1 root root 2.0G Sep 13 17:52 /swapfile
As you can see, only the root user has the read and write flags enabled.
We can now mark the file as swap space by typing:
- sudo mkswap /swapfile
OutputSetting up swapspace version 1, size = 2 GiB (2147479552 bytes)
no label, UUID=585e8b33-30fa-481f-af61-37b13326545b
After marking the file, we can enable the swap file, allowing our system to start using it:
- sudo swapon /swapfile
Verify that the swap is available by typing:
- sudo swapon --show
OutputNAME TYPE SIZE USED PRIO
/swapfile file 2G 0B -2
We can check the output of the free
utility again to corroborate our findings:
- free -h
Output total used free shared buff/cache available
Mem: 1.7Gi 172Mi 1.2Gi 9.0Mi 338Mi 1.4Gi
Swap: 2.0Gi 0B 2.0Gi
Our swap has been set up successfully and our operating system will begin to use it as necessary.
Our recent changes have enabled the swap file for the current session. However, if we reboot, the server will not retain the swap settings automatically. We can change this by adding the swap file to our /etc/fstab
file.
Back up the /etc/fstab
file in case anything goes wrong:
- sudo cp /etc/fstab /etc/fstab.bak
Add the swap file information to the end of your /etc/fstab
file by typing:
- echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
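If you’d like to double-check that the entry was written correctly, you can print the last line of the file:
- tail -n 1 /etc/fstab
Output/swapfile none swap sw 0 0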
Next we’ll review some settings we can update to tune our swap space.
There are a few options that you can configure that will have an impact on your system’s performance when dealing with swap.
The swappiness
parameter configures how often your system swaps data out of RAM to the swap space. This is a value between 0 and 100 that represents a percentage.
With values close to zero, the kernel will not swap data to the disk unless absolutely necessary. Remember, interactions with the swap file are “expensive” in that they take a lot longer than interactions with RAM and they can cause a significant reduction in performance. Telling the system not to rely on the swap much will generally make your system faster.
Values that are closer to 100 will try to put more data into swap in an effort to keep more RAM space free. Depending on your applications’ memory profile or what you are using your server for, this might be better in some cases.
We can see the current swappiness value by typing:
- cat /proc/sys/vm/swappiness
Output60
For a desktop, a swappiness setting of 60 is not a bad value. For a server, you might want to move it closer to 0.
We can set the swappiness to a different value by using the sysctl
command.
For instance, to set the swappiness to 10, we could type:
- sudo sysctl vm.swappiness=10
Outputvm.swappiness = 10
This setting will persist until the next reboot. We can set this value automatically at restart by adding the line to our /etc/sysctl.conf
file.
The default text editor that comes with Rocky Linux 9 is vi
. vi
is an extremely powerful text editor, but it can be somewhat obtuse for users who lack experience with it. You might want to install a more user-friendly editor such as nano
to facilitate editing configuration files on your Rocky Linux 9 server:
- sudo dnf install nano
Now you can use nano
to edit the sysctl.conf
file:
- sudo nano /etc/sysctl.conf
At the bottom, you can add:
vm.swappiness=10
Save and close the file when you are finished. If you are using nano
, you can save and quit by pressing CTRL + X
, then when prompted, Y
and then Enter.
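You do not have to wait for a reboot to load the file. You can apply everything in /etc/sysctl.conf immediately with the -p flag; sysctl will print each value it sets (if your file contains other settings, those will be listed as well):
- sudo sysctl -p
Outputvm.swappiness = 10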
Another related value that you might want to modify is the vfs_cache_pressure
. This setting configures how much the system will choose to cache inode and dentry information over other data.
This is access data about the filesystem, which is generally very costly to look up and very frequently requested, so it’s an excellent thing for your system to cache. You can see the current value by querying the proc
filesystem again:
- cat /proc/sys/vm/vfs_cache_pressure
Output100
As it is currently configured, our system removes inode information from the cache too quickly. We can set this to a more conservative setting like 50 by typing:
- sudo sysctl vm.vfs_cache_pressure=50
Outputvm.vfs_cache_pressure = 50
Again, this is only valid for our current session. We can change that by adding it to our configuration file like we did with our swappiness setting:
- sudo nano /etc/sysctl.conf
At the bottom, add the line that specifies your new value:
vm.vfs_cache_pressure=50
Save and close the file when you are finished.
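To confirm that both of the values used in this tutorial are in effect, you can pass multiple parameter names to sysctl:
- sysctl vm.swappiness vm.vfs_cache_pressure
Outputvm.swappiness = 10
vm.vfs_cache_pressure = 50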
Following the steps in this guide will give you some breathing room in cases that would otherwise lead to out-of-memory exceptions. Swap space can be incredibly useful in avoiding some of these common problems.
If you are running into out of memory errors, or if you find that your system is unable to use the applications you need, the best solution is to optimize your application configurations or upgrade your server.
SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a Rocky Linux server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
In this guide, we’ll focus on setting up SSH keys for a Rocky Linux 9 server. SSH keys provide a straightforward, secure method of logging into your server and are recommended for all users.
The first step is to create a key pair on the client machine (usually your local computer):
- ssh-keygen
By default, ssh-keygen
will create a 2048-bit RSA key pair, which is secure enough for most use cases (you may optionally pass in the -b 4096
flag to create a larger 4096-bit key).
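For example, the larger key would be generated like this (shown only as an illustration; the rest of this guide assumes the default invocation):
- ssh-keygen -b 4096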
After entering the command, you should see the following prompt:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):
Press ENTER
to save the key pair into the .ssh/
subdirectory in your home directory, or specify an alternate path.
If you had previously generated an SSH key pair, you may see the following prompt:
Output/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes, as this is a destructive process that cannot be reversed.
You should then see the following prompt:
OutputEnter passphrase (empty for no passphrase):
Here you may optionally enter a secure passphrase, which is highly recommended. A passphrase adds an additional layer of security to your key, to prevent unauthorized users from logging in.
You should then see the following output:
OutputYour identification has been saved in /your_home/.ssh/id_rsa.
Your public key has been saved in /your_home/.ssh/id_rsa.pub.
The key fingerprint is:
a9:49:2e:2a:5e:33:3e:a9:de:4e:77:11:58:b6:90:26 username@remote_host
The key's randomart image is:
+--[ RSA 2048]----+
| ..o |
| E o= . |
| o. o |
| .. |
| ..S |
| o o. |
| =o.+. |
|. =++.. |
|o=++. |
+-----------------+
You now have a public and private key that you can use to authenticate. The next step is to get the public key onto your server so that you can use SSH-key-based authentication to log in.
The quickest way to copy your public key to the Rocky Linux host is to use a utility called ssh-copy-id
. This method is highly recommended if available. If you do not have ssh-copy-id
available to you on your client machine, you may use one of the two alternate methods that follow (copying via password-based SSH, or manually copying the key).
The ssh-copy-id
tool is included by default in many operating systems, so you may have it available on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you need only specify the remote host that you would like to connect to and the user account that you have password SSH access to. This is the account to which your public SSH key will be copied:
- ssh-copy-id username@remote_host
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type yes
and press ENTER
to continue.
Next, the utility will scan your local account for the id_rsa.pub
key that we created earlier. When it finds the key, it will prompt you for the password of the remote user’s account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Type in the password (your typing will not be displayed for security purposes) and press ENTER
. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub
key into the remote account’s ~/.ssh/authorized_keys
file.
You should see the following output:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub
key has been uploaded to the remote account. You can continue on to Step 3.
If you do not have ssh-copy-id
available, but you have password-based SSH access to an account on your server, you can upload your keys using a more conventional SSH method.
We can do this by using the cat
command to read the contents of the public SSH key on our local computer and piping that through an SSH connection to the remote server.
On the other side, we can make sure that the ~/.ssh
directory exists and has the correct permissions under the account we’re using.
We can then output the content we piped over into a file called authorized_keys
within this directory. We’ll use the >>
redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying any previously added keys.
The full command looks like this:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type yes
and press ENTER
to continue.
Afterwards, you should be prompted to enter the remote user account password:
Outputusername@203.0.113.1's password:
After entering your password, the content of your id_rsa.pub
key will be copied to the end of the authorized_keys
file of the remote user’s account. Continue on to Step 3 if this was successful.
If you do not have password-based SSH access to your server available, you will have to complete the above process manually.
We will manually append the content of your id_rsa.pub
file to the ~/.ssh/authorized_keys
file on your remote machine.
To display the content of your id_rsa.pub
key, type this into your local computer:
- cat ~/.ssh/id_rsa.pub
You will see the key’s content, which should look something like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== sammy@host
Log in to your remote host using whichever method you have available.
Once you have access to your account on the remote server, you should make sure the ~/.ssh
directory exists. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys
file within this directory. You can add the contents of your id_rsa.pub
file to the end of the authorized_keys
file, creating it if necessary, using this command:
- echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute the public_key_string
with the output from the cat ~/.ssh/id_rsa.pub
command that you executed on your local system. It should start with ssh-rsa AAAA...
.
Finally, we’ll ensure that the ~/.ssh
directory and authorized_keys
file have the appropriate permissions set:
- chmod -R go= ~/.ssh
This recursively removes all “group” and “other” permissions for the ~/.ssh/
directory.
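You can verify the result with ls; after this change, the directory should typically show drwx------ and the key file -rw-------, with no access for group or other:
- ls -ld ~/.ssh
- ls -l ~/.ssh/authorized_keys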
If you’re using the root
account to set up keys for a user account, it’s also important that the ~/.ssh
directory belongs to the user and not to root
:
- chown -R sammy:sammy ~/.ssh
In this tutorial, our user is named sammy, but you should substitute the appropriate username into the above command.
We can now attempt key-based authentication with our Rocky Linux server.
If you have successfully completed one of the procedures above, you should now be able to log into the remote host without the remote account’s password.
The initial process is the same as with password-based authentication:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Type yes
and then press ENTER
to continue.
If you did not supply a passphrase when creating your key pair in step 1, you will be logged in immediately. If you supplied a passphrase you will be prompted to enter it now. After authenticating, a new shell session should open for you with the configured account on the Rocky Linux server.
If key-based authentication was successful, continue on to learn how to further secure your system by disabling your SSH server’s password-based authentication.
If you were able to log in to your account using SSH without a password, you have successfully configured SSH-key-based authentication to your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH-key-based authentication configured for the root account on this server, or preferably, that you have SSH-key-based authentication configured for a non-root account on this server with sudo
privileges. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is crucial.
Once you’ve confirmed that your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo
privileges. Then, open up the SSH daemon’s configuration file:
- sudo vi /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication
. This may be commented out with a #
hash. Press i
to put vi
into insertion mode, and then uncomment the line and set the value to no
. This will disable your ability to log in via SSH using account passwords:
...
PasswordAuthentication no
...
When you are finished making changes, press ESC
and then :wq
to write the changes to the file and quit. To actually implement these changes, we need to restart the sshd
service:
- sudo systemctl restart sshd
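If the restart fails, you can check the configuration file for syntax errors using the daemon’s built-in test mode, which prints nothing when the file is valid:
- sudo sshd -t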
As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing your current session:
- ssh username@remote_host
Once you have verified your SSH service is still working properly, you can safely close all current server sessions.
The SSH daemon on your Rocky Linux server now only responds to SSH keys. Password-based authentication has successfully been disabled.
You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.
If you’d like to learn more about working with SSH, take a look at our SSH Essentials Guide.
I then followed this guide to add a GUI - https://www.digitalocean.com/community/questions/how-to-install-graphical-interface
All the steps worked, until the step to establish a secure connection (Step 3 — Connecting to the VNC Desktop Securely).
I ran the code that was provided (changing the user parameter accordingly) -
> ssh -L 59000:localhost:5901 -C -N -l sammy your_server_ip
But I got the following error - Permission denied (publickey).
I do have SSH keys as the authentication method on root, and my non-root user can log in successfully. So I assume the key was copied successfully to that user.
I was required to set a password for the non-root user, and I am not sure if that could be the cause of the error.
How can I fix this error and establish the SSH tunnel?
When you first create a new Rocky Linux 9 server, there are a few configuration steps that you should take early on as part of the initial setup. This will increase the security and usability of your server and will give you a solid foundation to build on.
To log into your server, you will need to know your server’s public IP address. You will also need the password or, if you installed an SSH key for authentication, the private key for the root user’s account. If you have not already logged into your server, you may want to follow our documentation on how to connect to your Droplet with SSH, which covers this process in detail.
If you are not already connected to your server, log in as the root user now using the following command (substitute the highlighted portion of the command with your server’s public IP address):
- ssh root@your_server_ip
Accept the warning about host authenticity if it appears. If you are using password authentication, provide your root password to log in. If you are using an SSH key that is passphrase protected, you may be prompted to enter the passphrase the first time you use the key each session. If this is your first time logging into the server with a password, you may also be prompted to change the root password.
The root user is the administrative user in a Linux environment, and it has very broad privileges. Because of the heightened privileges of the root account, you are discouraged from using it on a regular basis. This is because part of the power inherent with the root account is the ability to make very destructive changes, even by accident.
As such, the next step is to set up an alternative user account with a reduced scope of influence for day-to-day work. This account will still be able to gain increased privileges when necessary.
Once you are logged in as root, you can create a new user account that you will use to log in from now on.
This example creates a new user called sammy, but you should replace it with any username that you prefer:
- adduser sammy
Next, set a strong password for the sammy
user:
- passwd sammy
You will be prompted to enter the password twice. After doing so, your user will be ready to use, but first you’ll give this user additional privileges to use the sudo
command. This will allow you to run commands as root when necessary.
Now, you have a new user account with regular account privileges. However, you may sometimes need to perform administrative tasks.
To avoid having to log out of your regular user and log back in as the root account, you can set up what is known as “superuser” or root privileges for your regular account. This will allow your regular user to run commands with administrative privileges by putting the word sudo
before each command.
To add these privileges to your new user, you need to add the new user to the wheel group. By default, on Rocky Linux 9, users who belong to the wheel group are allowed to use the sudo
command.
As root, run this command to add your new user to the wheel group (substitute the highlighted word with your new username):
- usermod -aG wheel sammy
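You can confirm the new membership by inspecting the user’s groups (your numeric IDs may differ):
- id sammy
Outputuid=1000(sammy) gid=1000(sammy) groups=1000(sammy),10(wheel)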
Now, when logged in as your regular user, you can type sudo
before commands to perform actions with superuser privileges.
Firewalls provide a baseline level of security for your server. These applications are responsible for denying traffic to every port on your server, except for those ports/services you have explicitly approved. Rocky Linux has a service called firewalld
to perform this function. A tool called firewall-cmd
is used to configure firewalld
firewall policies.
Note: If your servers are running on DigitalOcean, you can optionally use DigitalOcean Cloud Firewalls instead of firewalld
. You should use only one firewall at a time to avoid conflicting rules that may be difficult to debug.
First install firewalld
:
- dnf install firewalld -y
The default firewalld
configuration allows ssh
connections, so you can turn the firewall on immediately:
- systemctl start firewalld
Check the status of the service to make sure it started:
- systemctl status firewalld
Output● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-09-13 18:26:19 UTC; 1 day 2h ago
Docs: man:firewalld(1)
Main PID: 15060 (firewalld)
Tasks: 4 (limit: 10938)
Memory: 28.1M
CPU: 6.127s
CGroup: /system.slice/firewalld.service
└─15060 /usr/bin/python3 -s /usr/sbin/firewalld --nofork --nopid
Note that it is both active
and enabled
, meaning it will start by default if the server is rebooted.
Now that the service is up and running, you can use the firewall-cmd
utility to get and set policy information for the firewall.
First let’s list which services are already allowed:
- firewall-cmd --permanent --list-all
Outputpublic (active)
target: default
icmp-block-inversion: no
interfaces: eth0 eth1
sources:
services: cockpit dhcpv6-client ssh
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
To see the additional services that you can enable by name, type:
- firewall-cmd --get-services
To add a service that should be allowed, use the --add-service
flag:
- firewall-cmd --permanent --add-service=http
This would add the http
service and allow incoming TCP traffic to port 80
. The configuration will update after you reload the firewall:
- firewall-cmd --reload
Remember that you will have to explicitly open the firewall (with services or ports) for any additional services that you may configure later.
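If a service you need does not have a named definition, you can open a port directly instead. For example, to allow TCP traffic on port 8080 (a placeholder; substitute whatever port your application uses), you would type:
- firewall-cmd --permanent --add-port=8080/tcp
- firewall-cmd --reload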
Now that you have a regular non-root user for daily use, you need to make sure you can use it to SSH into your server.
Note: Until verifying that you can log in and use sudo
with your new user, you should stay logged in as root. This way, if you have problems, you can troubleshoot and make any necessary changes as root. If you are using a DigitalOcean Droplet and experience problems with your root SSH connection, you can log into the Droplet using the DigitalOcean Console.
The process for configuring SSH access for your new user depends on whether your server’s root account uses a password or SSH keys for authentication.
If you logged in to your root account using a password, then password authentication is enabled for SSH. You can SSH to your new user account by opening up a new terminal session and using SSH with your new username:
- ssh sammy@your_server_ip
After entering your regular user’s password, you will be logged in. Remember, if you need to run a command with administrative privileges, type sudo
before it like this:
- sudo command_to_run
You will be prompted for your regular user password when using sudo
for the first time each session (and periodically afterwards).
To enhance your server’s security, you should set up SSH keys instead of using password authentication. Follow this guide on setting up SSH keys on Rocky Linux 9 to learn how to configure key-based authentication.
If you logged in to your root account using SSH keys, then password authentication is disabled for SSH. You will need to add a copy of your public key to the new user’s ~/.ssh/authorized_keys
file to log in successfully.
Since your public key is already in the root account’s ~/.ssh/authorized_keys
file on the server, you can copy that file and directory structure to your new user account.
The most straightforward way to copy the files with the correct ownership and permissions is with the rsync
command. This will copy the root user’s .ssh
directory, preserve the permissions, and modify the file owners, all in a single command. Make sure to change the highlighted portions of the command below to match your regular user’s name:
Note: The rsync
command treats sources and destinations that end with a trailing slash differently than those without a trailing slash. When using rsync
below, be sure that the source directory (~/.ssh
) does not include a trailing slash (check to make sure you are not using ~/.ssh/
).
If you accidentally add a trailing slash to the command, rsync
will copy the contents of the root account’s ~/.ssh
directory to the sudo
user’s home directory instead of copying the entire ~/.ssh
directory structure. The files will be in the wrong location and SSH will not be able to find and use them.
- rsync --archive --chown=sammy:sammy ~/.ssh /home/sammy
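Optionally, before opening a new session, you can check that the directory was copied with the correct ownership; everything under the new user’s .ssh directory should be owned by that user rather than root:
- ls -la /home/sammy/.ssh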
Now, back in a new terminal on your local machine, open up a new SSH session with your non-root user:
- ssh sammy@your_server_ip
You should be logged in to the new user account without using a password. Remember, if you need to run a command with administrative privileges, type sudo
before it like this:
- sudo command_to_run
You will be prompted for your regular user password when using sudo
for the first time each session (and periodically afterwards).
At this point, you have a solid foundation for your server. You can install any of the software you need on your server now. For example, you can begin by installing the Nginx web server.
Screen is a console application that allows you to use multiple terminal sessions within one window. The program operates within a shell session and acts as a container and manager for other terminal sessions, similar to how a window manager manages windows.
There are many situations where creating several terminal windows is not possible or ideal. You might need to manage multiple console sessions without an X server running, you might need to access many remote cloud servers, or you might need to monitor a running program’s output while working on some other task.
There are modern, all-in-one solutions to this problem, like tmux, but screen
is the most mature of them all, and it has its own powerful syntax and features.
In this tutorial, we’ll be using Ubuntu 22.04, but outside of the installation process, everything should be the same on every modern Linux distribution.
Screen is often installed by default on Ubuntu. You can also use apt
to update your package sources and install screen
:
- sudo apt update
- sudo apt install screen
Verify that screen
has been installed by running which screen
:
- which screen
Output/usr/bin/screen
You can begin using screen
in the next step.
To start a new screen session, run the screen
command.
- screen
OutputGNU Screen version 4.09.00 (GNU) 30-Jan-22
Copyright (c) 2018-2020 Alexander Naumov, Amadeusz Slawinski
Copyright (c) 2015-2017 Juergen Weigert, Alexander Naumov, Amadeusz Slawinski
Copyright (c) 2010-2014 Juergen Weigert, Sadrul Habib Chowdhury
Copyright (c) 2008-2009 Juergen Weigert, Michael Schroeder, Micah Cowan, Sadrul Habib
Chowdhury
Copyright (c) 1993-2007 Juergen Weigert, Michael Schroeder
Copyright (c) 1987 Oliver Laumann
This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 2, or (at your option) any later version.
…
[Press Space for next page; Return to end.]
You’ll be greeted with the licensing page upon starting the program. Just press Enter to continue.
What happens next may be surprising. We are given a normal command prompt and it looks like nothing has happened. Did screen
fail to run correctly? Let’s try a quick keyboard shortcut to find out. Press Ctrl-a
, followed by v
:
Outputscreen 4.09.00 (GNU) 30-Jan-22
We’ve just requested the version information from screen, and we’ve received some feedback that allows us to verify that screen is indeed running correctly.
Now is a great time to introduce the way that we will be controlling screen
. Screen is mainly controlled through keyboard shortcuts. Every keyboard shortcut for screen
is prefaced with Ctrl-a
(hold the control key while pressing the “a” key). That sequence of keystrokes tells screen
that it needs to pay attention to the next keys we press.
You’ve already used this paradigm once when we requested the version information about screen
. Let’s use it to get some more useful information, by entering Ctrl-a ?
:
Output Screen key bindings, page 1 of 2.
Command key: ^A Literal ^A: a
break ^B b license , removebuf =
clear C lockscreen ^X x reset Z
colon : log H screen ^C c
copy ^[ [ login L select '
detach ^D d meta a silence _
digraph ^V monitor M split S
displays * next ^@ ^N sp n suspend ^Z z
dumptermcap . number N time ^T t
fit F only Q title A
flow ^F f other ^A vbell ^G
focus ^I pow_break B version v
hardcopy h pow_detach D width W
help ? prev ^H ^P p ^? windows ^W w
history { } quit \ wrap ^R r
info i readbuf < writebuf >
kill K k redisplay ^L l xoff ^S s
lastmsg ^M m remove X xon ^Q q
[Press Space for next page; Return to end.]
This is screen’s internal help screen, listing its keyboard shortcuts. You’ll probably want to memorize how to get here, because it’s an excellent quick reference. As you can see at the bottom, you can press Space to see more commands.
Okay, let’s try something more fun. Let’s run a program called top
in this window, which will show us some information on our processes.
- top
Outputtop - 16:08:07 up 1:44, 1 user, load average: 0.00, 0.01, 0.05
Tasks: 58 total, 1 running, 57 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 507620k total, 262920k used, 244700k free, 8720k buffers
Swap: 0k total, 0k used, 0k free, 224584k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 3384 1836 1288 S 0.0 0.4 0:00.70 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.11 ksoftirqd/0
5 root 20 0 0 0 0 S 0.0 0.0 0:00.12 kworker/u:0
6 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
7 root RT 0 0 0 0 S 0.0 0.0 0:00.07 watchdog/0
8 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 cpuset
9 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 khelper
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kdevtmpfs
11 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 netns
12 root 20 0 0 0 0 S 0.0 0.0 0:00.03 sync_supers
13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 bdi-default
14 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kintegrityd
15 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kblockd
16 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 ata_sff
17 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khubd
18 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 md
Okay, we are now monitoring the processes on our VPS. But what if we need to run some commands to find out more information about the programs we see? We don’t need to exit out of “top”. We can create a new window to run these commands.
The Ctrl-a c
sequence creates a new window for us. We can now run whatever commands we want without disrupting the monitoring we were doing in the other window.
Where did that other window go? We can get back to it using Ctrl-a n
.
This sequence goes to the next window that we are running. The list of windows wraps around, so when there aren’t any windows beyond the current one, it switches us back to the first window.
Ctrl-a p
changes the current window in the opposite direction. So if you have three windows and are currently on the third, this command will switch you to the second window.
A helpful shortcut to use when you’re flipping between the same two windows is Ctrl-a Ctrl-a
.
This sequence moves you to your most recently visited window. So in the previous example, this would move you back to your third window.
At this point, you might be wondering how you can keep track of all of the windows that we are creating. Thankfully, screen comes with a number of different ways of managing your different sessions. First, we’ll create three new windows for a total of four windows and then we’ll try out Ctrl-a w
, one of Screen’s window management tools. Enter Ctrl-a c Ctrl-a c Ctrl-a c Ctrl-a w
:
Output0$ bash 1$ bash 2-$ bash 3*$ bash
We get some useful information from this command: a list of our current windows. Here, we have four windows. Each window has a number and the windows are numbered starting at “0”. The current window has an asterisk next to the number.
So you can see that we’re currently at window #3 (actually the fourth window because the first window is 0). You can get back to window #1 quickly by using Ctrl-a 1
.
We can use the index number to jump straight to the window we want. Let’s see our window list again using Ctrl-a w
:
Output0$ bash 1*$ bash 2$ bash 3-$ bash
As you can see, the asterisk tells us that we’re now on window #1. Let’s try a different way of switching windows with Ctrl-a "
:
OutputNum Name Flags
0 bash $
1 bash $
2 bash $
3 bash $
We get an actual navigation menu this time. You can navigate with the up and down arrow keys. Switch to a window by pressing Enter.
This is pretty useful, but right now all of our windows are named “bash”. That’s not very helpful. Let’s name some of our sessions. Switch to a window you want to name, for example with Ctrl-a 0
and then use Ctrl-a A
:
OutputSet window's title to: bash
Using the Ctrl-a A
sequence, we can name our sessions. You can now backspace over “bash” and then rename it whatever you’d like. We’re going to run top
on window #0 again, so we’re going to name it monitoring
.
Verify the result with Ctrl-a "
:
OutputNum Name Flags
0 monitoring $
1 bash $
2 bash $
3 bash $
Now we have a more helpful label for window #0. So we know how to create and name windows, but how do we get rid of them when we don’t need them anymore? We use the Ctrl-a k
sequence, which stands for “kill”.
OutputReally kill this window [y/n]
When you want to quit screen
and kill all of your windows, you can use Ctrl-a \
.
OutputReally quit and kill all your windows [y/n]
This will destroy our screen session. We will lose any windows we have created and any unfinished work.
But we want to explore one of the huge benefits of using “screen”. We don’t want to destroy the session, we want to detach it. Detaching allows our programs in the screen instance to continue to run, but it gives us access back to our base-console session (the one where we started “screen” from initially). The screen session is still there, it will just be managed in the background. Use Ctrl-a d
to detach.
Output[detached from 1835.pts-0.Blank]
So our session is now detached. How do we get back into it?
- screen -r
The -r
flag stands for reattach. We are now back in our screen session. What if we have multiple screen sessions though? What if we had started a screen session and detached it, and then started a new screen session and detached that as well?
Try running screen
, then detaching with Ctrl-a d
, then running screen
again, then detaching with Ctrl-a d
again.
How do we tell screen
which session to attach?
- screen -ls
OutputThere are screens on:
2171.pts-0.Blank (07/01/2013 05:00:39 PM) (Detached)
1835.pts-0.Blank (07/01/2013 03:50:43 PM) (Detached)
2 Sockets in /var/run/screen/S-justin.
Now we have a list of our sessions. We can reattach the second one by typing its id number after the -r
flag.
- screen -r 1835
What if you want to attach a session on two separate computers or terminal windows? You can use the -x
flag, which lets you share the session.
- screen -x
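Tip: numeric IDs like the ones above can be hard to keep track of. Although this tutorial uses unnamed sessions, screen also accepts a session name at creation time via the -S flag, and that name can be used with -r in place of the ID (monitoring is just an example name):
- screen -S monitoring
- screen -r monitoring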
There are a number of commands that help you manage the terminal sessions you run within screen
.
To copy text, you can use Ctrl-a [
.
This will give you a cursor that you can move with the arrow keys or with the vi-style h, j, k, and l keys. Move to where you want to start copying, and press Enter. Move to the end of where you’d like to copy and press Enter again. The text is then copied to screen’s internal buffer.
One thing to be aware of is that this is also Screen’s mechanism for scrolling. If you need to see some text that is off the screen, you can hit Ctrl-a [
and then scroll up off of the screen.
We can paste text that we copied with Ctrl-a ]
.
Another thing you might want to do is monitor programs that are executing in another screen window.
Let’s say that you’re compiling something in one window and you want to know when it’s completed. You can ask screen
to monitor that window for silence with Ctrl-a _
, which will tell you when no output has been generated for 30 seconds.
Let’s try it with another example. Let’s have screen
tell us when our window is finished pinging Google 4 times.
- ping -c 4 www.google.com
Then input Ctrl-a _
:
OutputThe window is now being monitored for 30 sec. Silence.
Now we can do work in another window and be alerted when the task in this window is complete by entering Ctrl-a 1
:
OutputWindow 2: silence for 30 seconds
We can also do the opposite and be alerted when there is activity happening on a specific window.
- sleep 20 && echo "output"
Then, enter Ctrl-a M
.
OutputWindow 2 (bash) is now being monitored for all activity.
We will now be alerted when the command produces output. To see results, use Ctrl-a 1
:
OutputActivity in window 2
Let’s say we are going to be doing some important changes and we want to have a log of all of the commands we run. We can log the session with Ctrl-a H
.
OutputCreating logfile "screenlog.1".
If we need to see multiple windows at once, we can use something that screen calls “regions”. We create more regions by splitting the current region. To split the current region horizontally, we can use Ctrl-a S
.
This will move our current window to the top half of the screen and open a new blank region below it. We can get to the lower screen with Ctrl-a [tab]
.
We can then either create a new window in the bottom region or change the view to a different window in the normal way.
If we want to kill the current region, we can use Ctrl-a X
.
That destroys the region without destroying the actual window. This means that if you were running a program in that region, you can still access it as a normal window; only the view into that window was destroyed.
If we want to make a vertical split, we can use Ctrl-a |
instead.
The controls for vertical splits are the same as horizontal splits. If we’ve added a few different regions and want to go back to a single region, we can use Ctrl-a Q
, which destroys all regions but the current one.
A great enhancement for screen
is a program called byobu
. It acts as a wrapper for screen
and provides an enhanced user experience. On Ubuntu, you can install it with:
- sudo apt install byobu
Before we begin, we need to tell byobu
to use screen
as a backend. We can do this with the following command:
- byobu-select-backend
OutputSelect the byobu backend:
1. tmux
2. screen
Choose 1-2 [1]:
We can choose screen
here to set it as the default terminal manager.
Now, instead of typing screen
to start a session, you can type byobu
.
- byobu
When you type Ctrl-a
for the first time, you’ll have to tell byobu to recognize that as a screen command.
OutputConfigure Byobu's ctrl-a behavior...
When you press ctrl-a in Byobu, do you want it to operate in:
(1) Screen mode (GNU Screen's default escape sequence)
(2) Emacs mode (go to beginning of line)
Note that:
- F12 also operates as an escape in Byobu
- You can press F9 and choose your escape character
- You can run 'byobu-ctrl-a' at any time to change your selection
Select [1 or 2]:
Select 1
to use byobu as normal.
The interface gives you a lot of useful information, such as a window list and system information. On Ubuntu, it even tells you how many packages have security updates with a number followed by an exclamation point on a red background.
One thing that is different between using byobu and screen is the way that byobu actually manages sessions. If you run byobu
again once you’re detached, it will reattach your previous session instead of creating a new one.
To create a new session, use byobu -S
:
- byobu -S sessionname
Change sessionname
to whatever you’d like to call your new session. You can see a list of current sessions with:
- byobu -ls
OutputThere are screens on:
22961.new (07/01/2013 06:42:52 PM) (Detached)
22281.byobu (07/01/2013 06:37:18 PM) (Detached)
2 Sockets in /var/run/screen/S-root.
And if there are multiple sessions, when you run byobu
, you will be presented with a menu to choose which session you want to connect to.
- byobu
OutputByobu sessions...
1. screen: 22961.new (07/01/2013 06:42:52 PM) (Detached)
2. screen: 22281.byobu (07/01/2013 06:37:18 PM) (Detached)
3. Create a new Byobu session (screen)
4. Run a shell without Byobu (/bin/bash)
Choose 1-4 [1]:
You can select any of the current sessions, create a new byobu session, or even get a new shell without using byobu.
One option that might be useful on a cloud server you manage remotely is to have byobu start up automatically whenever you log into your session. That means that if you are ever disconnected from your session, your work won’t be lost, and you can reconnect to get right back to where you were before.
To enable byobu to automatically start with every login, type this into the terminal:
- byobu-enable
OutputThe Byobu window manager will be launched automatically at each text login.
To disable this behavior later, just run:
byobu-disable
Press <enter> to continue…
As it says, if you ever want to turn this feature off again, type:
- byobu-disable
It will no longer start automatically.
In this tutorial, you installed and used screen
and then byobu
to manage terminal sessions. You learned several different shortcuts for detaching and switching between multiple running environments on the fly. Like many mature Unix terminal interfaces, Screen can be idiosyncratic, but it is also powerful and ubiquitous – you never know when it may come in handy.
Ordinarily, you connect to an SSH server using a command line app in a terminal, or terminal emulator software that includes an SSH client. Some tools, like Python’s WebSSH, make it possible to connect over SSH and run a terminal directly in your web browser.
This can be useful in a number of situations. It is particularly helpful for giving live presentations or demos, when it would be challenging to share a regular terminal window in a way that makes visual sense. It can also be helpful in educational settings when granting access to command line novices, as it avoids them needing to install software on their machines (especially on Windows, where the default options come with caveats). Finally, Python’s WebSSH in particular is very portable and requires no dependencies other than Python to get up and running. Other web-based terminal stacks can be much more complicated, and specific to Linux.
In this tutorial, you will set up WebSSH and connect over SSH in your browser. You will then optionally secure it with an SSL certificate and run it behind an Nginx reverse proxy for a production deployment.
A Windows, Mac, or Linux environment with a running SSH service. It can be useful to run WebSSH locally, but if you don’t have an SSH service running on a local machine, you can use a remote Linux server by following our initial server setup guide for Ubuntu 22.04.
The Python programming language installed along with pip
, its package manager. You can install Python and pip
on Ubuntu by following Step 1 of this tutorial.
Optionally, to enable HTTPS in the browser, you will need SSL certificates and your own domain name. You can obtain them by following How To Use Certbot Standalone Mode to Retrieve Let’s Encrypt SSL Certificates. If you do not have your own domain name, you can use an IP address for the first two steps of this tutorial.
If you’ve already installed Python and pip, you should be able to install Python packages from PyPI, the Python software repository. WebSSH is designed to be installed and run directly from the command line, so you won’t need to set up another virtual environment as discussed in How To Install Python 3 and Set Up a Programming Environment. Virtual environments are more useful when working on your own projects, not when installing system-wide tools.
Use pip install
to install the WebSSH package:
- sudo pip3 install webssh
Output…
Successfully built webssh
Installing collected packages: tornado, pycparser, cffi, pynacl, paramiko, webssh
Successfully installed cffi-1.15.1 paramiko-2.11.0 pycparser-2.21 pynacl-1.5.0 tornado-6.2 webssh-1.6.0
Installing via sudo
will install the wssh
command globally. You can verify this by using which wssh
:
- which wssh
Output/usr/local/bin/wssh
You’ve now installed WebSSH. In the next step, you’ll run and connect to it. First, though, you’ll need to add a firewall rule. WebSSH runs on port 8888 by default. If you are using ufw
as a firewall, allow
that port through ufw
:
- sudo ufw allow 8888
You’ll revisit that firewall rule later in this tutorial.
If you’re running WebSSH on a local machine, you can run wssh
by itself with no additional arguments to start. If you’re running WebSSH on a remote server, you’ll need to add the --fbidhttp=False
flag to allow remote connections over regular HTTP. This is not secure if you are connecting over an unprotected network, but it is useful for a demo, and you will secure WebSSH later in this tutorial.
- wssh --fbidhttp=False
Now you can connect to WebSSH and log in. Navigate to your_domain:8888
in a web browser (using localhost
in place of your_domain if you are running locally). You will see the WebSSH login page:
Provide your regular SSH credentials. If you followed DigitalOcean’s initial server setup guide, you will be using key-based authentication rather than a password. That means you should only need to specify the Hostname
of the server you’re connecting to, your Username
for the server, and your SSH key, which should be located in the .ssh/
folder within your local home directory (usually named id_rsa
).
Note: As you might guess from having to manually specify a Hostname, WebSSH can also be used to connect to servers other than the one it is running on. For this tutorial, it is being run on the same server you are connecting to.
Click the Connect button, and you should be greeted with your default terminal welcome prompt:
At this point, you can use your terminal as normal, exactly as if you’d connected over SSH. Multiple users can also connect through the same WebSSH instance simultaneously. If you are running WebSSH on a local machine solely for the purpose of streaming or capturing video, this may be all you need. You can enter Ctrl+C
in the terminal that you launched WebSSH from (not the WebSSH terminal) to stop the WebSSH server when finished.
If you are running on a remote server, you will not want to use WebSSH in production behind an unsecured HTTP connection. Although you would still be protected by your SSH service’s authentication mechanism, using an SSH connection over HTTP poses a significant security risk, and will likely allow others to steal your SSH credentials. In the next steps, you’ll secure your WebSSH instance so that it is no less safe than a regular SSH connection.
To complete this step, you should have already obtained your own domain name and SSL certificates. One way to do that is by using LetsEncrypt in standalone mode.
When LetsEncrypt retrieves certificates, by default, it stores them in /etc/letsencrypt/live/your_domain
. Check to make sure that you have them:
- sudo ls /etc/letsencrypt/live/your_domain
OutputREADME cert.pem chain.pem fullchain.pem privkey.pem
To run WebSSH with HTTPS support, you’ll need to provide a path to a cert, and a path to a key. These are fullchain.pem
and privkey.pem
. By default, WebSSH provides HTTPS access on port 4433, so open that port in your firewall as well:
- sudo ufw allow 4433
Then, run WebSSH with the paths to your cert and your key:
- sudo wssh --certfile='/etc/letsencrypt/live/your_domain/fullchain.pem' --keyfile='/etc/letsencrypt/live/your_domain/privkey.pem'
In a web browser, navigate to https://your_domain:4433
, and you should see the same interface as in the prior step, now with HTTPS support. This is now enough setup for a safe production configuration. However, you’re still running the wssh
app directly from your terminal, and you’re accessing it in the browser from an unusual port. In the last two steps of this tutorial, you will remove both of these limitations.
Putting a web server such as Nginx in front of other web-facing applications can improve performance and make it much more straightforward to secure a site. You’ll install Nginx and configure it to reverse proxy requests to WebSSH, meaning it will take care of handling requests from your users to WebSSH and back again.
Refresh your package list, then install Nginx using apt
:
- sudo apt update
- sudo apt install nginx
If you are using a ufw
firewall, you should make some changes to your firewall configuration at this point, to enable access to the default HTTP/HTTPS ports, 80 and 443. ufw
has a stock configuration called “Nginx Full” which provides access to both of these ports:
- sudo ufw allow "Nginx Full"
Nginx allows you to add per-site configurations to individual files in a subdirectory called sites-available/
. Using nano
or your favorite text editor, create a new Nginx configuration at /etc/nginx/sites-available/webssh
:
- sudo nano /etc/nginx/sites-available/webssh
Paste the following into the new configuration file, being sure to replace your_domain
with your domain name.
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name your_domain www.your_domain;
root /var/www/html;
access_log /var/log/nginx/webssh.access.log;
error_log /var/log/nginx/webssh.error.log;
location / {
proxy_pass http://127.0.0.1:8888;
proxy_http_version 1.1;
proxy_read_timeout 300;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Real-PORT $remote_port;
}
listen 443 ssl;
# RSA certificate
ssl_certificate /etc/letsencrypt/live/your_domain/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/your_domain/privkey.pem;
# Redirect non-https traffic to https
if ($scheme != "https") {
return 301 https://$host$request_uri;
}
}
You can read this configuration as having three main “blocks” to it. The first block, coming before the location /
line, contains a boilerplate Nginx configuration for serving a website on the default HTTP port, 80. The location /
block contains a configuration for proxying incoming connections to WebSSH, running on port 8888 internally, while preserving SSL. The configuration at the end of the file, after the location /
block, loads your LetsEncrypt SSL keypairs and redirects HTTP connections to HTTPS.
Save and close the file. If you are using nano
, press Ctrl+X
, then when prompted, Y
and then Enter.
Next, you’ll need to activate this new configuration. Nginx’s convention is to create symbolic links (like shortcuts) from files in sites-available/
to another folder called sites-enabled/
as you decide to enable or disable them. Using full paths for clarity, make that link:
- sudo ln -s /etc/nginx/sites-available/webssh /etc/nginx/sites-enabled/webssh
By default, Nginx includes another configuration file at /etc/nginx/sites-available/default
, linked to /etc/nginx/sites-enabled/default
, which also serves its default index page. You’ll want to disable that rule by removing it from /sites-enabled
, because it conflicts with your new WebSSH configuration:
- sudo rm /etc/nginx/sites-enabled/default
Note: The Nginx configuration in this tutorial is designed to serve a single application, WebSSH. You could expand this Nginx configuration to serve multiple applications on the same server by following the Nginx documentation.
Next, run nginx -t
to verify your configuration before restarting Nginx:
- sudo nginx -t
Outputnginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Now you can restart your Nginx service, so it will reflect your new configuration:
- sudo systemctl restart nginx
Finally, you can remove the firewall rules you created earlier for accessing WebSSH directly, since all traffic will now be handled by Nginx over the standard HTTP/HTTPS ports:
- sudo ufw delete allow 8888
- sudo ufw delete allow 4433
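If you’d like to confirm the cleanup, you can list the rules that remain active:
- sudo ufw status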
Restart webssh
on the command line:
- wssh
You do not need to provide the cert and key paths this time, since Nginx is handling that. Then navigate to your_domain in a web browser.
Notice that WebSSH is now being served over HTTPS via Nginx without needing to specify a port. At this point, you have automated everything except launching wssh
itself. You will do that in the final step.
Deploying server-side applications that do not automatically run in the background can be unintuitive at first, since you’d need to start them directly from the command line every time. The solution to this is to set up your own background service.
To do this, you’ll create a unit file that can be used by your server’s init system. On nearly all modern Linux distros, the init system is called Systemd, and you can interact with it by using the systemctl
command.
If WebSSH is still running in your terminal, press Ctrl+C
to stop it. Then, using nano
or your favorite text editor, open a new file called /etc/systemd/system/webssh.service
:
- sudo nano /etc/systemd/system/webssh.service
Your unit file needs, at minimum, a [Unit]
section, a [Service]
section, and an [Install]
section:
[Unit]
Description=WebSSH terminal interface
After=network.target
[Service]
User=www-data
Group=www-data
ExecStart=wssh
[Install]
WantedBy=multi-user.target
This file can be broken down as follows:
The [Unit]
section contains a plaintext description of your new service, as well as an After
hook that specifies when it should be run at system startup, in this case after your server’s networking interfaces have come up.
The [Service]
section specifies which command should actually be run, as well as which user should be running it. In this case, www-data
is the default Nginx user on an Ubuntu server, and wssh
is the command itself.
The [Install]
section contains only the WantedBy=multi-user.target
line, which works together with the After
line in the [Unit]
section to ensure that the service is started when the server is ready to accept user logins.
Save and close the file. You can now start
your new WebSSH service, and enable
it to run on boot automatically:
- sudo systemctl start webssh
- sudo systemctl enable webssh
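One note for later: if you edit the unit file again after systemd has loaded it, you’ll need to tell systemd to re-read its unit definitions before restarting the service:
- sudo systemctl daemon-reload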
Use systemctl status webssh
to verify that it started successfully. You should receive similar output to when you first ran the command in a terminal.
- sudo systemctl status webssh
Output● webssh.service - WebSSH terminal interface
Loaded: loaded (/etc/systemd/system/webssh.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2022-08-11 22:08:25 UTC; 2s ago
Main PID: 15678 (wssh)
Tasks: 1 (limit: 1119)
Memory: 20.2M
CPU: 300ms
CGroup: /system.slice/webssh.service
└─15678 /usr/bin/python3 /usr/local/bin/wssh
Aug 11 22:08:25 webssh22 systemd[1]: Started WebSSH terminal interface.
Aug 11 22:08:26 webssh22 wssh[15678]: [I 220811 22:08:26 settings:125] WarningPolicy
Aug 11 22:08:26 webssh22 wssh[15678]: [I 220811 22:08:26 main:38] Listening on :8888 (http)
You can now reload https://your_domain
in your browser, and you should once again get the WebSSH interface. From now on, WebSSH and Nginx will automatically restart with your server and run in the background.
In this tutorial, you installed WebSSH, a portable solution for providing a command line interface in a web browser. You improved your deployment by adding SSL, then by adding an Nginx reverse proxy, and finally by creating a system service for WebSSH. This is a good model for deploying small server-side web applications in general, and particularly important for SSH, which relies on key pairs for security.
Next, you may want to learn about other connection options for SSH.
I am building a project for an experiment that checks the location of devices, going from Windows OS to Kali Linux. Is there any way of doing this?
Help me out, if anyone can.
apt-key
is a utility used to manage the keys that APT uses to authenticate packages. It’s closely related to the add-apt-repository
utility, which adds external repositories using keyservers to an APT installation’s list of trusted sources. However, keys added using apt-key
and add-apt-repository
are trusted globally by apt
. These keys are not limited to authorizing the single repository they were intended for. Any key added in this manner can be used to authorize the addition of any other external repository, presenting an important security concern.
Starting with Ubuntu 20.10, the use of apt-key
yields a warning that the tool will be deprecated in the near future; likewise, add-apt-repository
will also soon be deprecated. While these deprecation warnings do not strictly prevent the usage of apt-key
and add-apt-repository
with Ubuntu 22.04, it is not advisable to ignore them.
The current best practice is to use gpg
in place of apt-key
and add-apt-repository
, and in future versions of Ubuntu it will be the only option. apt-key
and add-apt-repository
themselves have always acted as wrappers, calling gpg
in the background. Using gpg
directly cuts out the intermediary. For this reason, the gpg
method is backwards compatible with older versions of Ubuntu and can be used as a drop-in replacement for apt-key
.
This tutorial will outline two procedures that use alternatives to apt-key
and add-apt-repository
, respectively. First will be adding an external repository using a public key with gpg
instead of using apt-key
. Second, as an addendum, this tutorial will cover adding an external repository using a keyserver with gpg
as an alternative to using add-apt-repository
.
To complete this tutorial, you will need an Ubuntu 22.04 server. Be sure to set this up according to our initial server setup guide for Ubuntu 22.04, with a non-root user with sudo
privileges and a firewall enabled.
PGP, or Pretty Good Privacy, is a proprietary encryption program used for signing, encrypting, and decrypting files and directories. PGP files are public key files, which are used in this process to authenticate repositories as valid sources within apt
. GPG, or GNU Privacy Guard, is an open-source alternative to PGP. GPG files are usually keyrings, which are files that hold multiple keys. Both of these file types are commonly used to sign and encrypt files.
gpg
is GPG’s command line tool that can be used to authorize external repositories for use with apt
. However, gpg
only accepts GPG files. In order to use this command line tool with PGP files, you must convert them.
Elasticsearch presents a common scenario for key conversion, and will be used as the example for this section. You’ll download a key formatted for PGP and convert it into an apt
compatible format with a .gpg
file extension. You’ll do this by running the gpg
command with the --dearmor
flag. Next, you’ll add the repository link to the list of package sources, while attaching a direct reference to your converted key. Finally, you will verify this process by installing the Elasticsearch package.
Projects that require adding repositories with key verification will always provide you with a public key and a repository URI representing its exact location. For our Elasticsearch example, the documentation gives these components on their installation page.
Here are the components given for Elasticsearch:
https://artifacts.elastic.co/GPG-KEY-elasticsearch
https://artifacts.elastic.co/packages/7.x/apt stable main
Next, you have to determine whether you are given a PGP or GPG file to work with. You can inspect the key file by opening the URL with curl
:
- curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch
This will output the contents of the key file, which starts with the following:
Output-----BEGIN PGP PUBLIC KEY BLOCK-----
. . .
Despite having GPG
in the URL, the first line indicates that this is actually a PGP key file. Take note of this, because apt
only accepts the GPG format. Originally, apt-key
detected PGP files and converted them into GPG format automatically by calling gpg
in the background. Step 2 will cover both manual conversion from PGP to GPG, and also what to do when conversion is not needed.
Downloading the Key and Converting It Into an apt Compatible File Type
With the gpg
method, you must always download the key before adding it to the list of package sources. Previously, with apt-key
, this ordering was not always enforced. Now, you are required to reference the path to the downloaded key file in your sources list, so if you have not downloaded the key first, you cannot reference an existing path.
With Elasticsearch you are working with a PGP file, so you will convert it to a GPG file format after download. The following example uses curl
to download the key, with the download being piped into a gpg
command. gpg
is called with the --dearmor
flag to convert the PGP key into a GPG file format, with -o
used to indicate the file output.
On Ubuntu, the /usr/share/keyrings
directory is the recommended location for your converted GPG files, as it is the default location where Ubuntu stores its keyrings. The file is named elastic-7.x.gpg
in this example, but any name works:
- curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic-7.x.gpg
This converts the PGP file into the correct GPG format, making it ready to be added to the list of sources for apt
.
Note: If the downloaded file was already in a GPG format, you could instead download the file straight to /usr/share/keyrings
without converting it using a command like the following example:
- curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo tee /usr/share/keyrings/elastic-7.x.gpg
In this case, the curl
command’s output would be piped into tee
to save the file in the correct location.
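If you are unsure which format a local key file is in, the standard file utility offers a quick check. This is an optional verification step, and the exact wording of the output varies between versions of file:
- file /usr/share/keyrings/elastic-7.x.gpg
An ASCII-armored PGP file is typically reported as a PGP public key block, while a dearmored binary file is reported as a keyring or OpenPGP public key.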
With the key downloaded and in the correct GPG file format, you can add the repository to the apt
package sources while explicitly linking it to the key you obtained. There are three methods to achieve this, all of which are related to how apt
finds sources. apt
pulls sources from a central sources.list
file, .list
files in the sources.list.d
directory, and .sources
files in the sources.list.d
directory. Though there is no functional difference between the three, it is recommended that you consider all three options and choose the method that best fits your needs.
Option 1: Adding to sources.list Directly
The first method involves inserting a line representing the source directly into /etc/apt/sources.list
, the primary file containing apt
sources. There are multiple sources in this file, including the default sources that come with Ubuntu. It is perfectly acceptable to edit this file directly, though Option 2 and Option 3 will present a more modular solution that can be easier to edit and maintain.
Open /etc/apt/sources.list
with nano
or your preferred text editor:
- sudo nano /etc/apt/sources.list
Then add the external repository to the bottom of the file:
. . .
deb [arch=amd64,arm64 signed-by=/usr/share/keyrings/elastic-7.x.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main
This line contains the following information about the source:
deb: This specifies that the source uses a regular Debian architecture.
arch=amd64,arm64: This specifies the architectures the APT data will be downloaded for. Here it is amd64 and arm64.
signed-by=/usr/share/keyrings/elastic-7.x.gpg: This specifies the key used to authorize this source, and here it points towards your .gpg file stored in /usr/share/keyrings. This portion of the line must be included, while it previously wasn’t required in the apt-key method. This addition is the most critical change in porting away from apt-key, since it ties the key to the single repository it is allowed to authorize, fixing the original security flaw in apt-key.
https://artifacts.elastic.co/packages/7.x/apt stable main: This is the URI representing the exact location where the data within the repository can be found.
Save and exit by pressing CTRL+O, then CTRL+X.
Option 2: Creating a .list File in sources.list.d
With this option, you will instead create a new file in the sources.list.d
directory. apt
parses both this directory and sources.list
for repository additions. This method allows you to physically isolate repository additions within separate files. If you ever need to later remove this addition or make edits, you can delete this file instead of editing the central sources.list
file. Keeping your additions separate makes them easier to maintain, and editing sources.list
directly is more error prone, since a mistake there can affect other repositories in the file.
To do this, pipe an echo
command into a tee
command to create this new file and insert the appropriate line. The file is named elastic-7.x.list
in the following example, but any name works as long as it is a unique filename in the directory:
- echo "deb [arch=amd64,arm64 signed-by=/usr/share/keyrings/elastic-7.x.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list > /dev/null
This command is identical to manually creating the file and inserting the appropriate line of text. Here, /etc/apt/sources.list.d/elastic-7.x.list is the location and name of the new file to be created, and the > /dev/null redirection omits the terminal output of tee, which is not needed.
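If you’d like to confirm the result, you can print the new file back out; it should contain exactly the line passed to echo:
- cat /etc/apt/sources.list.d/elastic-7.x.list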
Option 3: Creating a .sources File in sources.list.d
The third method writes to a .sources
file instead of a .list
file. This method is relatively new, and uses the deb822
multiline format that is less ambiguous compared to the deb . . .
declaration, though it is functionally identical. Create a new file:
- sudo nano /etc/apt/sources.list.d/elastic-7.x.sources
Then add the external repository using the deb822
format:
Types: deb
Architectures: amd64 arm64
Signed-By: /usr/share/keyrings/elastic-7.x.gpg
URIs: https://artifacts.elastic.co/packages/7.x/apt
Suites: stable
Components: main
Save and exit after you’ve inserted the text.
This is analogous to the one-line format, and doing a line-by-line comparison shows that the information in both is identical, just organized differently. One thing to note is that this format doesn’t use commas when there are multiple arguments (such as with amd64,arm64
), and instead uses spaces.
Next you will verify this process by doing a test installation.
You must call apt update
in order to prompt apt
to look through the main sources.list
file, and all the .list
and .sources
files in sources.list.d
. Calling apt install
without an update first can cause a failed install, or the installation of an out-of-date default package from apt
.
Update your repositories:
- sudo apt update
Then install your package:
- sudo apt install elasticsearch
Nothing changes in this step compared to the apt-key
method. Once this command finishes, you will have completed the installation.
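As an optional check, you can also confirm that the installation candidate is coming from the Elastic repository rather than a default source by inspecting the package’s policy:
- apt-cache policy elasticsearch
The output should list https://artifacts.elastic.co/packages/7.x/apt as the origin of the candidate version.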
This section will briefly go over using gpg
with a keyserver instead of a public key to add an external repository. The process is nearly identical to the public key method, with the difference being how gpg
is called.
add-apt-repository
is the keyserver based counterpart to apt-key
, and both are up for deprecation. This scenario uses different components. Instead of a key and repository, you are given a keyserver URL and key ID. In this case, you can download from the keyserver directly into the appropriate .gpg
format without having to convert anything. Because add-apt-repository
will soon be deprecated, you will instead use gpg
to download to a file while overriding the default gpg
behavior of importing to an existing keyring.
Using the open-source programming language R as an example, here are the given components, which can also be found in the installation instructions on the official project site:
keyserver.ubuntu.com
E298A3A825C0D65DFD57CBB651716619E084DAB9
https://cloud.r-project.org/bin/linux/ubuntu jammy-cran40/
First, download from the keyserver directly using gpg
. Be aware that depending on download traffic, this download command may take a while to complete:
- sudo gpg --homedir /tmp --no-default-keyring --keyring /usr/share/keyrings/R.gpg --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9
This command includes the following flags, which are different from using gpg
with a public key:
--no-default-keyring combined with --keyring allows outputting to a new file instead of importing into an existing keyring, which is the default behavior of gpg in this scenario.
--keyserver combined with --recv-keys provides the specific key and the location you’re downloading it from.
--homedir is used to override gpg’s default location for creating temporary files. gpg needs to create these files to complete the command; otherwise, it would attempt to write to /root, which causes a permission error. Instead, this command places the temporary files in the appropriate /tmp directory.
Next, add the repository to a .list file. This is done in the exact same manner as adding an external repository using a public key, by piping an echo command into a tee command:
- echo "deb [arch=amd64 signed-by=/usr/share/keyrings/R.gpg] https://cloud.r-project.org/bin/linux/ubuntu jammy-cran40/" | sudo tee /etc/apt/sources.list.d/R.list > /dev/null
Next, update your list of repositories:
- sudo apt update
Then you can install the package:
- sudo apt install r-base
Using gpg
to add external repositories is similar between public keys and keyservers, with the difference being how you call gpg
.
Adding an external repository using a public key or a keyserver can be done through gpg
, without using apt-key
or add-apt-repository
as an intermediary. Use this method to ensure your process does not become obsolete in future Ubuntu versions, as apt-key
and add-apt-repository
are deprecated and will be removed in a future version. Adding external repositories using gpg
ensures that a key will only be used to authorize a single repository as you intend.
I have been trying to deploy a Django website on an Ubuntu instance. I have successfully set up an Apache configuration file, and then removed the secret token from the settings.py file in the Django project, putting it somewhere safe. When I tried to check if my website was working, it gave me this error in the Apache error.log file:
I have checked the configuration file to make sure that the path was the correct one. I am sharing a screenshot of that Apache configuration as well for reference:
I have tried running chown and chmod on both the venv in that directory and also on the main directory. I also ran ls -la to see if the permissions are correct, as shown in this screenshot:
Unfortunately, as I am still a beginner, I couldn’t figure out what the next steps should be. I did try looking at Stack Overflow and different articles, but a lot of them are not very specific to the issue I am facing.
Your help and guidance would be much appreciated, thanks so much in advance.
After installing a command-line program, you may only be able to run it in the same directory as the program. You can run a command-line program from any directory with the help of an environment variable called PATH
.
The PATH
variable contains a list of directories the system checks before running a command. Updating the PATH
variable will enable you to run any executables found in the directories mentioned in PATH
from any directory without typing the absolute file path.
For example, instead of typing the following to run a Python program:
- /usr/bin/python3
Because the /usr/bin
directory is included in the PATH
variable, you can type this instead:
- python3
The directories are listed in priority order, so the ones that will be checked first are mentioned first.
In this tutorial, you will view the PATH
variable and update its value.
For an overview of environment variables, refer to the How To Read and Set Environmental and Shell Variables on Linux article.
Viewing the PATH Variable
You can view the PATH
variable with the following command:
- echo $PATH
An unchanged PATH
may look something like this (file paths may differ slightly depending on your system):
Output/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games
Some directories are mentioned by default, and each directory in PATH
is separated with a colon :
. The system checks these directories from left to right when running a program.
When a command-line program is not installed in any of the mentioned directories, you may need to add the directory of that program to PATH
.
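To check which directory a given command currently resolves to, you can use the shell’s type builtin, shown here with python3 from the earlier example:
- type python3
On a typical system this prints something like python3 is /usr/bin/python3, reflecting the first match found in PATH.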
Setting the PATH Environment Variable
A directory can be added to PATH
in two ways: at the start or the end of a path.
Adding a directory (/the/file/path
for example) to the start of PATH
will mean it is checked first:
- export PATH=/the/file/path:$PATH
Adding a directory to the end of PATH
means it will be checked after all other directories:
- export PATH=$PATH:/the/file/path
Multiple directories can be added to PATH
at once by adding a colon :
between the directories:
- export PATH=$PATH:/the/file/path:/the/file/path2
Once the export
command is executed, you can view the PATH
variable to see the changes:
- export PATH=$PATH:/the/file/path
- echo $PATH
You will see an output like this:
Output/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games:/the/file/path
This method will only work for the current shell session. Once you exit the current session and start a new one, the PATH
variable will reset to its default value and no longer contain the directory you added. For the PATH
to persist across different shell sessions, it has to be stored in a file.
Permanently Setting the PATH Variable
In this step, you will add a directory permanently in the shell configuration file, which is ~/.bashrc
if you’re using a bash shell or ~/.zshrc
if you’re using a zsh shell. This tutorial will use ~/.bashrc
as an example.
First, open the ~/.bashrc
file:
- nano ~/.bashrc
The ~/.bashrc
file will have existing data, which you will not modify. At the bottom of the file, add the export
command with your new directory:
- ...
- # Adding paths to your PATH
- export PATH=$PATH:/the/file/path
Use the methods described in the prior section to decide whether you want the new directory to be checked first or last in the PATH
.
Save and close the file. The changes to the PATH
variable will be made once a new shell session is started. To apply the changes to the current session, use the source
command:
- source ~/.bashrc
You can add new directories in the future by opening this file and appending directories separated by a colon :
to the existing export
command.
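For example, if you later appended a second, hypothetical directory, the line at the bottom of ~/.bashrc would look like this:
- export PATH=$PATH:/the/file/path:/the/file/path2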
The PATH
environment variable is a crucial aspect of command-line use. It enables you to run command-line programs, such as echo
and python3
, from any directory without typing the full path. In cases where adding the directory to PATH
isn’t part of the installation process, this tutorial provides the required steps. For more on environmental variables, see How To Read and Set Environmental and Shell Variables on Linux.
SSH is the de facto method of connecting to a cloud server. It is durable, and it is extensible — as new encryption standards are developed, they can be used to generate new SSH keys, ensuring that the core protocol remains secure. However, no protocol or software stack is totally foolproof, and SSH being so widely deployed across the internet means that it represents a very predictable attack surface or attack vector through which people can try to gain access.
Any service that is exposed to the network is a potential target in this way. If you review the logs for your SSH service running on any widely trafficked server, you will often see repeated, systematic login attempts that represent brute force attacks by users and bots alike. Although you can make some optimizations to your SSH service to reduce the chance of these attacks succeeding to near-zero, such as disabling password authentication in favor of SSH keys, they can still pose a minor, ongoing liability.
Large-scale production deployments for which this liability is completely unacceptable will usually implement a VPN such as WireGuard in front of their SSH service, so that it is impossible to connect directly to the default SSH port 22 from the outside internet without additional software abstraction or gateways. These VPN solutions are widely trusted, but will add complexity, and can break some automations or other small software hooks.
Prior to or in addition to committing to a full VPN setup, you can implement a tool called Fail2ban. Fail2ban can significantly mitigate brute force attacks by creating rules that automatically alter your firewall configuration to ban specific IPs after a certain number of unsuccessful login attempts. This will allow your server to harden itself against these access attempts without intervention from you.
In this guide, you’ll see how to install and use Fail2ban on a Rocky Linux 8 server.
To complete this guide, you will need:
A Rocky Linux 8 server and a non-root user with sudo privileges. You can learn more about how to set up a user with these privileges in our Initial Server Setup with Rocky Linux 8 guide. You should also have firewalld
running on the server, which is covered in our initial server setup guide.
Optionally, a second server that you can connect to your first server from, which you will use to test getting deliberately banned.
Fail2ban is not available in Rocky’s default software repositories. However, it is available in the EPEL, or Extra Packages for Enterprise Linux repository, which is commonly used for third-party packages on Red Hat and Rocky Linux. If you have not already added EPEL to your system package sources, you can add the repository using dnf
, like you would install any other package:
- sudo dnf install epel-release -y
The dnf
package manager will now check EPEL in addition to your default package sources when installing new software. Proceed to install Fail2ban:
- sudo dnf install fail2ban -y
Fail2ban will automatically set up a background service after being installed. However, it is disabled by default, because some of its default settings may cause undesired effects. You can verify this by using the systemctl
command:
- systemctl status fail2ban.service
Output○ fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: man:fail2ban(1)
You could enable Fail2ban right away, but first, you’ll review some of its features.
The fail2ban service keeps its configuration files in the /etc/fail2ban
directory. There is a file with defaults called jail.conf
. Go to that directory and print the first 20 lines of that file using head -20
:
- cd /etc/fail2ban
- head -20 jail.conf
Output#
# WARNING: heavily refactored in 0.9.0 release. Please review and
# customize settings for your setup.
#
# Changes: in most of the cases you should not modify this
# file, but provide customizations in jail.local file,
# or separate .conf files under jail.d/ directory, e.g.:
#
# HOW TO ACTIVATE JAILS:
#
# YOU SHOULD NOT MODIFY THIS FILE.
#
# It will probably be overwritten or improved in a distribution update.
#
# Provide customizations in a jail.local file or a jail.d/customisation.local.
# For example to change the default bantime for all jails and to enable the
# ssh-iptables jail the following (uncommented) would appear in the .local file.
# See man 5 jail.conf for details.
#
# [DEFAULT]
As you’ll see, the first several lines of this file are commented out – they begin with #
characters indicating that they are to be read as documentation rather than as settings. As you’ll also see, these comments are directing you not to modify this file directly. Instead, you have two options: either create individual profiles for Fail2ban in multiple files within the jail.d/
directory, or create and collect all of your local settings in a jail.local
file. The jail.conf
file will be periodically updated as Fail2ban itself is updated, and will be used as a source of default settings for which you have not created any overrides.
In this tutorial, you’ll create jail.local
. You can do that by copying jail.conf
:
- sudo cp jail.conf jail.local
Now you can begin making configuration changes. Open the file in vi
or your favorite text editor:
- sudo vi jail.local
While you are scrolling through the file, this tutorial will review some options that you may want to update. The settings located under the [DEFAULT]
section near the top of the file will be applied to all of the services supported by Fail2ban. Elsewhere in the file, there are headers for [sshd]
and for other services, which contain service-specific settings that will apply over top of the defaults.
[DEFAULT]
. . .
bantime = 10m
. . .
The bantime
parameter sets the length of time that a client will be banned when they have failed to authenticate correctly. This is measured in seconds. By default, this is set to 10 minutes.
[DEFAULT]
. . .
findtime = 10m
maxretry = 5
. . .
The next two parameters are findtime
and maxretry
. These work together to establish the conditions under which a client is found to be an illegitimate user that should be banned.
The maxretry
variable sets the number of tries a client has to authenticate within a window of time defined by findtime
, before being banned. With the default settings, the fail2ban service will ban a client that unsuccessfully attempts to log in 5 times within a 10 minute window.
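For example, to make banning stricter than the defaults, you could set values like the following in your jail.local. The specific numbers here are illustrative, not recommendations:
[DEFAULT]
. . .
bantime = 1h
findtime = 10m
maxretry = 3
. . .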
[DEFAULT]
. . .
destemail = root@localhost
sender = root@<fq-hostname>
mta = sendmail
. . .
If you need to receive email alerts when Fail2ban takes action, you should evaluate the destemail
, sender
, and mta
settings. The destemail
parameter sets the email address that should receive ban messages. The sender
sets the value of the “From” field in the email. The mta
parameter configures what mail service will be used to send mail. By default, this is sendmail
, but you may want to use Postfix or another mail solution.
[DEFAULT]
. . .
action = %(action_)s
. . .
This parameter configures the action that Fail2ban takes when it wants to institute a ban. The value action_
is defined in the file shortly before this parameter. The default action is to update your firewall configuration to reject traffic from the offending host until the ban time elapses.
There are other action_
scripts provided by default which you can replace %(action_)s
with above:
…
# ban & send an e-mail with whois report to the destemail.
action_mw = %(action_)s
%(mta)s-whois[sender="%(sender)s", dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s"]
# ban & send an e-mail with whois report and relevant log lines
# to the destemail.
action_mwl = %(action_)s
%(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
# See the IMPORTANT note in action.d/xarf-login-attack for when to use this action
#
# ban & send a xarf e-mail to abuse contact of IP address and include relevant log lines
# to the destemail.
action_xarf = %(action_)s
xarf-login-attack[service=%(__name__)s, sender="%(sender)s", logpath="%(logpath)s", port="%(port)s"]
# ban IP on CloudFlare & send an e-mail with whois report and relevant log lines
# to the destemail.
action_cf_mwl = cloudflare[cfuser="%(cfemail)s", cftoken="%(cfapikey)s"]
%(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
…
For example, action_mw
takes action and sends an email, action_mwl
takes action, sends an email, and includes logging, and action_cf_mwl
does all of the above in addition to sending an update to the Cloudflare API associated with your account to ban the offender there, too.
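For example, to have Fail2ban ban the offending host and send you an email including the relevant log lines, you could set this in the [DEFAULT] section, assuming you have configured the mail settings described above:
[DEFAULT]
. . .
action = %(action_mwl)s
. . .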
Next is the portion of the configuration file that deals with individual services. These are specified by section headers, like [sshd]
.
Each of these sections needs to be enabled individually by adding an enabled = true
line under the header, with their other settings.
[jail_to_enable]
. . .
enabled = true
. . .
For this tutorial, you’ll enable the SSH jail. It should be at the top of the individual jail settings. The default parameters will work otherwise, but you’ll need to add a configuration line that says enabled = true
under the [sshd]
header.
#
# JAILS
#
#
# SSH servers
#
[sshd]
# To use more aggressive sshd modes set filter parameter "mode" in jail.local:
# normal (default), ddos, extra or aggressive (combines all).
# See "tests/files/logs/sshd" or "filter.d/sshd.conf" for usage example and details.
#mode = normal
enabled = true
port = ssh
logpath = %(sshd_log)s
backend = %(sshd_backend)s
Some other settings that are set here are the filter
that will be used to decide whether a line in a log indicates a failed authentication and the logpath
which tells fail2ban where the logs for that particular service are located.
The filter
value is actually a reference to a file located in the /etc/fail2ban/filter.d
directory, with its .conf
extension removed. These files contain regular expressions (a common shorthand for text parsing) that determine whether a line in the log is a failed authentication attempt. We won’t be covering these files in-depth in this guide, because they are fairly complex and the predefined settings match appropriate lines well.
However, you can see what kind of filters are available by looking into that directory:
- ls /etc/fail2ban/filter.d
If you see a file that looks related to a service you are using, you should open it with a text editor. Most of the files are fairly well commented and you should be able to at least tell what type of condition the script was designed to guard against. Most of these filters have appropriate (disabled) sections in the jail.conf
file that we can enable in the jail.local
file if desired.
For instance, imagine that you are serving a website using Nginx and realize that a password-protected portion of your site is getting slammed with login attempts. You can tell fail2ban to use the nginx-http-auth.conf
file to check for this condition within the /var/log/nginx/error.log
file.
This is actually already set up in a section called [nginx-http-auth]
in your /etc/fail2ban/jail.conf
file. You would just need to add the enabled
parameter:
. . .
[nginx-http-auth]
enabled = true
. . .
When you are finished editing, save and close the file. If you are using vi
, use :x
to save and quit. At this point, you can enable your Fail2ban service so that it will run automatically from now on. First, run systemctl enable
:
- sudo systemctl enable fail2ban
Then, start it manually for the first time with systemctl start
:
- sudo systemctl start fail2ban
You can verify that it’s running with systemctl status
:
- sudo systemctl status fail2ban
Output● fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2022-06-27 19:25:15 UTC; 3s ago
Docs: man:fail2ban(1)
Main PID: 39396 (fail2ban-server)
Tasks: 5 (limit: 1119)
Memory: 12.9M
CPU: 278ms
CGroup: /system.slice/fail2ban.service
└─39396 /usr/bin/python3.6 -s /usr/bin/fail2ban-server -xf start
Jun 27 19:25:15 fail2ban22 systemd[1]: Started Fail2Ban Service.
Jun 27 19:25:15 fail2ban22 fail2ban-server[39396]: Server ready
In the next step, you’ll demonstrate Fail2ban in action.
From another server, one that won’t need to log into your Fail2ban server in the future, you can test the rules by getting that second server banned. After logging into your second server, try to SSH into the Fail2ban server. You can try to connect using a nonexistent name:
- ssh blah@your_server
Enter random characters into the password prompt. Repeat this a few times. At some point, the error you’re receiving should change from Permission denied
to Connection refused
. This signals that your second server has been banned from the Fail2ban server.
On your Fail2ban server, you can see the new rule by checking the output of fail2ban-client
. fail2ban-client
is an additional command provided by Fail2ban for checking its running configuration.
- sudo fail2ban-client status
OutputStatus
|- Number of jail: 1
`- Jail list: sshd
If you run fail2ban-client status sshd
, you can see the list of IPs that have been banned from SSH:
- sudo fail2ban-client status sshd
OutputStatus for the jail: sshd
|- Filter
| |- Currently failed: 2
| |- Total failed: 7
| `- Journal matches: _SYSTEMD_UNIT=sshd.service + _COMM=sshd
`- Actions
|- Currently banned: 1
|- Total banned: 1
`- Banned IP list: 134.209.165.184
The Banned IP list
contents should reflect the IP address of your second server.
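If you want to lift the test ban without waiting for the bantime to expire, you can remove it with fail2ban-client, substituting the IP address of your second server:
- sudo fail2ban-client set sshd unbanip 134.209.165.184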
You should now be able to configure some banning policies for your services. Fail2ban is a useful way to protect any kind of service that uses authentication. If you want to learn more about how fail2ban works, you can check out our tutorial on how fail2ban rules and files work.
For information about how to use fail2ban to protect other services, you can read about How To Protect an Nginx Server with Fail2Ban and How To Protect an Apache Server with Fail2Ban.
Preparing a new disk for use on a Linux system is a straightforward process. There are many tools, filesystem formats, and partitioning schemes that may change the process if you have specialized needs, but the fundamentals remain the same.
This guide will cover the following process: identifying a new disk on the system, partitioning it, creating an Ext4 filesystem on the new partition, and mounting it for use.
To partition the drive, you’ll use the parted
utility. Most of the commands necessary for interacting with a low-level filesystem are available by default on Linux. parted
, which creates partitions, is one of the only occasional exceptions.
If you are on an Ubuntu or Debian server and do not have parted
installed, you can install it by typing:
- sudo apt update
- sudo apt install parted
If you are on an RHEL, Rocky Linux, or Fedora server, you can install it by typing:
- sudo dnf install parted
Every other command used in this tutorial should be preinstalled, so you can move on to the next step.
Before you set up the drive, you need to be able to properly identify it on the server.
If this is a completely new drive, one way to identify it on your server is to look for the absence of a partitioning scheme. If you ask parted
to list the partition layout of your disks, it will produce an error for any disks that don’t have a valid partition scheme. This can be used to help identify the new disk:
- sudo parted -l | grep Error
You should see an unrecognized disk label
error for the new, unpartitioned disk:
OutputError: /dev/sda: unrecognized disk label
You can also use the lsblk
command and look for a disk of the correct size that has no associated partitions:
- lsblk
OutputNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
vda 253:0 0 20G 0 disk
└─vda1 253:1 0 20G 0 part /
Note: Remember to check lsblk
every time you reconnect to your server before making changes. The /dev/sd*
and /dev/hd*
disk identifiers will not necessarily be consistent between boots, which means there is some danger of partitioning or formatting the wrong disk if you do not verify the disk identifier correctly.
Consider using more persistent disk identifiers like /dev/disk/by-uuid
, /dev/disk/by-label
, or /dev/disk/by-id
. See our introduction to storage concepts and terminology in Linux article for more information.
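For example, you can list the persistent identifiers currently mapped to your disks, along with the kernel names they point to, with:
- ls -l /dev/disk/by-id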
When you know the name that the kernel has assigned your disk, you can partition your drive.
As mentioned in the introduction, you’ll create a single partition spanning the entire disk in this guide.
To do this, you first need to specify the partitioning standard to use. There are two options: GPT and MBR. GPT is a more modern standard, while MBR is more widely supported among older operating systems. For a typical cloud server, GPT is a better option.
To choose the GPT standard, pass the disk you identified to parted
with mklabel gpt
:
- sudo parted /dev/sda mklabel gpt
To use the MBR format, use mklabel msdos
:
- sudo parted /dev/sda mklabel msdos
Once the format is selected, you can create a partition spanning the entire drive by using parted -a
:
- sudo parted -a opt /dev/sda mkpart primary ext4 0% 100%
You can break down this command as follows:
parted -a opt runs parted, setting the default optimal alignment type.
/dev/sda is the disk that you’re partitioning.
mkpart primary ext4 makes a standalone (i.e. bootable, not extended from another) partition, using the ext4 filesystem.
0% 100% means that this partition should span from the start to the finish of the disk.
For more information, refer to the manual page of parted.
If you check lsblk
, you should see the new partition available:
- lsblk
OutputNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
└─sda1 8:1 0 100G 0 part
vda 253:0 0 20G 0 disk
└─vda1 253:1 0 20G 0 part /
You now have a new partition created, but it has not yet been initialized as a filesystem. The difference between these two steps is somewhat arbitrary, and unique to the way Linux filesystems work, but they are still two steps in practice.
Now that you have a partition available, you can initialize it as an Ext4 filesystem. Ext4 is not the only filesystem option available, but it is the most straightforward option for a single, standalone Linux volume. Windows uses filesystems like NTFS and exFAT, but they have limited support on other platforms (meaning that they will be read-only in some contexts, and cannot be used as a boot drive for other operating systems), and macOS uses HFS+ and APFS, with the same caveats. There are also newer Linux filesystems than Ext4, such as ZFS and BTRFS, but these impose different requirements and they are generally better-suited to multi-disk arrays.
To initialize an Ext4 filesystem, use the mkfs.ext4
utility. You can add a partition label with the -L
flag. Select a name that will help you identify this particular drive:
Note: Make sure you provide the path to the partition and not the entire disk. In Linux, disks have names like sda
, sdb
, hda
, etc. The partitions on these disks have a number appended to the end. So you would want to use something like sda1
, not sda
.
- sudo mkfs.ext4 -L datapartition /dev/sda1
If you want to change the partition label later on, you can use the e2label
command:
- sudo e2label /dev/sda1 newlabel
You can see all of the different ways to identify your partition with lsblk
. You should find the name, label, and UUID of the partition.
Some versions of lsblk
will print all of this information with the --fs
argument:
- sudo lsblk --fs
You can also specify them manually with lsblk -o
followed by the relevant options:
- sudo lsblk -o NAME,FSTYPE,LABEL,UUID,MOUNTPOINT
You should receive output like this. The output indicates the different methods you can use to refer to the new filesystem:
OutputNAME FSTYPE LABEL UUID MOUNTPOINT
sda
└─sda1 ext4 datapartition 4b313333-a7b5-48c1-a957-d77d637e4fda
vda
└─vda1 ext4 DOROOT 050e1e34-39e6-4072-a03e-ae0bf90ba13a /
Make a note of this output, as you’ll use it when mounting the filesystem in the next step.
Now, you can mount the filesystem for use.
The Filesystem Hierarchy Standard recommends using the /mnt
directory or a subdirectory under it for temporarily mounted filesystems (like removable drives). It makes no recommendations on where to mount more permanent storage, so you can choose whichever scheme you’d like. For this tutorial, you’ll mount the drive under /mnt/data
.
Create that directory using mkdir
:
- sudo mkdir -p /mnt/data
You can mount the filesystem temporarily by typing:
- sudo mount -o defaults /dev/sda1 /mnt/data
In order to mount the filesystem automatically each time the server boots, you’ll add an entry to the /etc/fstab
file. This file contains information about all of your system’s permanent, or routinely mounted, disks. Open the file using nano
or your favorite text editor:
- sudo nano /etc/fstab
In the last step, you used the sudo lsblk --fs
command to display identifiers for your filesystem. You can use any of these in this file. This example uses the partition label, but you can see what the lines would look like using the other two identifiers in the commented out lines:
. . .
## Use one of the identifiers you found to reference the correct partition
# /dev/sda1 /mnt/data ext4 defaults 0 2
# UUID=4b313333-a7b5-48c1-a957-d77d637e4fda /mnt/data ext4 defaults 0 2
LABEL=datapartition /mnt/data ext4 defaults 0 2
Beyond the LABEL=datapartition
element, these options work as follows:
/mnt/data is the path where the disk is being mounted.
ext4 connotes that this is an Ext4 partition.
defaults means that this volume should be mounted with the default options, such as read-write support.
0 2 signifies that the filesystem should be validated by the local machine in case of errors, but as a 2nd priority, after your root volume.
Note: You can learn about the various fields in the /etc/fstab file by checking its man page. For information about the mount options available for a specific filesystem type, check man [filesystem]
(like man ext4
).
Save and close the file when you are finished. If you are using nano
, press Ctrl+X
, then when prompted to confirm, Y
and then Enter
.
If you did not mount the filesystem previously, you can now mount it with mount -a
:
- sudo mount -a
After you’ve mounted the volume, you should check to make sure that the filesystem is accessible.
You can check if the disk is available in the output from the df
command. Sometimes df includes unnecessary information about temporary filesystems called tmpfs in its output, which you can exclude by appending -x tmpfs
:
- df -h -x tmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 20G 1.3G 18G 7% /
/dev/sda1 99G 60M 94G 1% /mnt/data
You can also check that the disk mounted with read and write capabilities by writing to a test file:
- echo "success" | sudo tee /mnt/data/test_file
Read the file back just to make sure the write executed correctly:
- cat /mnt/data/test_file
Outputsuccess
You can remove the file after you have verified that the new filesystem is functioning correctly:
- sudo rm /mnt/data/test_file
Your new drive should now be partitioned, formatted, mounted, and ready for use. This is the general process you can use to turn a raw disk into a filesystem that Linux can use for storage. There are more complex methods of partitioning, formatting, and mounting which may be more appropriate in some cases, but the above is a good starting point for general use.
Next, you may want to learn how to use SSHFS to mount remote volumes over SSH.
SSH is the de facto method of connecting to a cloud server. It is durable, and it is extensible — as new encryption standards are developed, they can be used to generate new SSH keys, ensuring that the core protocol remains secure. However, no protocol or software stack is totally foolproof, and SSH being so widely deployed across the internet means that it represents a very predictable attack surface or attack vector through which people can try to gain access.
Any service that is exposed to the network is a potential target in this way. If you review the logs for your SSH service running on any widely trafficked server, you will often see repeated, systematic login attempts that represent brute force attacks by users and bots alike. Although you can make some optimizations to your SSH service to reduce the chance of these attacks succeeding to near-zero, such as disabling password authentication in favor of SSH keys, they can still pose a minor, ongoing liability.
Large-scale production deployments for which this liability is completely unacceptable will usually implement a VPN such as WireGuard in front of their SSH service, so that it is impossible to connect directly to the default SSH port 22 from the outside internet without additional software abstraction or gateways. These VPN solutions are widely trusted, but will add complexity, and can break some automations or other small software hooks.
Prior to or in addition to committing to a full VPN setup, you can implement a tool called Fail2ban. Fail2ban can significantly mitigate brute force attacks by creating rules that automatically alter your firewall configuration to ban specific IPs after a certain number of unsuccessful login attempts. This will allow your server to harden itself against these access attempts without intervention from you.
In this guide, you’ll see how to install and use Fail2ban on a Debian 11 server.
To complete this guide, you will need:
A Debian 11 server and a non-root user with sudo privileges. You can learn more about how to set up a user with these privileges in our Initial Server Setup with Debian 11 guide.
Optionally, a second server that you can connect to your first server from, which you will use to test getting deliberately banned.
Fail2ban is available in Debian’s software repositories. Begin by running the following commands as a non-root user to update your package listings and install Fail2ban:
- sudo apt update
- sudo apt install fail2ban
Fail2ban will automatically set up a background service after being installed. You can check its status by using the systemctl
command:
- systemctl status fail2ban.service
Output● fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2022-06-28 16:23:14 UTC; 17s ago
Docs: man:fail2ban(1)
Process: 1942 ExecStartPre=/bin/mkdir -p /run/fail2ban (code=exited, status=0/SUCCESS)
Main PID: 1943 (fail2ban-server)
Tasks: 5 (limit: 1132)
Memory: 15.8M
CPU: 280ms
CGroup: /system.slice/fail2ban.service
└─1943 /usr/bin/python3 /usr/bin/fail2ban-server -xf start
You can continue using Fail2ban with its default settings, but first, you’ll review some of its features.
The fail2ban service keeps its configuration files in the /etc/fail2ban
directory. There is a file with defaults called jail.conf
. Go to that directory and print the first 20 lines of that file using head -20
:
- cd /etc/fail2ban
- head -20 jail.conf
Output#
# WARNING: heavily refactored in 0.9.0 release. Please review and
# customize settings for your setup.
#
# Changes: in most of the cases you should not modify this
# file, but provide customizations in jail.local file,
# or separate .conf files under jail.d/ directory, e.g.:
#
# HOW TO ACTIVATE JAILS:
#
# YOU SHOULD NOT MODIFY THIS FILE.
#
# It will probably be overwritten or improved in a distribution update.
#
# Provide customizations in a jail.local file or a jail.d/customisation.local.
# For example to change the default bantime for all jails and to enable the
# ssh-iptables jail the following (uncommented) would appear in the .local file.
# See man 5 jail.conf for details.
#
# [DEFAULT]
As you’ll see, the first several lines of this file are commented out – they begin with #
characters indicating that they are to be read as documentation rather than as settings. As you’ll also see, these comments are directing you not to modify this file directly. Instead, you have two options: either create individual profiles for Fail2ban in multiple files within the jail.d/
directory, or create and collect all of your local settings in a jail.local
file. The jail.conf
file will be periodically updated as Fail2ban itself is updated, and will be used as a source of default settings for which you have not created any overrides.
In this tutorial, you’ll create jail.local
. You can do that by copying jail.conf
:
- sudo cp jail.conf jail.local
Now you can begin making configuration changes. Open the file in nano
or your favorite text editor:
- sudo nano jail.local
While you are scrolling through the file, this tutorial will review some options that you may want to update. The settings located under the [DEFAULT]
section near the top of the file will be applied to all of the services supported by Fail2ban. Elsewhere in the file, there are headers for [sshd]
and for other services, which contain service-specific settings that will apply over top of the defaults.
[DEFAULT]
. . .
bantime = 10m
. . .
The bantime
parameter sets the length of time that a client will be banned when they have failed to authenticate correctly. This is measured in seconds. By default, this is set to 10 minutes.
[DEFAULT]
. . .
findtime = 10m
maxretry = 5
. . .
The next two parameters are findtime
and maxretry
. These work together to establish the conditions under which a client is found to be an illegitimate user that should be banned.
The maxretry
variable sets the number of tries a client has to authenticate within a window of time defined by findtime
, before being banned. With the default settings, the fail2ban service will ban a client that unsuccessfully attempts to log in 5 times within a 10 minute window.
[DEFAULT]
. . .
destemail = root@localhost
sender = root@<fq-hostname>
mta = sendmail
. . .
If you need to receive email alerts when Fail2ban takes action, you should evaluate the destemail
, sender
, and mta
settings. The destemail
parameter sets the email address that should receive ban messages. The sender
sets the value of the “From” field in the email. The mta
parameter configures what mail service will be used to send mail. By default, this is sendmail
, but you may want to use Postfix or another mail solution.
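As an illustration, a jail.local override that mails ban reports to your own address might look like the following. The addresses here are placeholders, and this assumes a working sendmail-compatible mail setup on the server:
[DEFAULT]
. . .
destemail = sammy@your_domain
sender = fail2ban@your_domain
mta = sendmail
. . .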
[DEFAULT]
. . .
action = %(action_)s
. . .
This parameter configures the action that Fail2ban takes when it wants to institute a ban. The value action_
is defined in the file shortly before this parameter. The default action is to update your firewall configuration to reject traffic from the offending host until the ban time elapses.
There are other action_
scripts provided by default which you can replace %(action_)s
with above:
…
# ban & send an e-mail with whois report to the destemail.
action_mw = %(action_)s
%(mta)s-whois[sender="%(sender)s", dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s"]
# ban & send an e-mail with whois report and relevant log lines
# to the destemail.
action_mwl = %(action_)s
%(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
# See the IMPORTANT note in action.d/xarf-login-attack for when to use this action
#
# ban & send a xarf e-mail to abuse contact of IP address and include relevant log lines
# to the destemail.
action_xarf = %(action_)s
xarf-login-attack[service=%(__name__)s, sender="%(sender)s", logpath="%(logpath)s", port="%(port)s"]
# ban IP on CloudFlare & send an e-mail with whois report and relevant log lines
# to the destemail.
action_cf_mwl = cloudflare[cfuser="%(cfemail)s", cftoken="%(cfapikey)s"]
%(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
…
For example, action_mw
takes action and sends an email, action_mwl
takes action, sends an email, and includes logging, and action_cf_mwl
does all of the above in addition to sending an update to the Cloudflare API associated with your account to ban the offender there, too.
Next is the portion of the configuration file that deals with individual services. These are specified by section headers, like [sshd]
.
Each of these sections needs to be enabled individually by adding an enabled = true
line under the header, with their other settings.
[jail_to_enable]
. . .
enabled = true
. . .
By default, the SSH service is enabled and all others are disabled.
Some other settings that are set here are the filter
that will be used to decide whether a line in a log indicates a failed authentication and the logpath
which tells fail2ban where the logs for that particular service are located.
The filter
value is actually a reference to a file located in the /etc/fail2ban/filter.d
directory, with its .conf
extension removed. These files contain regular expressions (a common shorthand for text parsing) that determine whether a line in the log is a failed authentication attempt. We won’t be covering these files in-depth in this guide, because they are fairly complex and the predefined settings match appropriate lines well.
However, you can see what kind of filters are available by looking into that directory:
- ls /etc/fail2ban/filter.d
If you see a file that looks related to a service you are using, you should open it with a text editor. Most of the files are fairly well commented and you should be able to at least tell what type of condition the script was designed to guard against. Most of these filters have appropriate (disabled) sections in the jail.conf
file that we can enable in the jail.local
file if desired.
For instance, imagine that you are serving a website using Nginx and realize that a password-protected portion of your site is getting slammed with login attempts. You can tell fail2ban to use the nginx-http-auth.conf
file to check for this condition within the /var/log/nginx/error.log
file.
This is actually already set up in a section called [nginx-http-auth]
in your /etc/fail2ban/jail.conf
file. You would just need to add the enabled
parameter:
. . .
[nginx-http-auth]
enabled = true
. . .
When you are finished editing, save and close the file. If you’ve made any changes, you can restart the Fail2ban service using systemctl
:
- sudo systemctl restart fail2ban
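You can then confirm that the service is running and that the sshd jail was loaded by using fail2ban-client, a helper command that Fail2ban provides for inspecting its running configuration. You should receive output similar to this:
- sudo fail2ban-client status
OutputStatus
|- Number of jail: 1
`- Jail list: sshd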
In the next step, you’ll demonstrate Fail2ban in action.
From another server, one that won’t need to log into your Fail2ban server in the future, you can test the rules by getting that second server banned. After logging into your second server, try to SSH into the Fail2ban server. You can try to connect using a nonexistent name:
- ssh blah@your_server
Enter random characters into the password prompt. Repeat this a few times. At some point, the error you’re receiving should change from Permission denied
to Connection refused
. This signals that your second server has been banned from the Fail2ban server.
On your Fail2ban server, you can see the new rule by checking your iptables
output. iptables
is a command for interacting with low-level port and firewall rules on your server. If you followed DigitalOcean’s guide to initial server setup, you will be using ufw
to manage firewall rules at a higher level. Running iptables -S
will show you all of the firewall rules that ufw
already created:
- sudo iptables -S
Output-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-N f2b-sshd
-N ufw-after-forward
-N ufw-after-input
-N ufw-after-logging-forward
-N ufw-after-logging-input
-N ufw-after-logging-output
-N ufw-after-output
-N ufw-before-forward
-N ufw-before-input
-N ufw-before-logging-forward
-N ufw-before-logging-input
-N ufw-before-logging-output
…
If you pipe the output of iptables -S
to grep
to search within those rules for the string f2b
, you can see the rules that have been added by fail2ban:
- sudo iptables -S | grep f2b
Output-N f2b-sshd
-A INPUT -p tcp -m multiport --dports 22 -j f2b-sshd
-A f2b-sshd -s 134.209.165.184/32 -j REJECT --reject-with icmp-port-unreachable
-A f2b-sshd -j RETURN
The line containing REJECT --reject-with icmp-port-unreachable
will have been added by Fail2ban and should reflect the IP address of your second server.
You should now be able to configure some banning policies for your services. Fail2ban is a useful way to protect any kind of service that uses authentication. If you want to learn more about how fail2ban works, you can check out our tutorial on how fail2ban rules and files work.
For information about how to use fail2ban to protect other services, you can read about How To Protect an Nginx Server with Fail2Ban and How To Protect an Apache Server with Fail2Ban.
SSH is the de facto method of connecting to a cloud server. It is durable, and it is extensible — as new encryption standards are developed, they can be used to generate new SSH keys, ensuring that the core protocol remains secure. However, no protocol or software stack is totally foolproof, and SSH being so widely deployed across the internet means that it represents a very predictable attack surface or attack vector through which people can try to gain access.
Any service that is exposed to the network is a potential target in this way. If you review the logs for your SSH service running on any widely trafficked server, you will often see repeated, systematic login attempts that represent brute force attacks by users and bots alike. Although you can make some optimizations to your SSH service to reduce the chance of these attacks succeeding to near-zero, such as disabling password authentication in favor of SSH keys, they can still pose a minor, ongoing liability.
Large-scale production deployments for which this liability is completely unacceptable will usually implement a VPN such as WireGuard in front of their SSH service, so that it is impossible to connect directly to the default SSH port 22 from the outside internet without additional software abstraction or gateways. These VPN solutions are widely trusted, but will add complexity, and can break some automations or other small software hooks.
Prior to or in addition to committing to a full VPN setup, you can implement a tool called Fail2ban. Fail2ban can significantly mitigate brute force attacks by creating rules that automatically alter your firewall configuration to ban specific IPs after a certain number of unsuccessful login attempts. This will allow your server to harden itself against these access attempts without intervention from you.
In this guide, you’ll see how to install and use Fail2ban on an Ubuntu 20.04 server.
To complete this guide, you will need:
An Ubuntu 20.04 server and a non-root user with sudo privileges. You can learn more about how to set up a user with these privileges in our Initial Server Setup with Ubuntu 20.04 guide.
Optionally, a second server that you can connect to your first server from, which you will use to test getting deliberately banned.
Fail2ban is available in Ubuntu’s software repositories. Begin by running the following commands as a non-root user to update your package listings and install Fail2ban:
- sudo apt update
- sudo apt install fail2ban
Fail2ban will automatically set up a background service after being installed. However, it is disabled by default, because some of its default settings may cause undesired effects. You can verify this by using the systemctl
command:
- systemctl status fail2ban.service
Output○ fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; disabled; vendor preset: enabled
Active: inactive (dead)
Docs: man:fail2ban(1)
You could enable Fail2ban right away, but first, you’ll review some of its features.
The fail2ban service keeps its configuration files in the /etc/fail2ban
directory. There is a file with defaults called jail.conf
. Go to that directory and print the first 20 lines of that file using head -20
:
- cd /etc/fail2ban
- head -20 jail.conf
Output#
# WARNING: heavily refactored in 0.9.0 release. Please review and
# customize settings for your setup.
#
# Changes: in most of the cases you should not modify this
# file, but provide customizations in jail.local file,
# or separate .conf files under jail.d/ directory, e.g.:
#
# HOW TO ACTIVATE JAILS:
#
# YOU SHOULD NOT MODIFY THIS FILE.
#
# It will probably be overwritten or improved in a distribution update.
#
# Provide customizations in a jail.local file or a jail.d/customisation.local.
# For example to change the default bantime for all jails and to enable the
# ssh-iptables jail the following (uncommented) would appear in the .local file.
# See man 5 jail.conf for details.
#
# [DEFAULT]
As you’ll see, the first several lines of this file are commented out – they begin with #
characters indicating that they are to be read as documentation rather than as settings. As you’ll also see, these comments are directing you not to modify this file directly. Instead, you have two options: either create individual profiles for Fail2ban in multiple files within the jail.d/
directory, or create and collect all of your local settings in a jail.local
file. The jail.conf
file will be periodically updated as Fail2ban itself is updated, and will be used as a source of default settings for which you have not created any overrides.
In this tutorial, you’ll create jail.local
. You can do that by copying jail.conf
:
- sudo cp jail.conf jail.local
Now you can begin making configuration changes. Open the file in nano
or your favorite text editor:
- sudo nano jail.local
While you are scrolling through the file, this tutorial will review some options that you may want to update. The settings located under the [DEFAULT]
section near the top of the file will be applied to all of the services supported by Fail2ban. Elsewhere in the file, there are headers for [sshd]
and for other services, which contain service-specific settings that will apply over top of the defaults.
[DEFAULT]
. . .
bantime = 10m
. . .
The bantime
parameter sets the length of time that a client will be banned when they have failed to authenticate correctly. You can specify it as a bare number of seconds or with a time suffix such as the m for minutes shown here. By default, this is set to 10 minutes.
[DEFAULT]
. . .
findtime = 10m
maxretry = 5
. . .
The next two parameters are findtime
and maxretry
. These work together to establish the conditions under which a client is found to be an illegitimate user that should be banned.
The maxretry
variable sets the number of tries a client has to authenticate within a window of time defined by findtime
, before being banned. With the default settings, the fail2ban service will ban a client that unsuccessfully attempts to log in 5 times within a 10 minute window.
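For example, if you wanted a stricter policy, you could override all three values in your jail.local. This is a minimal sketch; the values are illustrative, not recommendations:
[DEFAULT]
. . .
bantime = 1h
findtime = 10m
maxretry = 3
. . .
With these settings, a client that fails to authenticate 3 times within 10 minutes would be banned for a full hour.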
[DEFAULT]
. . .
destemail = root@localhost
sender = root@<fq-hostname>
mta = sendmail
. . .
If you need to receive email alerts when Fail2ban takes action, you should evaluate the destemail
, sender
, and mta
settings. The destemail
parameter sets the email address that should receive ban messages. The sender
parameter sets the value of the “From” field in the email. The mta
parameter configures what mail service will be used to send mail. By default, this is sendmail
, but you may want to use Postfix or another mail solution.
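For example, a jail.local override for mail alerts might look like the following sketch. The addresses are placeholders that you would replace with your own:
[DEFAULT]
. . .
destemail = sammy@your_domain
sender = fail2ban@your_domain
mta = sendmail
. . .
Setting these values alone does not send any mail; the action you configure next must also include a mail component.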
[DEFAULT]
. . .
action = %(action_)s
. . .
This parameter configures the action that Fail2ban takes when it wants to institute a ban. The value action_
is defined in the file shortly before this parameter. The default action is to update your firewall configuration to reject traffic from the offending host until the ban time elapses.
There are other action_ scripts provided by default which you can substitute for %(action_)s above:
…
# ban & send an e-mail with whois report to the destemail.
action_mw = %(action_)s
%(mta)s-whois[sender="%(sender)s", dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s"]
# ban & send an e-mail with whois report and relevant log lines
# to the destemail.
action_mwl = %(action_)s
%(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
# See the IMPORTANT note in action.d/xarf-login-attack for when to use this action
#
# ban & send a xarf e-mail to abuse contact of IP address and include relevant log lines
# to the destemail.
action_xarf = %(action_)s
xarf-login-attack[service=%(__name__)s, sender="%(sender)s", logpath="%(logpath)s", port="%(port)s"]
# ban IP on CloudFlare & send an e-mail with whois report and relevant log lines
# to the destemail.
action_cf_mwl = cloudflare[cfuser="%(cfemail)s", cftoken="%(cfapikey)s"]
%(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
…
For example, action_mw
takes action and sends an email, action_mwl
takes action, sends an email, and includes logging, and action_cf_mwl
does all of the above in addition to sending an update to the Cloudflare API associated with your account to ban the offender there, too.
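For example, to ban offenders and also receive an email containing a whois report and the relevant log lines, you could set the following in the [DEFAULT] section of your jail.local. This sketch assumes the mail settings described above are already configured:
[DEFAULT]
. . .
action = %(action_mwl)s
. . .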
Next is the portion of the configuration file that deals with individual services. These are specified by section headers, like [sshd]
.
Each of these sections needs to be enabled individually by adding an enabled = true
line under the header, with their other settings.
[jail_to_enable]
. . .
enabled = true
. . .
By default, the SSH service is enabled and all others are disabled.
Some other settings that are set here are the filter
that will be used to decide whether a line in a log indicates a failed authentication and the logpath
which tells fail2ban where the logs for that particular service are located.
The filter
value is actually a reference to a file located in the /etc/fail2ban/filter.d
directory, with its .conf
extension removed. These files contain regular expressions (a compact pattern-matching syntax for text) that determine whether a line in the log is a failed authentication attempt. We won’t be covering these files in depth in this guide, because they are fairly complex and the predefined settings match appropriate lines well.
However, you can see what kind of filters are available by looking into that directory:
- ls /etc/fail2ban/filter.d
If you see a file that looks related to a service you are using, you should open it with a text editor. Most of the files are fairly well commented and you should be able to at least tell what type of condition the script was designed to guard against. Most of these filters have appropriate (disabled) sections in the jail.conf
file that we can enable in the jail.local
file if desired.
For instance, imagine that you are serving a website using Nginx and realize that a password-protected portion of your site is getting slammed with login attempts. You can tell fail2ban to use the nginx-http-auth.conf
file to check for this condition within the /var/log/nginx/error.log
file.
This is actually already set up in a section called [nginx-http-auth]
in your /etc/fail2ban/jail.conf
file. You would just need to add the enabled
parameter:
. . .
[nginx-http-auth]
enabled = true
. . .
When you are finished editing, save and close the file. At this point, you can enable your Fail2ban service so that it will run automatically from now on. First, run systemctl enable
:
- sudo systemctl enable fail2ban
Then, start it manually for the first time with systemctl start
:
- sudo systemctl start fail2ban
You can verify that it’s running with systemctl status
:
- sudo systemctl status fail2ban
Output● fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enab>
Active: active (running) since Tue 2022-06-28 19:29:15 UTC; 3s ago
Docs: man:fail2ban(1)
Main PID: 39396 (fail2ban-server)
Tasks: 5 (limit: 1119)
Memory: 12.9M
CPU: 278ms
CGroup: /system.slice/fail2ban.service
└─39396 /usr/bin/python3 /usr/bin/fail2ban-server -xf start
Jun 28 19:29:15 fail2ban20 systemd[1]: Started Fail2Ban Service.
Jun 28 19:29:15 fail2ban20 fail2ban-server[39396]: Server ready
In the next step, you’ll demonstrate Fail2ban in action.
From another server, one that won’t need to log into your Fail2ban server in the future, you can test the rules by getting that second server banned. After logging into your second server, try to SSH into the Fail2ban server. You can try to connect using a nonexistent name:
- ssh blah@your_server
Enter random characters into the password prompt. Repeat this a few times. At some point, the error you’re receiving should change from Permission denied
to Connection refused
. This signals that your second server has been banned from the Fail2ban server.
On your Fail2ban server, you can see the new rule by checking your iptables
output. iptables
is a command for interacting with low-level port and firewall rules on your server. If you followed DigitalOcean’s guide to initial server setup, you will be using ufw
to manage firewall rules at a higher level. Running iptables -S
will show you all of the firewall rules that ufw
already created:
- sudo iptables -S
Output-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-N f2b-sshd
-N ufw-after-forward
-N ufw-after-input
-N ufw-after-logging-forward
-N ufw-after-logging-input
-N ufw-after-logging-output
-N ufw-after-output
-N ufw-before-forward
-N ufw-before-input
-N ufw-before-logging-forward
-N ufw-before-logging-input
-N ufw-before-logging-output
…
If you pipe the output of iptables -S
to grep
to search within those rules for the string f2b
, you can see the rules that have been added by fail2ban:
- sudo iptables -S | grep f2b
Output-N f2b-sshd
-A INPUT -p tcp -m multiport --dports 22 -j f2b-sshd
-A f2b-sshd -s 134.209.165.184/32 -j REJECT --reject-with icmp-port-unreachable
-A f2b-sshd -j RETURN
The line containing REJECT --reject-with icmp-port-unreachable
will have been added by Fail2ban and should reflect the IP address of your second server.
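If you banned a machine that you will need to connect from again, you do not have to wait for the bantime to elapse. You can lift the ban manually with the fail2ban-client command; substitute your own test server’s IP address for the illustrative one shown here:
- sudo fail2ban-client set sshd unbanip 134.209.165.184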
You should now be able to configure some banning policies for your services. Fail2ban is a useful way to protect any kind of service that uses authentication. If you want to learn more about how fail2ban works, you can check out our tutorial on how fail2ban rules and files work.
For information about how to use fail2ban to protect other services, you can read about How To Protect an Nginx Server with Fail2Ban on Ubuntu 14.04 and How To Protect an Apache Server with Fail2Ban on Ubuntu 14.04.
SSH is the de facto method of connecting to a cloud server. It is durable, and it is extensible — as new encryption standards are developed, they can be used to generate new SSH keys, ensuring that the core protocol remains secure. However, no protocol or software stack is totally foolproof, and SSH being so widely deployed across the internet means that it represents a very predictable attack surface or attack vector through which people can try to gain access.
Any service that is exposed to the network is a potential target in this way. If you review the logs for your SSH service running on any widely trafficked server, you will often see repeated, systematic login attempts that represent brute force attacks by users and bots alike. Although you can make some optimizations to your SSH service to reduce the chance of these attacks succeeding to near-zero, such as disabling password authentication in favor of SSH keys, they can still pose a minor, ongoing liability.
Large-scale production deployments for which this liability is completely unacceptable will usually implement a VPN such as WireGuard in front of their SSH service, so that it is impossible to connect directly to the default SSH port 22 from the outside internet without additional software abstraction or gateways. These VPN solutions are widely trusted, but will add complexity, and can break some automations or other small software hooks.
Prior to or in addition to committing to a full VPN setup, you can implement a tool called Fail2ban. Fail2ban can significantly mitigate brute force attacks by creating rules that automatically alter your firewall configuration to ban specific IPs after a certain number of unsuccessful login attempts. This will allow your server to harden itself against these access attempts without intervention from you.
In this guide, you’ll see how to install and use Fail2ban on an Ubuntu 22.04 server.
To complete this guide, you will need:
An Ubuntu 22.04 server and a non-root user with sudo privileges. You can learn more about how to set up a user with these privileges in our Initial Server Setup with Ubuntu 22.04 guide.
Optionally, a second server that you can connect to your first server from, which you will use to test getting deliberately banned.
Fail2ban is available in Ubuntu’s software repositories. Begin by running the following commands as a non-root user to update your package listings and install Fail2ban:
- sudo apt update
- sudo apt install fail2ban
Fail2ban will automatically set up a background service after being installed. However, it is disabled by default, because some of its default settings may cause undesired effects. You can verify this by using the systemctl
command:
- systemctl status fail2ban.service
Output○ fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; disabled; vendor preset: enabled
Active: inactive (dead)
Docs: man:fail2ban(1)
You could enable Fail2ban right away, but first, you’ll review some of its features.
The fail2ban service keeps its configuration files in the /etc/fail2ban
directory. There is a file with defaults called jail.conf
. Go to that directory and print the first 20 lines of that file using head -20
:
- cd /etc/fail2ban
- head -20 jail.conf
Output#
# WARNING: heavily refactored in 0.9.0 release. Please review and
# customize settings for your setup.
#
# Changes: in most of the cases you should not modify this
# file, but provide customizations in jail.local file,
# or separate .conf files under jail.d/ directory, e.g.:
#
# HOW TO ACTIVATE JAILS:
#
# YOU SHOULD NOT MODIFY THIS FILE.
#
# It will probably be overwritten or improved in a distribution update.
#
# Provide customizations in a jail.local file or a jail.d/customisation.local.
# For example to change the default bantime for all jails and to enable the
# ssh-iptables jail the following (uncommented) would appear in the .local file.
# See man 5 jail.conf for details.
#
# [DEFAULT]
As you’ll see, the first several lines of this file are commented out – they begin with #
characters indicating that they are to be read as documentation rather than as settings. As you’ll also see, these comments are directing you not to modify this file directly. Instead, you have two options: either create individual profiles for Fail2ban in multiple files within the jail.d/
directory, or create and collect all of your local settings in a jail.local
file. The jail.conf
file will be periodically updated as Fail2ban itself is updated, and will be used as a source of default settings for which you have not created any overrides.
In this tutorial, you’ll create jail.local
. You can do that by copying jail.conf
:
- sudo cp jail.conf jail.local
Now you can begin making configuration changes. Open the file in nano
or your favorite text editor:
- sudo nano jail.local
While you are scrolling through the file, this tutorial will review some options that you may want to update. The settings located under the [DEFAULT]
section near the top of the file will be applied to all of the services supported by Fail2ban. Elsewhere in the file, there are headers for [sshd]
and for other services, which contain service-specific settings that will apply over top of the defaults.
[DEFAULT]
. . .
bantime = 10m
. . .
The bantime
parameter sets the length of time that a client will be banned when they have failed to authenticate correctly. You can specify it as a bare number of seconds or with a time suffix such as the m for minutes shown here. By default, this is set to 10 minutes.
[DEFAULT]
. . .
findtime = 10m
maxretry = 5
. . .
The next two parameters are findtime
and maxretry
. These work together to establish the conditions under which a client is found to be an illegitimate user that should be banned.
The maxretry
variable sets the number of tries a client has to authenticate within a window of time defined by findtime
, before being banned. With the default settings, the fail2ban service will ban a client that unsuccessfully attempts to log in 5 times within a 10 minute window.
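Once the service is enabled and started later in this tutorial, you can confirm the values that a running jail is actually using with the fail2ban-client command, rather than rereading the configuration file. Note that it reports times as plain seconds; the output shown here reflects the defaults:
- sudo fail2ban-client get sshd maxretry
Output5
- sudo fail2ban-client get sshd findtime
Output600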
[DEFAULT]
. . .
destemail = root@localhost
sender = root@<fq-hostname>
mta = sendmail
. . .
If you need to receive email alerts when Fail2ban takes action, you should evaluate the destemail
, sender
, and mta
settings. The destemail
parameter sets the email address that should receive ban messages. The sender
parameter sets the value of the “From” field in the email. The mta
parameter configures what mail service will be used to send mail. By default, this is sendmail
, but you may want to use Postfix or another mail solution.
[DEFAULT]
. . .
action = %(action_)s
. . .
This parameter configures the action that Fail2ban takes when it wants to institute a ban. The value action_
is defined in the file shortly before this parameter. The default action is to update your firewall configuration to reject traffic from the offending host until the ban time elapses.
There are other action_ scripts provided by default which you can substitute for %(action_)s above:
…
# ban & send an e-mail with whois report to the destemail.
action_mw = %(action_)s
%(mta)s-whois[sender="%(sender)s", dest="%(destemail)s", protocol="%(protocol)s", chain="%(chain)s"]
# ban & send an e-mail with whois report and relevant log lines
# to the destemail.
action_mwl = %(action_)s
%(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
# See the IMPORTANT note in action.d/xarf-login-attack for when to use this action
#
# ban & send a xarf e-mail to abuse contact of IP address and include relevant log lines
# to the destemail.
action_xarf = %(action_)s
xarf-login-attack[service=%(__name__)s, sender="%(sender)s", logpath="%(logpath)s", port="%(port)s"]
# ban IP on CloudFlare & send an e-mail with whois report and relevant log lines
# to the destemail.
action_cf_mwl = cloudflare[cfuser="%(cfemail)s", cftoken="%(cfapikey)s"]
%(mta)s-whois-lines[sender="%(sender)s", dest="%(destemail)s", logpath="%(logpath)s", chain="%(chain)s"]
…
For example, action_mw
takes action and sends an email, action_mwl
takes action, sends an email, and includes logging, and action_cf_mwl
does all of the above in addition to sending an update to the Cloudflare API associated with your account to ban the offender there, too.
Next is the portion of the configuration file that deals with individual services. These are specified by section headers, like [sshd]
.
Each of these sections needs to be enabled individually by adding an enabled = true
line under the header, with their other settings.
[jail_to_enable]
. . .
enabled = true
. . .
By default, the SSH service is enabled and all others are disabled.
Some other settings that are set here are the filter
that will be used to decide whether a line in a log indicates a failed authentication and the logpath
which tells fail2ban where the logs for that particular service are located.
The filter
value is actually a reference to a file located in the /etc/fail2ban/filter.d
directory, with its .conf
extension removed. These files contain regular expressions (a compact pattern-matching syntax for text) that determine whether a line in the log is a failed authentication attempt. We won’t be covering these files in depth in this guide, because they are fairly complex and the predefined settings match appropriate lines well.
However, you can see what kind of filters are available by looking into that directory:
- ls /etc/fail2ban/filter.d
If you see a file that looks related to a service you are using, you should open it with a text editor. Most of the files are fairly well commented and you should be able to at least tell what type of condition the script was designed to guard against. Most of these filters have appropriate (disabled) sections in the jail.conf
file that we can enable in the jail.local
file if desired.
For instance, imagine that you are serving a website using Nginx and realize that a password-protected portion of your site is getting slammed with login attempts. You can tell fail2ban to use the nginx-http-auth.conf
file to check for this condition within the /var/log/nginx/error.log
file.
This is actually already set up in a section called [nginx-http-auth]
in your /etc/fail2ban/jail.conf
file. You would just need to add the enabled
parameter:
. . .
[nginx-http-auth]
enabled = true
. . .
When you are finished editing, save and close the file. At this point, you can enable your Fail2ban service so that it will run automatically from now on. First, run systemctl enable
:
- sudo systemctl enable fail2ban
Then, start it manually for the first time with systemctl start
:
- sudo systemctl start fail2ban
You can verify that it’s running with systemctl status
:
- sudo systemctl status fail2ban
Output● fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enab>
Active: active (running) since Mon 2022-06-27 19:25:15 UTC; 3s ago
Docs: man:fail2ban(1)
Main PID: 39396 (fail2ban-server)
Tasks: 5 (limit: 1119)
Memory: 12.9M
CPU: 278ms
CGroup: /system.slice/fail2ban.service
└─39396 /usr/bin/python3 /usr/bin/fail2ban-server -xf start
Jun 27 19:25:15 fail2ban22 systemd[1]: Started Fail2Ban Service.
Jun 27 19:25:15 fail2ban22 fail2ban-server[39396]: Server ready
In the next step, you’ll demonstrate Fail2ban in action.
From another server, one that won’t need to log into your Fail2ban server in the future, you can test the rules by getting that second server banned. After logging into your second server, try to SSH into the Fail2ban server. You can try to connect using a nonexistent name:
- ssh blah@your_server
Enter random characters into the password prompt. Repeat this a few times. At some point, the error you’re receiving should change from Permission denied
to Connection refused
. This signals that your second server has been banned from the Fail2ban server.
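Fail2ban also records each detection and ban in its own log on the server. The path below is the default on Ubuntu, and the log lines are an illustrative sketch; your timestamps and IP address will differ:
- sudo tail /var/log/fail2ban.log
Output2022-06-27 19:31:10,826 fail2ban.filter         [39396]: INFO    [sshd] Found 134.209.165.184 - 2022-06-27 19:31:10
2022-06-27 19:31:11,327 fail2ban.actions        [39396]: NOTICE  [sshd] Ban 134.209.165.184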
On your Fail2ban server, you can see the new rule by checking your iptables
output. iptables
is a command for interacting with low-level port and firewall rules on your server. If you followed DigitalOcean’s guide to initial server setup, you will be using ufw
to manage firewall rules at a higher level. Running iptables -S
will show you all of the firewall rules that ufw
already created:
- sudo iptables -S
Output-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-N f2b-sshd
-N ufw-after-forward
-N ufw-after-input
-N ufw-after-logging-forward
-N ufw-after-logging-input
-N ufw-after-logging-output
-N ufw-after-output
-N ufw-before-forward
-N ufw-before-input
-N ufw-before-logging-forward
-N ufw-before-logging-input
-N ufw-before-logging-output
…
If you pipe the output of iptables -S
to grep
to search within those rules for the string f2b
, you can see the rules that have been added by fail2ban:
- sudo iptables -S | grep f2b
Output-N f2b-sshd
-A INPUT -p tcp -m multiport --dports 22 -j f2b-sshd
-A f2b-sshd -s 134.209.165.184/32 -j REJECT --reject-with icmp-port-unreachable
-A f2b-sshd -j RETURN
The line containing REJECT --reject-with icmp-port-unreachable
will have been added by Fail2ban and should reflect the IP address of your second server.
You should now be able to configure some banning policies for your services. Fail2ban is a useful way to protect any kind of service that uses authentication. If you want to learn more about how fail2ban works, you can check out our tutorial on how fail2ban rules and files work.
For information about how to use fail2ban to protect other services, you can read about How To Protect an Nginx Server with Fail2Ban on Ubuntu 14.04 and How To Protect an Apache Server with Fail2Ban on Ubuntu 14.04.
A cloud server is internet infrastructure that provides computing resources to users remotely. You can think of a cloud server as a private computer that you can set up and control in the same way as an on-premise computer, such as a laptop or desktop. This conceptual article outlines several key components of cloud server architecture, the difference between cloud servers and other cloud offerings, and how to determine which cloud offering is right for your website or web application.
Note that you will sometimes see “cloud server,” “web server,” and plain “server” used interchangeably. Typically, a cloud server refers to an entire Linux environment, or effectively an entire computer. In practice, cloud servers will always be running as virtual machines, or software systems that emulate computers, within much larger server clusters in a process known as virtualization. For more information about this technical context, you can review An Introduction to Cloud Hosting.
To understand cloud servers, it’s helpful to understand the type of software that runs in the cloud.
Operating systems: To set up a cloud server, one of the first things you need to do is install an operating system. Today, nearly all cloud customers use a Linux-based operating system (such as Ubuntu or Rocky Linux) due to broad support, free or flexible licensing, and overall ubiquity in server computing. You can refer to How to Choose a Linux Distribution for more information.
Server-side software: This is a class of software that’s designed to run in a cloud environment, which does not have a desktop environment or a display connected to it. Usually, this means that the software is installed and configured via a command line interface, and then accessed by regular users through a web browser or another application. Though the types of software and tooling you install on your cloud server can vary greatly, understanding a few key components will help prepare you to plan and set up your own cloud server.
Web servers: This software enables your cloud server to communicate with users or applications on the internet using the HTTP protocol. Server-side software, like a web server, has to respond in a well-defined way to certain types of requests from clients or client-side software. For example, when a user enters a URL into a web browser, the web browser (known here as the client) makes a request to the server. In response, the server fetches the HTML document and sends it back to the browser where it is loaded as a web page. If you are setting up a cloud server from scratch to host a website or web application, you will likely need to install and set up web server software, with Nginx and the Apache HTTP Server being the two most popular options. You can read more about web server software in our guide An Introduction to Web Servers.
API servers: APIs (Application Programming Interfaces) are software intermediaries that enable applications to communicate with one another. A web server is a type of API server that implements HTTP APIs. There are many other types of APIs that enable your cloud server to send or receive data to and from external applications and data resources, such as pulling weather data, flight information, or other types of data to use with your application. Individual API implementations are also sometimes called API endpoints, or just “endpoints”.
Database servers: Database servers, also called databases, are another type of API server. Unlike web servers, which can be accessed via a web browser and usually render an HTML interface, database servers are usually accessed via a database query API. Some database deployments will be externally facing, and can implement their own web interfaces for anyone needing to interact with them in a browser, whereas others may only be internally accessible to your other cloud software via these queries.
Note: Running Linux without virtualization of any kind on a dedicated physical machine not shared with other tenants is usually called bare-metal hosting. Although relatively few cloud providers still offer bare-metal servers other than at the very high end, the most common modern equivalent to running a bare-metal server is running a Linux environment on a Raspberry Pi, usually for smaller projects.
Because a cloud server is effectively a whole virtual computer, other cloud product offerings can be understood in relation to them. For example, some cloud providers will offer dedicated web hosting, or dedicated database hosting. Any product offering that provides a database or a web server on its own has effectively abstracted out the actual cloud server in the equation. There are various ways of doing this, which will typically still involve virtualized server clusters, but the principle is consistent. The primary distinction is that a cloud server (sometimes called a VPS, or virtual private server, to clarify that it is a virtual machine) can be made to run any software in any way, whereas any other cloud offering is effectively an optimized and constrained subset of server features.
The market for these offerings has changed considerably over the past few decades. Before virtualization was widely available, there used to be a market of web hosts who would instead provision a web server like Nginx (or at that time, Apache) to support dozens of different users with their own unique sets of permissions, and offer hosting per-user. This was convenient because it did not require users to take on any server administration duties, but it was limited in practice to only supporting static websites (i.e., HTML, CSS, and JavaScript only, with no backend engine) or drop-in PHP applications that had no dependencies other than the web server.
Since then, VPS offerings — full cloud servers — have become more commonly available. Committing to running an entire cloud server, especially in a production deployment, requires a certain amount of knowledge of Linux best practices, usually formalized in dedicated System Administration (“sysadmin”) or Development and Operations (“DevOps”) roles for dealing with security, deployment, and so on. Being able to perform these roles on an occasional or an as-needed basis is very useful, but can be complex. This is especially true when considering that it is not strictly necessary to know how to interact with a Linux server or a command line at all to develop most software.
Cloud servers typically have a number of security features built into them, and it is not necessary to provision a commercial-scale production deployment to safely and reliably run open-source software on a cloud server. Most server packages ship with carefully configured default settings and are frequently updated to avoid any security risks. It is often sufficient to deploy a firewall like ufw
that can expose network ports on an individual basis to keep a server secure, or to at least offload the responsibility for that security to the maintainers of software like Nginx, which is used on millions of servers worldwide.
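For instance, a minimal ufw setup on a fresh Ubuntu server often amounts to a few commands. This sketch assumes the OpenSSH server package is installed and that you only want to expose SSH to begin with; you would allow additional ports as your software requires them:
- sudo ufw allow OpenSSH
- sudo ufw enable
- sudo ufw status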
There are also other modern offerings which are more comparable to drop-in web hosts. Modern static websites can use modern JavaScript features to, in some cases, eliminate the need for a backend server entirely. Some cloud providers refer to this type of hosting as a “headless CMS” and provide other authoring tools and web forms as part of a larger software-as-a-service offering.
In addition to this static site functionality, some providers also support deploying what are called serverless functions. These are one-off scripts that can leverage backend server functionality on a discrete basis, which are deployed into an environment that can run them directly. When used together with static site deployments, this approach is sometimes called the Jamstack.
Static site and serverless deployments are highly portable and, like legacy web hosting, they avoid nearly all of the security and maintenance concerns around full server deployments. However, they are far more limited in scope. For example, as part of your stack, you may need to deploy a Docker container behind an Nginx web server in a particular way: for this, or any configuration like this, you need an entire cloud server.
In general, any software that can be deployed to a cloud server can also be deployed to a local computer. Although the differences can be instructive – notably, many people do not run Linux on their local computers, and server-side software isn’t always packaged to work directly on macOS or Windows – those differences are small in practice. This is the main value offering of a cloud server: for all intents and purposes, it is an entire computer that you can do anything with.
Like bare-metal computers, cloud servers will be more performant depending on their hardware specifications, and are priced accordingly. Each cloud server is allocated a certain amount of resources within the cluster. Unlike bare-metal computers, cloud server specs can be quickly scaled up and down as needed. When assessing servers, you should have an idea of how these specifications will impact your needs.
Cloud servers are typically provisioned by their number of available CPU cores, their total available memory (RAM), and their attached disk storage. While disk speed and CPU speed typically vary under real-world conditions, most cloud providers have standardized on an average disk speed roughly comparable to consumer solid-state disk drives (SSDs) and a CPU speed comparable to an Intel Xeon core. Some providers will also constrain lower-tier cloud servers by their total allowed number of disk input/output operations (IOPS) or their total allowable network traffic, after which traffic may be throttled, causing bottlenecks for some software.
Almost all cloud providers will also allow you to purchase additional storage, such as block storage or object storage, that can be attached to your VPS on an as-needed basis. It is usually a good idea to use this additional storage rather than continuing to expand the baseline storage allocation of your VPS. Storing all of your data on a single root partition can make scaling more challenging.
To be accessible on the open internet, cloud servers must have a public IP address assigned to them. This can be an IPv4 address, which follows the pattern 123.123.123.123
, or an IPv6 address, which follows the pattern 2001:0db8:0000:0000:0000:ff00:0042:8329
. Almost all network-capable software can parse and access these IP addresses directly, though most of the time, server IP addresses will sit behind an assigned domain name, such as https://my_domain.com
. Some cloud providers will automatically allocate you one IP address for each VPS, whereas others may require you to purchase IP addresses and assign them to your servers individually. These are called reserved IPs, and they can be more flexible in large deployments.
Domain names are usually purchased and configured from separate registrars using DNS records, although some cloud providers will offer both products together.
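Once a DNS record is in place, you can verify that a domain resolves to your server’s IP address from any terminal with a tool like dig. Here, my_domain.com is a placeholder and the address returned is illustrative:
- dig +short my_domain.com
Output203.0.113.51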
To connect and work with cloud servers, you will need to know how to work in a terminal environment, both locally and remotely. Remote terminal connections mostly make use of a protocol called SSH, or Secure Shell. Along with HTTP, this is one of the most commonly used protocols, although SSH is naturally used more often by administrators rather than end users. HTTP runs on port 80
(and port 443
for HTTPS). SSH typically runs on port 22
. Cloud administration can be broadly understood in terms of these protocols, servers, and services.
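On a server, you can check which services are actually listening on these ports with the ss utility included in most modern Linux distributions. The following is an abbreviated, illustrative sketch showing SSH on port 22 and a web server on port 80:
- sudo ss -tln
OutputState    Recv-Q   Send-Q     Local Address:Port      Peer Address:Port
LISTEN   0        128              0.0.0.0:22             0.0.0.0:*
LISTEN   0        511              0.0.0.0:80             0.0.0.0:*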
This curriculum provides an overview of evaluating, working with, and understanding the landscape of cloud servers. It is helpful to understand the range of product offerings, and the way that deployment preferences have changed over time, to leverage existing software documentation for your own use cases. A DigitalOcean Droplet – a cloud VPS – is a good starting point for many different projects:
Tutorials
A General Introduction to Cloud Computing. This tutorial provides an overview of the history and the business context of cloud computing. It contrasts different service models and explains other considerations around risks, costs, and privacy.
Initial Server Setup. This is a collection of DigitalOcean’s “Initial Server Setup” articles for many popular Linux environments, designed to get you up and running with SSH, a package manager, and a firewall as efficiently as possible.
A Linux Command Line Primer. This tutorial covers the essentials of working on a command line, including many core Linux commands, shortcuts, and the fundamentals of argument syntax and directory navigation.
SSH Essentials: Working with SSH Servers, Clients, and Keys. This tutorial explains the mechanics of SSH, or secure shell, which is the universally preferred method of connecting to and working with remote servers using a terminal.
Products
Refer to DigitalOcean’s Droplet overview for information on pricing, add-ons such as backups or floating IP addresses, and getting started.
You can watch this video guide to choosing the right Virtual Machine for your business.
Review DigitalOcean’s different Droplet plans to understand the optimization options available for faster CPUs, storage or memory.
After reading An Introduction to the Linux Terminal and A Linux Command Line Primer, you should have a good grasp of the fundamentals of working in a modern command line environment. However, many users who are new to the command line may still find it fairly restrictive. This tutorial is designed to give these users more background on command line interfaces, as well as advice around customization and cross-platform portability. The goal is to feel just as comfortable in a terminal environment as you are with using a computer in any other way.
Because getting started with a terminal on Windows can be less intuitive than other platforms, the first section of this tutorial will cover Windows terminal environments. If you are using macOS or Linux, you can skip to the following section.
On Windows, there are many choices for a Terminal equivalent. Historically, Windows did not use Unix-style command line shells, such as bash
, which have been ubiquitous on macOS and Linux since the early 2000s. It also lacked special features for highlighting text, opening multiple tabs, and so on. This is because it did not use common command-line terminal interfaces, sometimes called terminal emulators because they emulate the interfaces of older non-graphical computers.
Instead, Windows offered two of its own native command line interfaces: the Windows Command Prompt, and from Windows 7 onward, the Microsoft PowerShell. The Command Prompt, also called cmd.exe
, uses legacy MS-DOS syntax with relatively few additions. PowerShell provides somewhat more modern syntax relative to cmd.exe
(where “modern” in this context means “closer to a modern macOS or Linux shell”), as well as signal handling functions specific to certain Windows software, making it useful for Windows administrators.
Neither of these Windows shells include many fundamental features of modern Unix-style shells, and they are generally not well suited to most cloud development. Because of this, users who needed to work on cloud servers from Windows would usually install software like PuTTY (a tty
is the historical name of a Unix terminal), MobaXterm, or ConEmu. Each of these applications would usually include their own terminal interface, as well as a built-in SSH client for connecting to remote servers. These two features are not usually thought of as analogous on other platforms, but on Windows, it was usually a safe assumption that if you needed a Unix-style terminal, you were going to be working on a remote server, so they were often packaged together. For this reason, using a dedicated SSH GUI like PuTTY is still a popular way of working with cloud servers from Windows.
On other platforms, ssh
is just a command-line program that you can run from a terminal, and it is part of a core group of Linux utilities. It is also possible to install these core Linux utilities on Windows, along with a port of the standard bash
shell. Originally, this functionality was provided by the Cygwin project, which includes ports of many other Linux system tools. Installing Git on Windows also provides its own bash
shell from the MSYS2 project, which shares some functionality and upstream code with Cygwin. This would allow you to, for example, ssh
from the command line on Windows – as well as using common Linux utilities like grep
or cat
– without needing another program to provide this functionality.
These ports of Linux utilities to Windows are typically robust and well-maintained. However, they have a couple of significant downsides which have made them less popular than using similar functionality on macOS or Linux. One is that they generally do not include full-fledged package managers for installing other command line software as needed, which is a core assumption of most Linux environments (also provided by tools like Homebrew on macOS). In recent years, Windows has gained its own ecosystem of command line package managers, such as Chocolatey and Scoop, that can be used with Git’s bash
shell or other environments. Like Homebrew on macOS, these are especially useful on a local machine because they can be used to install desktop software in addition to command line tools. However, because most cloud software is still not designed to run natively on Windows, Chocolatey and Scoop’s package repositories are often less complete than their macOS or Linux equivalents. Using them often requires you to translate install documentation written for more common platforms into a working Windows equivalent, making them unintuitive for beginner users.
The other downside is that PuTTY, Git Bash, and other all-in-one Windows terminal environments usually have very barebones UI features, lacking support for syntax highlighting or tabs. Because the Windows Command Prompt has not visibly improved for some time, Windows users may be pushed toward more awkward workflows than users of a modern terminal like iTerm2 on macOS. To approximate an environment like this, you could instead combine Git’s bash
shell, Chocolatey’s package manager, and ConEmu’s terminal customization. This produces a very usable result, but requires a unique configuration that can be considered brittle or less reproducible than other platforms.
Note: It is also possible to open multiple tabs within a single shell without needing to rely on any additional features of your terminal emulator by using a Terminal Multiplexer such as tmux. However, this usually has a higher learning curve than making use of application-level features.
Nowadays, you can use the new Windows Terminal along with WSL2, also called the Windows Subsystem for Linux. These two projects substantially solve many outstanding issues. The Windows Terminal provides similar functionality to ConEmu, such as tabs, highlighting, and modern text rendering, and it can be used with any installed shell. It is also installed by default on Windows 11, significantly lowering the barrier to access and the need for third-party tools to do this work. WSL2 accomplishes a related goal: by allowing users to install a Linux compatibility environment that runs within Windows and is directly supported by Microsoft, they (mostly) no longer need to consider the entire set of other Windows command line tools.
WSL2 is a full Linux environment that runs within Windows. Because it is not a port of Linux tools to Windows, it comes with significant advantages. When working within a WSL2 environment, you can use apt
or Homebrew or any other native Linux tools to install and run software exactly as you would on a cloud server. This also has some downsides. Unlike with a macOS or Linux terminal, you can’t run native desktop software from WSL2, only Linux software that is installed into your Linux environment. In many cases this will be sufficient, especially if you are mostly using your terminal to deploy software to remote servers or make small configuration changes. However, you may still want to configure Windows Terminal to launch multiple different shell environments depending on your needs: for example, one using Git Bash and Chocolatey, which works with your native Windows software, and one using WSL2, which provides a full package manager and allows you to follow Linux documentation as-is. Many of these tools now have mutual support for one another built in, making this relatively straightforward, and very powerful.
Although bash
is the most common shell on modern Linux distributions, and its syntax is considered the most widely compatible with most environments, it is not the only one. Bourne shell syntax, or /bin/sh
, is a subset of bash
, and is sometimes still used in minimal environments such as containers. There is also the Z shell, or zsh
, which is becoming more popular due to its more flexible MIT license and its configurability. As of 2020, it is the default Terminal shell on macOS, and it is widely available in other environments.
Note: You can change your default shell in any modern command line environment by using the chsh command. This allows you to switch between bash
, zsh
, or others.
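For example, to make zsh your default shell, you can pass its full path to chsh. The shell must be listed in /etc/shells, and you may be prompted for your password:
- chsh -s $(which zsh)
The change takes effect the next time you log in or open a new terminal.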
Zsh provided earlier and more widespread support for theming, text highlighting, and rendering of non-text characters (also called glyphs, essentially an earlier form of Emoji) than the default bash
shell in many environments. Because of this, there is a larger ecosystem of terminal customization tools for Zsh, such as Oh My Zsh. More importantly, Zsh tools and documentation usually prioritize installing fonts with support for complex glyphs, such as the Powerline fonts, which is helpful for solving text rendering problems in other environments.
One downside that potential Zsh users quickly realize is that zsh
and bash
do not have strictly identical syntax, and bash
shell scripts are the most common by far. While most everyday shell conventions for navigating directories are the same, some differences arise. This includes testing equivalency with comparison operators, complex filesystem search strings, and so on.
As a general rule, if you are writing a shell script with syntax more complex than bash
accommodates, you should consider a different scripting language. Shell syntax is powerful, but it can also be unwieldy. The Go language, for example, has become popular for writing command line tools that use more advanced flow control than is appropriate for shell scripts. With some exceptions, even dedicated users of zsh
do not usually write and maintain standalone zsh
scripts.
With that in mind, you should not worry about any compatibility issues arising from using zsh
as your default, interactive shell. Virtually all environments where you can run zsh
will also have the bash
interpreter installed to run any bash
scripts as needed. When you are writing a standalone shell script, or running a shell script that you downloaded from elsewhere, the first line will normally contain #!/bin/bash
. This is known as a shebang, or an interpreter directive, which tells the computer which program, or interpreter, to use to run the script by default. This is especially important for shell scripts which cannot otherwise be distinguished by their file extension: both bash
scripts and other shell scripts all end in .sh
. Because bash
is always installed at /bin/bash
in compatible environments (including Git Bash on Windows), providing a complete path here is a widespread and safe convention.
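As a brief illustration of the convention, a standalone script might look like the following. The filename and contents are purely illustrative:
#!/bin/bash
# greet.sh: print a greeting for each argument passed to the script
for name in "$@"; do
    echo "Hello, ${name}!"
done
After making it executable with chmod +x greet.sh, you could run it as ./greet.sh sammy, and the shebang ensures it runs under bash even if your interactive shell is zsh.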
An example of advanced shell syntax that is supported in zsh
by default, but has to be enabled manually in bash
, is the globstar, or **
pattern. Globbing is another name for performing fuzzy-match searching, i.e., searching for files using wildcard *
characters. The **
globstar allows you to use wildcard substitution not only within a single filename, but for entire directories as well. For example, if you were searching for a file named config.txt
somewhere in your home directory, but you didn’t know where to look, you might need to use a command like find
, which is designed to recursively search through nested directories. Even if you already know find
syntax, this would add an additional step to your process where you might otherwise have used a wildcard.
By using a globstar, you could instead provide a path like ~/**/config.txt
to any other command. The shell would automatically expand the search for you, without needing to invoke find
directly. Shells are good at providing this kind of functionality — what would normally require an entire other dependency can be incorporated into a single additional character substitution. To enable globstar use in bash,
you can run shopt -s globstar:
- shopt -s globstar
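Once enabled, you can use the globstar directly with everyday commands. Using the config.txt scenario from above, this would print the path of every matching file anywhere under your home directory:
- ls ~/**/config.txt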
You can also experiment with glob search strings using DigitalOcean’s Glob Tool.
As mentioned, the Go language has become very powerful for building modern command line tools. Another relatively new programming language that is especially relevant to command line environments is Rust. Rust is a low-level language that is generally considered similar to C++ and other C-likes, but with much more modern syntax and less of C’s accumulated baggage. Rust is popular for many use cases, including WebAssembly, but it is particularly useful for rewriting C code.
Nearly all of the core utilities that are thought of as essential to command line environments, such as ssh
, curl
, and cat
, are written in the C language, in many cases going back multiple decades. This is why many of them have so many dozens of options that have been added over years of active maintenance. In many ways, Linux has been built up around these specific tools, and they are unlikely ever to be officially replaced. However, there have been recent efforts to develop improved versions of each of them using Rust.
Note: “Core utilities,” or coreutils, is generally taken to be the actual name for this collection of programs. Many of these are often assumed to be actual terminal commands rather than programs – for example, cd
is terminal syntax, whereas cat
is a small, replaceable program. On macOS, you can even use the Homebrew package manager to replace the built-in macOS coreutils with the more common GNU/Linux coreutils.
For example, bat provides more syntax highlighting and similar features to cat
, dust is similar to using du
to check disk usage but with more sophisticated graphical output, and ripgrep, or rg
, is a much faster implementation of grep
.
The relative advantages of these new Rust tools should not be taken as a judgment on C or Rust themselves, or on their relative performance. Rather, it is a consequence of maintaining the same codebases for a long time. Many core Linux utilities are not prioritizing getting faster, and in most cases, maintaining these utilities is about ensuring that they do not break compatibility with any legacy functionality. Most of these Rust utilities have been added to different platforms’ package managers, but they are still treated as new and optional.
The most significant downside to these new Rust tools is that many people may actually forget to use them, since their habits of using cat
or grep
will quickly become ingrained. To work around this, or to further customize your shell environment in general, you can use a profile file.
Both bash
and zsh
support profile files. A profile file is a file in your home directory called .bash_profile
or .bashrc
for bash, or .zshrc
for zsh. Often, they are created automatically when you first launch your shell, but because they start with a .
character, they are treated by the system as hidden, and remain invisible unless edited directly. Profile files can contain a number of settings that help to initialize your shell environment, such as aliases from one command to another.
With this method, you can create an alias in your ~/.bash_profile
to make the grep
command always run rg
instead of your system grep
. As mentioned, these Rust tools are not usually installed by default, so you would first need to install rg
. If you are using Homebrew, you can install it from the provided ripgrep package.
- brew install ripgrep
Next, open ~/.bash_profile
using nano
or your favorite text editor. This file may or may not already exist if you already have some profile settings configured.
- nano ~/.bash_profile
Then, add a line to the end of the file that includes the alias
command:
…
alias grep='rg'
Save and close the file, then close and open a new bash
terminal. From now on, when you run grep
, you’ll get rg
instead:
- grep --version
Outputripgrep 12.1.1
-SIMD -AVX (compiled)
+SIMD +AVX (runtime)
You should be aware that if you copy and paste a grep
command that you find elsewhere, and it uses functionality that isn’t present in rg
, it may not work correctly. For this reason, you should use aliases sparingly. However, they are useful for creating shortcuts that do not outright replace other commands. For example, the Python programming environment comes with a built-in web server that can serve small static sites by running python -m http.server port_number
. If you use this feature often, you may want to create an alias along the lines of alias pyserver="python -m http.server 8000"
.
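As a sketch, that alias would sit in your profile file alongside any others, with the port number being arbitrary:
alias pyserver="python -m http.server 8000"
After reopening your terminal, running pyserver in any directory would serve that directory’s contents at http://localhost:8000.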
Note: Aliases in ~/.bash_profile
will not override any commands in standalone bash scripts, because your ~/.bash_profile
is only loaded into interactive bash shells by default. You can manually load your ~/.bash_profile
by using the command source ~/.bash_profile
, but it is usually better for compatibility reasons to run standalone scripts in a stock environment.
Each terminal session is configured with a number of environment variables by default. Many of these are set automatically by the system, but you can provide additional overrides in ~/.bash_profile
, or interactively within a single shell session. To view your current environment variables, run env
:
- env
OutputSHELL=/usr/local/bin/bash
ITERM_PROFILE=bash
COLORTERM=truecolor
XPC_FLAGS=0x0
TERM_PROGRAM_VERSION=3.4.15
…
Many of these contain information about your shell itself, which will not be particularly useful. One exception to this is the PATH
variable, which contains a list of all the directories that are automatically checked to run command-line programs. You can pipe with grep
to output only the line containing the PATH
variable from the env
command:
- env | grep PATH
OutputPATH=/usr/local/opt/mysql-client/bin:/Users/sammy/.gem/ruby/3.0.0/bin:/usr/local/opt/ruby/bin:/usr/local/lib/ruby/gems/3.0.0/bin:/Users/sammy/.cargo/bin:/Users/sammy/.rbenv/shims:/Users/sammy/.pyenv/shims:/Users/sammy/.pyenv/bin:/Users/sammy/Library/Python/3.9/bin:/usr/local/opt/gnu-sed/libexec/gnubin:/usr/local/sbin:/usr/local/opt/libpq/bin:/usr/local/opt/coreutils/libexec/gnubin:/usr/local/opt/findutils/libexec/gnubin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin:/opt/X11/bin:/Library/Apple/usr/bin:/Library/Frameworks/Mono.framework/Versions/Current/Commands
Note the directories in this list are separated by a :
character. In general, your PATH variable will be longer on Windows or macOS than on Linux. Linux tries to enforce installing command-line programs only into directories like /usr/bin
or /usr/local/bin
, whereas Windows and Mac software is often installed to other directories that need to be added to your PATH for those commands to be available on the command line. Installing software via a package manager usually takes care of this. While you can manually move programs into /usr/local/bin
or another directory on your path, be aware that package managers expect to be able to manipulate the contents of these directories. Any programs that you install manually could be overwritten, or cause an error when attempting to be overwritten.
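As an illustration, if you keep your own scripts in a directory such as ~/bin (a hypothetical location), you could prepend it to your PATH by adding a line like this to ~/.bash_profile:
export PATH="$HOME/bin:$PATH"
Because directories listed earlier in the PATH take precedence, prepending the directory ensures that programs in it are found before identically named programs elsewhere.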
You can check the installed location of a command line program by using the which
command:
- which python
Output/Users/sammy/.pyenv/shims/python
This is useful for verifying which version of a program will run by default when, for example, python
is run from the command line. If a program is present in multiple directories on your PATH, the directory that is listed first in your PATH variable will take precedence. Python is a notorious example. Because macOS and Linux both include a version of Python with the system that is sometimes updated either too frequently or too infrequently, tools like pyenv are designed to register a separate installation of Python as early on your path as possible. This ensures that you are always working directly with and installing additional libraries for the pyenv
-provided Python.
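If your which implementation supports the -a flag (most do), you can list every match on your PATH in precedence order, which is handy for diagnosing version conflicts. The paths in this output are illustrative:
- which -a python
Output/Users/sammy/.pyenv/shims/python
/usr/bin/python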
Any time you do not get the expected result from a command, ask yourself which program is actually being run, where on your PATH it was found, and what environment it inherited.
The variable inheritance behavior of your PATH and your shell environment is usually straightforward — you may just have multiple programs making other, conflicting assumptions. Knowing how to check and configure your environment will go a long way toward helping you be comfortable on the command line.
In this tutorial, you reviewed many nuances of terminal environments, including configuration on Windows, differences between bash
and other shells, modern command-line utilities, and environment variables including paths and aliases. Many of these are overlooked by new developers, and each of them can make working either locally or in the cloud much more pleasant and effective.
Next, you may want to learn to work with DigitalOcean’s command line client, doctl.
]]>To set up a cloud server, one of the first things you need to do is install an operating system. In a modern context, this means a Linux operating system almost all of the time. Historically, both Windows servers and other types of Unix were popular in specific commercial contexts, but almost everyone now runs Linux due to its broad support, free or flexible licensing, and overall ubiquity in server computing. There are many Linux distributions available, each with their own maintainers, with some backed by commercial providers and some not. The distributions detailed in the following sections are some of the most popular operating systems for running cloud servers.
Ubuntu is one of the most popular Linux distributions for both servers and desktop computers. New Ubuntu versions are released every six months, and new long-term support versions of Ubuntu are released every two years and supported for five. Most educational content about Linux reflects Ubuntu due to its popularity, and the breadth of available support is a significant point in its favor.
Debian is upstream of Ubuntu, meaning its core architectural decisions usually inform Ubuntu releases, and Ubuntu uses the same .deb package format and apt
package manager as Debian. Debian is not as popular for production servers due to its conservative packaging choices and lack of commercial support. However, many users pick Debian due to its portability and its use as a baseline for many other Linux distributions on different platforms, including Raspbian, the most popular Raspberry Pi OS.
Red Hat Enterprise Linux, or RHEL, is the most popular commercially supported Linux distribution. Unlike the Debian family, it uses .rpm packages and a package manager called dnf
, along with its own ecosystem of tools. For licensing reasons, RHEL is generally only used where there is a commercial support agreement in place.
Rocky Linux is downstream of Red Hat the way that Ubuntu is downstream of Debian, and unlike RHEL is free to use like most other Linux distributions, making it a popular choice for users that have adopted Red Hat tooling but may not be using Red Hat’s commercial support. Previously, a distribution called CentOS filled the same role as Rocky Linux, but its release model is changing. Rocky Linux versions track closely with RHEL versions, and most documentation can be shared between the two.
Fedora Linux is upstream of Red Hat, and like Ubuntu, is used in desktop environments as well as on servers. Fedora is the de facto development home of most RHEL ecosystem packages, as well as of the Gnome desktop environment, which is used as a default by Ubuntu and others.
Arch Linux is another popular desktop-focused Linux distribution which is not a member of either the Debian or the Red Hat Linux family, but provides its own unique packaging format and tools. Unlike the other distributions, it does not use release versioning of any kind — its packages are always the newest available. For this reason, it is not recommended for production servers, but provides excellent documentation, and can be very flexible for knowledgeable users.
Alpine Linux is a minimal Linux distribution which does not provide many common tools by default. Historically there have been many Linux distributions created with this goal in mind. Alpine is commonly used in modern containerized deployments such as Docker, where your software may need a virtualized operating system to run in, but needs to keep its overall footprint as small as possible. You would generally not work directly in Alpine Linux unless trying to prototype a container.
Previously, there were more differences between distributions in their choice of init system, window manager, and other libraries, but nearly all major Linux distributions have now standardized on systemd and other such tools.
There are many other Linux distributions, but most of the others can currently be understood in relation to these seven. As you can tell from this overview, most of your selection criteria for Linux distributions will come down to release cadence, packaging ecosystem, and the availability of commercial support.
Choosing a distribution is down to preference, but if you are working in the cloud and do not have any production requirements for the Red Hat ecosystem, Ubuntu is a popular default choice. You can also review the available packages for a given distribution from their web-facing package repositories. For example, the Ubuntu 22.04 “Jammy Jellyfish” packages are hosted under the Jammy section of Ubuntu.com.
Most Linux distributions also differ significantly in how third-party packages — packages not available from the repository’s own package sources — are created, discovered, and installed. Red Hat, Fedora, and Rocky Linux generally use only a few popular third-party package repositories in addition to their official packages, in keeping with their more authoritative, production-minded approach. One of these is the Extra Packages for Enterprise Linux, or EPEL. Because the RHEL ecosystem draws a distinction between packages that are commercially supported and those that aren’t, many common packages that are available out of the box on Ubuntu will require you to configure EPEL to install them on Red Hat. In this and many other cases, which packages are available upstream from your distribution’s own repositories is often a matter of authoritativeness and maintenance responsibility more than anything else. Many third-party package sources are widely trusted; they may just be out of the scope of your distribution’s maintainers.
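As a sketch, on Rocky Linux the EPEL repository can be enabled by installing its release package, which is available from the distribution’s default repositories:
- sudo dnf install epel-release
Once enabled, packages from EPEL install with dnf like any other package.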
Ubuntu allows individual users to create PPAs, or personal package archives, to maintain third-party software for others to install. However, using too many PPAs concurrently can cause incompatibility headaches, because Debian and Ubuntu packages are all versioned to have specific requirements, so PPA maintainers need to match Ubuntu’s upstream updates fairly closely. Arch Linux has a single repository for user-submitted packages, fittingly called the Arch User Repository or AUR, and although their approach seems more chaotic by comparison, it can be more convenient in practice if you use dozens of third-party packages.
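To illustrate, adding a PPA on Ubuntu uses the add-apt-repository command; the PPA name here is a placeholder rather than a real archive:
- sudo add-apt-repository ppa:example-user/example-software
- sudo apt-get update
The first command registers the PPA as a package source, and the second refreshes the package index so its packages become installable.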
You can also avoid adding complexity to your system package manager by instead installing third-party software through Homebrew or through Docker. Although “Dockerized” or containerized deployments can be inefficient in terms of disk usage and installation overhead, which is where Alpine Linux comes into consideration, they are portable across distributions and do not impose any versioning requirements on your system. However, any packages not installed by your system package manager may not receive automatic updates by default, which should be another consideration.
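As a sketch of the containerized approach, assuming Docker is installed, you could start a disposable Alpine container and install a package inside it with Alpine’s apk package manager, leaving the host system untouched (the second command is run inside the container’s shell):
- docker run -it --rm alpine:latest sh
- apk add curl
The --rm flag removes the container when you exit, so nothing from the experiment persists on your system.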
In this tutorial, you reviewed some of the most important considerations in choosing a Linux distribution for your cloud. The now-widespread use of Docker and other container engines means that choosing a distribution is not quite as impactful in terms of the software you’re able to run as it was in the past. However, it still factors heavily into how you’ll obtain support for your software, and should be a significant consideration as you scale your infrastructure for production.
To learn more about how to work with the system package manager on different Linux distributions, refer to Package Management Essentials.
Everyone has problems with their web server or site at one time or another. Learning where to look when you come across a problem and which components are the likely culprits will help you fix these problems as quickly and robustly as possible.
In this guide, you’ll gain an understanding of how to troubleshoot these issues so that you can get your site back up and running.
The majority of problems that you’ll encounter when trying to get your site up and running fall into a predictable set of categories.
We will go over these in more depth in the sections below, but for now, here’s a checklist of items to look into: whether your web server is installed and running, whether its configuration files contain syntax errors, whether the web ports are open, whether DNS is pointing your domain at the right server, whether your virtual hosts or server blocks, document root, and index files are configured correctly, whether file permissions and ownership allow access, and whether any database backend is up and reachable.
These are some of the common problems that administrators come across when a site is not working correctly. The exact issue can usually be narrowed down by taking a look at the different components’ log files and by referencing the error pages shown in your browser.
Below, we’ll go through each of these scenarios so that you can make sure your services are configured correctly.
Before blindly trying to track down a problem, try to check the logs of your web server and any related components. These will usually be in /var/log
in a subdirectory specific to the service.
For instance, if you have an Apache server running on an Ubuntu server, by default the logs will be kept in /var/log/apache2
. Check the files in this directory to see what kind of error messages are being generated. If you have a database backend that is giving you trouble, that will likely keep its logs in /var/log
as well.
Another thing to check is whether the services themselves output error messages when they are started. If you attempt to visit a web page and get an error, the error page can contain clues too (although not as specific as the lines in the log files).
Use a search engine to try to find relevant information that can point you in the right direction. In many cases, it may be helpful to paste a snippet of your logs directly into a search engine to find other examples of the same issue. The steps below can help you troubleshoot further.
The first thing you will probably need to serve your sites correctly is a web server. In some cases, your web pages may be served directly by a Docker container or some other application, and you won’t actually need to install a dedicated web server, but most deployments will still include at least one.
Most people will have installed a server before getting to this point, but there are some situations where you may have accidentally uninstalled the server when performing other package operations.
If you are on an Ubuntu or Debian system and need to install the Apache web server, you can type:
- sudo apt-get update
- sudo apt-get install apache2
On these systems, the Apache process is called apache2
.
If you are running Ubuntu or Debian and want the Nginx web server, you can instead type:
- sudo apt-get update
- sudo apt-get install nginx
On these systems, the Nginx process is called nginx
.
If you are running RHEL, Rocky Linux, or Fedora and need to use the Apache web server, you can type:
- sudo dnf install httpd
On these systems, the Apache process is called httpd
.
If you are running RHEL, Rocky Linux, or Fedora and want to use Nginx, you can type the following (omit sudo if you are logged in as root):
- sudo dnf install nginx
On these systems, the Nginx process is called nginx
. Unlike on Ubuntu, Nginx is not started automatically after being installed on these RPM-based distributions. Read on to learn how to start it.
Now that you are sure your server is installed, is it running?
There are plenty of ways of finding out if the service is running or not. One method that is fairly cross-platform is to use the netstat
command.
Running netstat
with the -plunt
flags will tell you all of the processes that are using ports on the server. To learn more about running netstat
, you can refer to How To Use Top, Netstat, Du, & Other Tools to Monitor Server Resources. You can then grep
the output of netstat
for the name of the process you are looking for:
- sudo netstat -plunt | grep nginx
Outputtcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 15686/nginx: master
tcp6 0 0 :::80 :::* LISTEN 15686/nginx: master
You can change nginx
to the name of the web server process on your server. If you see a line like the one above, it means your process is up and running. If you don’t get any output back, it means you queried for the wrong process or that your web server is not running.
If your web server isn’t running, you can start it using your Linux distribution’s init system. Most software that’s designed to run in the background will register itself with the init system after being installed, so that you can start and stop it programmatically. Most distributions also now use the same init system, systemd
, which provides the systemctl
command.
For instance, you could start the nginx
service by typing:
- sudo systemctl start nginx
If your web server starts, you can check with netstat
again to verify that everything is correct.
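You can also inspect a service’s state directly, and configure it to start automatically at boot, using other systemctl subcommands:
- sudo systemctl status nginx
- sudo systemctl enable nginx
The status subcommand prints whether the service is active along with its most recent log lines, and enable registers the service to start automatically when the server boots.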
If your web server was unable to start, this is usually an indication that your configuration files need some attention. Both Apache and Nginx require strict adherence to their syntax in order for their configuration to be parsed correctly.
The configuration files for system services are usually located within a subdirectory of the /etc/
directory named after the process itself.
For example, you could get to the main configuration directory of Apache on Ubuntu by typing:
- cd /etc/apache2
The Apache configuration directory on RHEL, Rocky, and Fedora also reflects the RHEL name for that process:
- cd /etc/httpd
The configuration will be spread out among many different files. When a service tries and fails to start, it will usually produce errors pointing to the configuration file and the line where the problem was first found. You can start investigating that file.
Each of these web servers also provides a way of validating the configuration syntax of your files.
If you are using Apache, you can use the apache2ctl
or apachectl
command to check your configuration files for syntax errors:
- apache2ctl configtest
OutputAH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message
Syntax OK
Syntax OK
essentially means that there are no major errors preventing the server from running, and every message printed prior to that is a minor error or a warning. In this case, the Could not reliably determine the server's fully qualified domain name
reflects an out-of-the-box Apache setup on a server that has not yet been configured with a domain name, but which should still be accessible by its IP address.
If you have an Nginx web server, you can run a similar test by typing:
- sudo nginx -t
Outputnginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
If you remove a semicolon at the end of a configuration line in /etc/nginx/nginx.conf
(a common error for Nginx configurations), you would get a message like this:
- sudo nginx -t
Outputnginx: [emerg] invalid number of arguments in "tcp_nopush" directive in /etc/nginx/nginx.conf:18
nginx: configuration file /etc/nginx/nginx.conf test failed
There is an invalid number of arguments because Nginx looks for a semicolon to end statements. If it doesn’t find one, it drops down to the next line and interprets that as further arguments for the last line.
You can run these tests in order to find syntax problems in your files. Fix the problems that it references until you can get the files to pass the test.
Generally, web servers run on port 80 for HTTP web traffic and use port 443 for HTTPS traffic encrypted with TLS/SSL. In order for users to reach your site, these ports must be accessible.
You can test whether your server has its port open by running netcat
from your local machine.
You’ll need to use your remote server’s IP address and tell it what port to check, like this:
- nc -z 111.111.111.111 80
This will check whether port 80 is open on the server at 111.111.111.111
. If it is open, the command will return right away. If it is not open, the command will continuously try to form a connection, unsuccessfully. You can stop this process by pressing CTRL+C
in the terminal window.
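If your version of netcat supports the -w flag (most implementations do), you can give the connection attempt a timeout in seconds rather than cancelling it by hand, and branch on the command’s exit status:
- nc -z -w 5 111.111.111.111 80 && echo open || echo closed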
If your web ports are not accessible, you should look at your firewall configuration. You may need to open up port 80 or port 443.
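The exact commands depend on which firewall you are running. As a sketch, with ufw on Ubuntu, or with firewalld on RHEL-family systems, opening HTTP traffic might look like this:
- sudo ufw allow 80
- sudo firewall-cmd --permanent --add-service=http
- sudo firewall-cmd --reload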
If you can reach your site by its IP address, but not through a domain name that you’ve set up, you may need to take a look at your DNS settings.
In order for visitors to reach your site through its domain name, you should have an “A” or “AAAA” record pointing to your server’s IP address in the DNS settings. You can query for your domain’s “A” record by using the host
command from your local machine:
- host -t A example.com
Outputexample.com has address 93.184.216.119
The line that is returned to you should match the IP address of your server. If you need to check an “AAAA” record (for IPv6 connections), you can type:
- host -t AAAA example.com
Outputexample.com has IPv6 address 2606:2800:220:6d:26bf:1447:1097:aa7
Keep in mind that any changes you make to your DNS records can take some time to propagate, depending on your domain name registrar. It can sometimes be helpful to use a site like https://www.whatsmydns.net/ to check when your DNS changes have come into effect globally (usually half an hour or so). You may receive inconsistent results to these queries after a change since your request will often hit different servers that are not all up-to-date yet.
If you are using DigitalOcean, you can learn how to configure DNS settings for your domain here.
If your DNS settings are correct, you may also want to check your Apache virtual host files or the Nginx server block files to make sure they are configured to respond to requests for your domain.
In Apache, your virtual host file might look like this:
<VirtualHost *:80>
ServerName example.com
ServerAlias www.example.com
ServerAdmin admin@example.com
DocumentRoot /var/www/html
. . .
This virtual host is configured to respond to any requests on port 80 for the domain example.com
.
A similar chunk in Nginx might look something like this:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.html index.htm;
server_name example.com www.example.com;
. . .
These two blocks are configured to respond to the same default types of requests.
Another consideration is whether your web server is pointed at the correct file location.
Each virtual host in Apache or server block in Nginx is configured to point to a specific port or local directory. If this is configured incorrectly, the server will throw an error message when you try to access the page.
In Apache, the document root is configured through the DocumentRoot
directive:
<VirtualHost *:80>
ServerName example.com
ServerAlias www.example.com
ServerAdmin admin@example.com
DocumentRoot /var/www/html
. . .
This line tells Apache that it should look for the files for this domain in the /var/www/html
directory. If your files are kept elsewhere, you’ll have to modify this line to point to the correct location.
In Nginx, the root
directive configures the same thing:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.html index.htm;
server_name example.com www.example.com;
. . .
In this configuration, Nginx looks for files for this domain in the /usr/share/nginx/html
directory.
If your document root is correct and your index pages are not being served correctly when you go to your site or a directory location on your site, you may have your indexes configured incorrectly.
Depending on the complexity of your web applications, many web servers will still default to serving index files. This is usually an index.html
file or an index.php
file depending on your configuration.
In Apache, you may find a line in your virtual host file that configures the index order that will be used for specific directories explicitly, like this:
<Directory /var/www/html>
DirectoryIndex index.html index.php
</Directory>
This means that when the directory is being served, Apache will look for a file called index.html
first, and try to serve index.php
as a backup if the first file is not found.
You can set the order that will be used to serve index files for the entire server by editing the mods-enabled/dir.conf
file, which will set the defaults for the server. If your server is not serving an index file, make sure you have an index file in your directory that matches one of the options in your file.
In Nginx, the directive that does this is called index
and it is used like this:
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.html index.htm;
server_name example.com www.example.com;
. . .
In order for the web server to correctly serve files, it must be able to read the files and have access to the directories where they are kept. This can be controlled through the file and directory permissions and ownership.
To read files, the directories containing the content must be readable and executable by the user account associated with the web server. On Ubuntu and Debian, Apache and Nginx run as the user www-data
which is a member of the www-data
group.
On RHEL, Rocky, and Fedora, Apache runs under a user called apache
which belongs to the apache
group. Nginx runs under a user called nginx
which is a part of the nginx
group.
With this in mind, you can look at the files and folders that you are hosting:
- ls -l /path/to/web/root
The directories should be readable and executable by the web user or group, and the files should be readable in order to read content. In order to upload, write, or modify content, the directories and the files within them must additionally be writable.
To modify the ownership of a file, you can use chown
:
- sudo chown user_owner:group_owner /path/to/file
This can also be done to a directory. You can change the ownership of a directory and all of the files under it by passing the -R
flag:
- sudo chown -R user_owner:group_owner /path/to/directory
To learn more about permissions, refer to An Introduction to Linux Permissions.
Your web server settings may also be configured to deny access from the files you are trying to serve.
In Apache, this would be configured in the virtual host file for that site, or through an .htaccess
file located in the directory itself.
Within these files, it is possible to restrict access in a few different ways. Directories can be restricted like this in Apache 2.4+:
<Directory /usr/share>
AllowOverride None
Require all denied
</Directory>
In Nginx, these restrictions will take the form of deny
directives and will be located in your server blocks or main config files:
location /usr/share {
deny all;
}
If your site relies on a database backend like MySQL, PostgreSQL, MongoDB, etc., you need to make sure that it is up and available.
You can do that in the same way that you checked that the web server was running. Again, you can search through running processes with netstat
and grep
:
- sudo netstat -plunt | grep mysql
Outputtcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 3356/mysqld
As you can see, the service is running on this machine. Make sure you know the name that your service runs under when searching for it.
An alternative is to search for the port that your service runs on. Look at the documentation for your database to find the default port that it runs on (MySQL defaults to 3306), or check your configuration files.
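For example, to look for a listener on MySQL’s default port rather than its process name, you could run:
- sudo netstat -plunt | grep 3306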
The next step to take if you are troubleshooting an issue with a database backend is to see if you can connect correctly. This usually means checking the files that your site reads to find out the database information.
For instance, for a WordPress site, the database connection settings are stored in a file called wp-config.php
. You need to check that the DB_NAME
, DB_USER
, and DB_PASSWORD
are correct in order for your site to connect to the database.
You can test whether the file has the correct information by trying to connect to the database manually on the command line. Most databases will support similar syntax to MySQL:
- mysql -u DB_USER_value -pDB_PASSWORD_value DB_NAME_value
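If the connection succeeds, you can go a step further and run a quick non-interactive query with the mysql client’s -e flag to confirm that the configured user can actually read the database:
- mysql -u DB_USER_value -pDB_PASSWORD_value DB_NAME_value -e 'SHOW TABLES;'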
If you cannot connect using the values you found in the file, you may need to modify the access permissions of your databases.
Checking the logs should actually be your first step, but it is also a good last step prior to asking for more help.
If you have reached the end of your ability to troubleshoot on your own and need some help, you will get more relevant help faster by providing log files and error messages. Experienced administrators will probably have a good idea of what is happening if you give them the pieces of information that they need.
Hopefully these troubleshooting tips will have helped you track down and fix some of the more common issues that administrators face when trying to get their sites up and running.
If you have any additional tips for things to check and ways of problem solving, please share them with other users in the comments.
]]>The redirection capabilities built into Linux provide you with a robust set of tools to optimize many workflows. The “Unix philosophy” of software development was to make tools that each do one thing well, and this philosophy has been carried forward to modern command-line tools, which are individually powerful, and exponentially more so when combined. Whether you’re writing complex software or just working on the command line, knowing how to manipulate the different I/O streams in your environment will greatly increase your productivity.
To follow along with this guide, you will need to have access to a Linux server. If you need information about connecting to your server for the first time, you can follow our guide on connecting to a Linux server using SSH.
Input and output in the Linux environment is distributed across three streams. These streams are:
standard input (stdin)
standard output (stdout)
standard error (stderr)
The streams are also numbered:
stdin (0)
stdout (1)
stderr (2)
During standard interactions between the user and the terminal, standard input comes from the user’s keyboard. Standard output and standard error are displayed on the user’s terminal as text. Collectively, the three streams are referred to as the standard streams.
The standard input stream typically carries data from a user to a program. Programs that expect standard input usually receive input from a device, such as a keyboard. Later in this tutorial, you will see examples of using one program’s output as standard input to another.
Standard output is the output that is generated by a program. When the standard output stream is not redirected, it will output text directly to the terminal. Try to output some arbitrary text, using echo
:
- echo Sent to the terminal
OutputSent to the terminal
When used without any additional options, the echo
command outputs any argument that is passed to it on the command line.
Run echo without any arguments:
- echo
It will return an empty line, because echo prints its arguments followed by a newline. Other programs may do nothing at all, or print usage information, when run without arguments.
Standard error contains errors generated by a program that has failed in some way. Like standard output, the default destination for this stream is the terminal display.
Let’s see a basic example of standard error using the ls command. ls lists a directory’s contents.
When run without an argument, ls lists the contents within the current directory. If ls is run with a directory as an argument, it will list the contents of the provided directory.
- ls %
Since % is not an existing directory, this will send the following text to standard error:
Outputls: cannot access %: No such file or directory
A program does not have to crash or finish running in order to generate standard error, and whether a given piece of output is sent to standard output or standard error is entirely down to the behavior of the program. The two streams are not technically different from one another in any way; by convention, one output stream is reserved for error messages, and some tools will assume that an empty standard error stream means a program ran successfully. Some programs will even output minor errors to standard error while still producing their intended output. The distinction is purely a convention to separate intended output from unintended output.
Linux includes redirection commands for each stream. These can be used to write standard output or standard error to a file. If you write to a file that does not exist, a new file with that name will be created prior to writing.
Commands with a single bracket overwrite the destination’s existing contents.
Overwrite
> - redirects standard output to a file
2> - redirects standard error to a file
< - reads a file as standard input (the file itself is not modified)
Commands with a double bracket append to the destination’s existing contents.
Append
>> - appends standard output to a file
2>> - appends standard error to a file
<< - reads standard input from the command line until a delimiter is reached (known as a “here document”)
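For example, you can use < to supply the contents of a file as standard input to a command that would otherwise read from the keyboard. Here, wc -l counts the lines it receives on standard input (the line count will depend on your system’s /etc/hosts file):
- wc -l < /etc/hosts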
Pipes are used to redirect a stream from one program to another. When a program’s standard output is sent to another through a pipe, the first program’s output will be used as input to the second, rather than being printed to the terminal. Only the data returned by the second program will be displayed.
The Linux pipe is represented by a vertical bar: |
Here is an example of a command using a pipe:
- ls | less
This takes the output of ls
, which displays the contents of your current directory, and pipes it to the less
program. less
displays the data sent to it one line at a time.
ls
normally displays directory contents across multiple rows. When you run it through less, each entry is placed on a new line.
Though the functionality of the pipe may appear to be similar to that of >
and >>
, the distinction is that pipes redirect data from one command to another, while > and >> are used to redirect exclusively to files.
Filters are a class of programs that are commonly used with output piped from another program. Many of them are also useful on their own, but they illustrate piping behavior especially well.
find - returns files with filenames that match the argument passed to find.
grep - returns text that matches the string pattern passed to grep.
tee - redirects standard input to both standard output and one or more files.
tr - finds-and-replaces one string with another.
wc - counts characters, lines, and words.
Now that you have been introduced to redirection, piping, and basic filters, let’s look at some common redirection patterns and examples.
The command > file
pattern redirects the standard output of a command to a file.
- ls ~ > root_dir_contents.txt
The command above passes the contents of your home directory (~
) as standard output, and writes the output to a file named root_dir_contents.txt
. It will delete any prior contents in the file, as it is a single-bracket command.
The command > /dev/null
pattern redirects standard output to nowhere. /dev/null
is a special file that is used to trash any data that is redirected to it. It is used to discard standard output that is not needed, and that might otherwise interfere with the functionality of a command or a script. Any output that is sent to /dev/null
is discarded.
- ls > /dev/null
This command discards the standard output stream returned from the command ls by passing it to /dev/null.
The command 2> file
pattern redirects the standard error stream of a command to a file, overwriting existing contents.
- mkdir '' 2> mkdir_log.txt
This redirects the error raised by the invalid directory name ''
, and writes it to mkdir_log.txt
. Because standard error was redirected to the file, the error message is not displayed in the terminal.
The command >> file
pattern redirects the standard output of a command to a file without overwriting the file’s existing contents.
- echo Written to a new file > data.txt
- echo Appended content to an existing file >> data.txt
This pair of commands first redirects the text inputted by the user through echo to a new file. It then appends the text received by the second echo command to the existing file, without overwriting its contents.
The command 2>> file
pattern redirects the standard error stream of a command to a file without overwriting the file’s existing contents. This pattern is useful for creating error logs for a program or service, as the log file will not have its previous content wiped each time the file is written to.
- find '' 2> stderr_log.txt
- wc '' 2>> stderr_log.txt
The above command redirects the error message caused by an invalid find argument to a file named stderr_log.txt. It then appends the error message caused by an invalid wc argument to the same file.
The command | command
pattern redirects the standard output from the first command to the standard input of the second command.
- find /var | grep deb
This command searches through /var and its subfolders for filenames and extensions that match the string deb
, and returns the file paths for the files, with the matching portion in each path highlighted in red.
The command | tee file
pattern (which includes the tee
command) redirects the standard output of the command to a file and overwrites its contents. Then, it displays the redirected output in the terminal. It creates a new file if the file does not already exist.
In the context of this pattern, tee
is typically used to view a program’s output while simultaneously saving it to a file.
- wc /etc/magic | tee magic_count.txt
This pipes the counts for characters, lines, and words in the /etc/magic
file (used by the file utility to determine file types) to the tee command, which then splits wc
’s output in two directions, and sends it to the terminal display and the magic_count.txt file
. For the tee command, imagine the letter T. The bottom part of the letter is the initial data, and the top part is the data being split in two different directions (to the file and to the terminal display).
Multiple pipes can be used to redirect output across multiple commands and/or filters.
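As an illustration of chaining, the following pipeline counts how many entries in the /etc directory have “conf” in their names, by piping the output of ls through grep and then through wc:
- ls /etc | grep conf | wc -l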
Learning how to use the redirection capabilities built into the Linux command line is a crucial skill. Now that you have seen the basics of how redirections and pipes work, you’ll be able to begin your journey into the world of shell scripting, which makes frequent use of the programs and patterns highlighted in this guide.
Searching for specific commands, or for something that you would like to do in the command line (e.g. “delete all files in a directory that begin with an uppercase letter”) can also prove helpful when you need to accomplish a specific task using the command line.
]]>Linux is, by definition, a multi-user OS that is based on the Unix concepts of file ownership and permissions to provide security at the file system level. To reliably administer a cloud server, it is essential that you have a decent understanding of how ownership and permissions work. There are many intricacies of dealing with file ownership and permissions, but this tutorial will provide a good introduction.
This tutorial will cover how to view and understand Linux ownership and permissions. If you are looking for a tutorial on how to modify permissions, you can read Linux Permissions Basics and How to Use Umask on a VPS.
Make sure you understand the concepts covered in the prior tutorials in this series:
To follow this tutorial, you will need access to a cloud server. You can follow this guide to creating a DigitalOcean droplet.
As mentioned in the introduction, Linux is a multi-user system. You should understand the fundamentals of Linux users and groups before ownership and permissions, because they are the entities that the ownership and permissions apply to. Let’s get started with what users are.
In Linux, there are two types of users: system users and regular users. Traditionally, system users are used to run non-interactive or background processes on a system, while regular users are used for logging in and running processes interactively. When you first initialize and log in to a Linux system, you may notice that it starts out with many system users already created to run the services that the OS depends on. This is normal.
You can view all of the users on a system by looking at the contents of the /etc/passwd
file. Each line in this file contains information about a single user, starting with its username (the name before the first :
). You can print the contents of the passwd
file with cat
:
- cat /etc/passwd
Output…
sshd:x:109:65534::/run/sshd:/usr/sbin/nologin
landscape:x:110:115::/var/lib/landscape:/usr/sbin/nologin
pollinate:x:111:1::/var/cache/pollinate:/bin/false
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
lxd:x:998:100::/var/snap/lxd/common/lxd:/bin/false
vault:x:997:997::/home/vault:/bin/bash
stunnel4:x:112:119::/var/run/stunnel4:/usr/sbin/nologin
sammy:x:1001:1002::/home/sammy:/bin/sh
In addition to the two user types, there is the superuser, or root user, that has the ability to override any file ownership and permission restrictions. In practice, this means that the superuser has the rights to access anything on its own server. This user is used to make system-wide changes.
It is also possible to configure other user accounts with the ability to assume “superuser rights”. This is often referred to as having sudo
, because users who have permissions to temporarily gain superuser rights do so by preceding admin-level commands with sudo
. In fact, creating a normal user that has sudo
privileges for system administration tasks is considered to be best practice. This way, you can be more conservative in your use of the root user account.
Groups are collections of zero or more users. A user belongs to a default group, and can also be a member of any of the other groups on a server.
You can view all the groups on the system and their members by looking in the /etc/group
file, as you would with /etc/passwd
for users. This article does not cover group management.
Now that you know what users and groups are, let’s talk about file ownership and permissions!
In Linux, every file is owned by a single user and a single group, and has its own access permissions. Let’s look at how to view the ownership and permissions of a file.
The most common way to view the permissions of a file is to use ls
with the long listing option -l
, e.g. ls -l myfile
. If you want to view the permissions of all of the files in your current directory, run the command without the myfile
argument, like this:
- ls -l
Note: If you are in an empty home directory, and you haven’t created any files to view yet, you can follow along by listing the contents of the /etc
directory by running this command: ls -l /etc
Here is an example of ls -l
output:
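This line is illustrative; the file name, size, and date are hypothetical stand-ins:
Output-rw-rw-r-- 1 sammy sammy 512 May 14 12:00 myfile.txt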
For each file, the mode (which contains the permissions), owner, group, and name are listed. To help explain what all of those letters and hyphens mean, let’s break down the mode column into its components.
To help explain what all the groupings and letters mean, here is a breakdown of the mode metadata of the example file above:
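Using the illustrative mode -rw-rw-r-- from the sample line, the pieces break down like this:
- : the file type field (a hyphen indicates a normal file)
rw- : the user (owner) permissions triad
rw- : the group permissions triad
r-- : the other permissions triad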
In Linux, there are two types of files: normal and special. The file type is indicated by the first character of the mode of a file — in this guide, this will be referred to as the “file type field”.
Normal files can be identified by a hyphen (-
) in their file type fields. Normal files can contain data or anything else. They are called normal, or regular, files to distinguish them from special files.
Special files can be identified by a non-hyphen character, such as a letter, in their file type fields, and are handled by the OS differently than normal files. The character that appears in the file type field indicates the kind of special file a particular file is. For example, a directory, which is the most common kind of special file, is identified by the d
character that appears in its file type field (as in the example above). There are several other kinds of special files.
From this breakdown, you can see that the mode column indicates the file type, followed by three triads, or classes, of permissions: user (owner), group, and other. The order of the classes is consistent across all Linux systems.
The three permissions classes work as follows: user permissions apply only to the user who owns the file, group permissions apply to members of the group that owns the file, and other permissions apply to every other user on the system.
The next thing to pay attention to are those sets of three characters. They denote the permissions, in symbolic form, that each class has for a given file.
In each triad, read, write, and execute permissions are represented in the following way:
Read: r in the first position
Write: w in the second position
Execute: x in the third position (in some special cases, there may be a different character here)
A hyphen (-
) in the place of one of these characters indicates that the respective permission is not available for the respective class. For example, if the group (second) triad for a file is r--
, the file is “read-only” to the group that is associated with the file.
Now that you know how to read the permissions of a file, you should know what each of the permissions actually allow users to do. This tutorial will cover each permission individually, but keep in mind that they are often used in combination with each other to allow for useful access to files and directories.
Here is a breakdown of the access that the three permission types grant to users:
For a normal file, read permission allows a user to view the contents of the file.
For a directory, read permission allows a user to view the names of the files in the directory.
For a normal file, write permission allows a user to modify and delete the file.
For a directory, write permission allows a user to delete the directory, modify its contents (create, delete, and rename files in it), and modify the contents of files that the user has write permissions to.
For a normal file, execute permission allows a user to execute (run) a file — the user must also have read permission. Execute permissions must be set for executable programs and shell scripts before a user can run them.
For a directory, execute permission allows a user to access, or traverse into (i.e. cd
into), the directory and to access metadata about the files it contains (the information that is listed in an ls -l
).
Now that you know how to read the mode of a file, and understand the meaning of each permission, you will see a few examples of common modes, with brief explanations, to bring the concepts together.
-rw------- : A file that is only accessible by its owner
-rwxr-xr-x : A file that is executable by every user on the system. A “world-executable” file
-rw-rw-rw- : A file that is open to modification by every user on the system. A “world-writable” file
drwxr-xr-x : A directory that every user on the system can read and access
drwxrwx--- : A directory that is modifiable (including its contents) by its owner and group
drwxr-x--- : A directory that is accessible by its group
The owner of a file usually enjoys the most permissions, when compared to the other two classes. Typically, you will see that the group and other classes only have a subset of the owner’s permissions (equivalent or less). This makes sense because files should only be accessible to users who need them for a particular reason.
Another thing to note is that even though many permission combinations are possible, only certain ones make sense in most situations. For example, write or execute access is almost always accompanied by read access, since it’s hard to modify, and impossible to execute, something you can’t read.
You should now have a good understanding of how ownership and permissions work in Linux. To learn how to modify these permissions using chown
, chgrp
, and chmod
, refer to Linux Permissions Basics and How to Use Umask on a VPS.
If you would like to learn more about Linux fundamentals, read the next tutorial in this series, An Introduction to Linux I/O Redirection.
]]>Navigating and manipulating files and folders in the filesystem is a key part of working with most computers. Cloud servers mostly use the same common Linux shells, and common Linux commands, for working with files and folders. This tutorial will introduce some fundamental skills for using these commands.
In order to follow along with this guide, you will need to have access to a Linux server. If you need information about connecting to your server for the first time, you can follow our guide on connecting to a Linux server using SSH.
You will also want to have an understanding of how the terminal works and what Linux commands look like. This guide covers an introduction to the terminal.
All of the material in this guide can be accomplished with a regular, non-root (non-administrative) user account. You can learn how to configure this type of user account by following your distribution’s initial server setup guide, such as for Ubuntu 22.04.
When you are ready to begin, connect to your Linux server using SSH and continue below.
The most fundamental skills you need to master are moving around the filesystem and getting an idea of what is around you. You will review the tools that allow you to do this in this section.
When you log into your server, you are typically dropped into your user account’s home directory. A home directory is a directory set aside for your user to store files and create directories. It is the location in the filesystem where you have full dominion.
To find out where your home directory is in relation to the rest of the filesystem, you can use the pwd
command. This command displays the directory that you are currently in:
- pwd
Output/home/sammy
The home directory is named after the user account. This directory is within a directory called /home
, which is itself within the top-level directory, which is usually called the “root” directory, and is represented by a single slash /
.
Now that you know how to display the directory that you are in, you can look at the contents of a directory.
Currently, your home directory does not have much to look at, so you can go to another, more populated directory to explore. Use cd
to move to this directory. Afterward, you’ll use pwd
to confirm that you successfully moved:
- cd /usr/share
- pwd
Output/usr/share
Now that you are in a new directory, let’s look at what’s inside. To do this, you can use the ls
command:
- ls
Outputadduser groff pam-configs
applications grub perl
apport grub-gfxpayload-lists perl5
apps hal pixmaps
apt i18n pkgconfig
aptitude icons polkit-1
apt-xapian-index info popularity-contest
. . .
As you can see, there are many items in this directory. You can add some optional flags to the command to modify the default behavior. For instance, to list all of the contents in an extended form, you can use the -l
flag (for “long” output):
- ls -l
Outputtotal 440
drwxr-xr-x 2 root root 4096 Apr 17 2022 adduser
drwxr-xr-x 2 root root 4096 Sep 24 19:11 applications
drwxr-xr-x 6 root root 4096 Oct 9 18:16 apport
drwxr-xr-x 3 root root 4096 Apr 17 2022 apps
drwxr-xr-x 2 root root 4096 Oct 9 18:15 apt
drwxr-xr-x 2 root root 4096 Apr 17 2022 aptitude
drwxr-xr-x 4 root root 4096 Apr 17 2022 apt-xapian-index
drwxr-xr-x 2 root root 4096 Apr 17 2022 awk
. . .
This view gives us plenty of information. The first block describes the file type (if the first column is a “d” the item is a directory, and if it is a “-”, it is a normal file) and permissions. Each subsequent column, in order, describes the number of hard links to that file elsewhere on the system, the owner, group owner, item size, last modification time, and the name of the item.
To get a listing of all files, including hidden files and directories, you can add the -a
flag. Since there are no real hidden files in the /usr/share
directory, let’s go back to your home directory and try that command. You can get back to the home directory by typing cd
with no arguments:
- cd
- ls -a
Output. .. .bash_logout .bashrc .profile
As you can see, there are three hidden files, along with .
and ..
, which are special indicators. You will find that often, configuration files are stored as hidden files, as is the case here.
For the dot and double dot entries, these aren’t exactly directories as much as built-in methods of referring to related directories. The single dot indicates the current directory, and the double dot indicates this directory’s parent directory. This will come in handy in the next section.
You have already made two directory moves in order to demonstrate some properties of ls
in the last section. Let’s take a better look at the command here.
Begin by going back to the /usr/share
directory:
- cd /usr/share
This is an example of changing a directory by providing an absolute path. In Linux, every file and directory is under the top-most directory, which is called the “root” directory, but referred to by a single leading slash “/”. An absolute path indicates the location of a directory in relation to this top-level directory. This lets us refer to directories in an unambiguous way from any place in the filesystem. Every absolute path must begin with that slash.
The alternative is to use relative paths. Relative paths refer to directories in relation to the current directory. For directories close to the current directory in the hierarchy, this is usually shorter, and it is sometimes beneficial to not need to make assumptions about where a directory is located in the broader filesystem. Any directory within the current directory can be referenced by name without a leading slash. You can change to the locale
directory within /usr/share
from your current location by typing:
- cd locale
You can also move multiple directory levels with relative paths by providing the portion of the path that comes after the current directory’s path. From here, you can get to the LC_MESSAGES
directory within the en
directory by typing:
- cd en/LC_MESSAGES
To go back up, traveling to the parent of the current directory, you can use the special double dot indicator. For instance, you are now in the /usr/share/locale/en/LC_MESSAGES
directory. To move up one level, you can type:
- cd ..
This takes us to the /usr/share/locale/en
directory.
You can always return to your home directory by running cd
without specifying a directory. You can also use ~
in place of your home directory in any other commands:
- cd ~
- pwd
Output/home/sammy
To learn more about how to use these three commands, you can check out our guide on exploring the Linux filesystem.
In the last section, you learned how to navigate the filesystem. You probably saw some files when using the ls
command in various directories. In contrast to some operating systems, Linux and other Unix-like operating systems rely on plain text files for vast portions of the system.
The main way that you will view files in this tutorial is with the less
command. This is what is called a “pager”, because it allows you to scroll through pages of a file. While the previous commands immediately executed and returned you to the command line, less
is an application that will continue to run and occupy the screen until you exit.
You will open the /etc/services
file, which is a configuration file that contains service information that the system knows about:
- less /etc/services
The file will be opened in less
, allowing you to see the portion of the document that fits in the area of the terminal window:
Output# Network services, Internet style
#
# Note that it is presently the policy of IANA to assign a single well-known
# port number for both TCP and UDP; hence, officially ports have two entries
# even if the protocol doesn't support UDP operations.
#
# Updated from http://www.iana.org/assignments/port-numbers and other
# sources like http://www.freebsd.org/cgi/cvsweb.cgi/src/etc/services .
# New ports will be added on request if they have been officially assigned
# by IANA and used in the real-world or are needed by a debian package.
# If you need a huge list of used numbers please install the nmap package.
tcpmux 1/tcp # TCP port service multiplexer
echo 7/tcp
. . .
To scroll, you can use the up and down arrow keys on your keyboard. To page down, you can use either the space bar, the “Page Down” button on your keyboard, or the CTRL-f
shortcut.
To scroll back up, you can use either the “Page Up” button, or the CTRL-b
keyboard shortcut.
To search for some text in the document, you can type a forward slash “/” followed by the search term. For instance, to search for “mail”, you would type:
/mail
This will search forward through the document and stop at the first result. To get to another result, you can type the lower-case n
key:
n
To move backwards to the previous result, use a capital N
instead:
N
To exit the less
program, you can type q
to quit:
q
There are many other ways of viewing a file that come in handy in certain circumstances. The cat
command outputs a file’s contents and returns you to the prompt immediately. The head
command, by default, shows the first 10 lines of a file. Likewise, the tail
command shows the last 10 lines. These commands display file contents in a way that is useful for “piping” to other programs. This concept is covered later in this tutorial series.
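For instance, the following commands print all of /etc/services, its first 10 lines, and its last 20 lines, respectively (both head and tail accept an -n flag to change how many lines are shown):
- cat /etc/services
- head /etc/services
- tail -n 20 /etc/services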
In this section, you’ll create and manipulate files and directories.
Many commands and programs can create files. The most straightforward method of creating a file is with the touch
command. This will create an empty file using the name and location specified.
First, make sure you are in your home directory, since this is a location where you have permission to save files. Then, you can create a file called file1
by typing:
- cd
- touch file1
Now, if you view the files in the directory, you can see your newly created file:
- ls
Outputfile1
If you use the touch
command on an existing file, it updates the “last modified” time associated with that file. This can be helpful to keep in mind.
You can also create multiple files at the same time. You can use absolute paths as well. For instance, you could type:
- touch /home/sammy/file2 /home/sammy/file3
- ls
Outputfile1 file2 file3
Similar to the touch
command, the mkdir
command allows you to create empty directories.
For instance, to create a directory within your home directory called test
, you could type:
- cd
- mkdir test
You can make a directory within the test
directory called example
by typing:
- mkdir test/example
For the above command to work, the test
directory must already exist. To tell mkdir
that it should create any directories necessary to construct a given directory path, you can use the -p
option. This allows you to create nested directories in one step. You can create a directory structure that looks like some/other/directories
by typing:
- mkdir -p some/other/directories
The command will make the some
directory first, then it will create the other
directory inside of that. Finally it will create the directories
directory within those two directories.
You can move a file to a new location using the mv
command. For instance, you can move file1
into the test
directory by typing:
- mv file1 test
You can move that file back to your home directory by using the special dot reference to refer to the current directory. Make sure you’re in your home directory, and then run the mv
command:
- cd
- mv test/file1 .
The mv
command is also used to rename files and directories. In essence, moving and renaming are both just adjusting the location and name for an existing item.
So to rename the test
directory to testing
, you could type:
- mv test testing
Note: The shell will not prevent you from accidentally taking destructive actions. If you are renaming a file and choose a name that already exists, the previous file will be overwritten by the file you are moving. There is no way to recover the previous file if you accidentally overwrite it.
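As a safeguard, both mv and the cp command covered next accept the -i option, which prompts for confirmation before overwriting an existing file. For example:
- mv -i file1 file2
If a file named file2 already exists, mv will ask before replacing it; otherwise the move proceeds as usual.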
With the mv
command, you could move or rename a file or directory, but you could not duplicate it. The cp
command can make a new copy of an existing item.
For instance, you can copy file3
to a new file called file4
:
- cp file3 file4
Unlike a mv
operation, after which file3
would no longer exist, you now have both file3
and file4
.
Note: As with the mv
command, it is possible to overwrite a file if you are not careful about the filename you are using as the target of the operation. For instance, if file4
already existed in the above example, its content would be completely replaced by the content of file3
.
In order to copy entire directories, you must include the -r
option to the command. This stands for “recursive”, as it copies the directory, plus all of the directory’s contents.
For instance, to copy the some
directory structure to a new structure called again
, you could type:
- cp -r some again
Unlike with files, where an existing destination leads to an overwrite, when the target is an existing directory the file or directory is copied into it:
- cp file1 again
This will create a new copy of file1
and place it inside of the again
directory.
To delete a file, you can use the rm
command.
Note: Be extremely careful when using any destructive command like rm
. There is no “undo” command in the shell, so it is possible to accidentally destroy important files permanently.
To remove a regular file, just pass it to the rm
command:
- cd
- rm file4
Likewise, to remove empty directories, you can use the rmdir
command. This will only succeed if there is nothing in the directory in question. For instance, to remove the example
directory within the testing
directory:
- rmdir testing/example
To remove a non-empty directory, you will use the rm
command with the -r
option, which removes all of the directory’s contents recursively, plus the directory itself.
For instance, to remove the again
directory and everything within it, you can type:
- rm -r again
At this point, you know how to manipulate files as objects, but you have not yet learned how to edit them or add content to them.
nano
is one of a few common command-line Linux text editors, and is a great starting point for beginners. It operates somewhat similarly to the less
program discussed above, in that it occupies the entire terminal for the duration of its use.
The nano
editor can open existing files, or create a file. If you decide to create a new file, you can give it a name when you call the nano
editor, or later on, when you save your content.
You can open the file1
file for editing by typing:
- cd
- nano file1
The nano
application will open the file (which is currently blank). The interface looks something like this:
GNU nano 4.8 file1
[ New File ]
^G Get Help ^O WriteOut ^R Read File ^Y Prev Page ^K Cut Text ^C Cur Pos
^X Exit ^J Justify ^W Where Is ^V Next Page ^U UnCut Text ^T To Spell
Along the top, you have the name of the application and the name of the file you are editing. In the middle, the content of the file, currently blank, is displayed. Along the bottom, you have a number of key combinations that indicate some controls for the editor. For each of these, the ^
character means the CTRL
key.
To get help from within the editor, press Ctrl+G
.
When you are finished browsing the help, type Ctrl+X
to get back to your document.
For this example, you can just type these two sentences:
Hello there.
Here is some text.
To save your work, press Ctrl+O
.
File Name to Write: file1
^G Get Help M-D DOS Format M-A Append M-B Backup File
^C Cancel M-M Mac Format M-P Prepend
As you can see, the options at the bottom have also changed. These are contextual, meaning they will change depending on what you are trying to do. To confirm writing to file1
, press Enter
.
After saving, if you make additional changes and try to exit the program, you will see a similar prompt. Add a new line, and then try to exit nano
by pressing Ctrl+X
.
If you have not saved, you will be asked to save the modifications you made:
Save modified buffer (ANSWERING "No" WILL DESTROY CHANGES) ?
Y Yes
N No ^C Cancel
You can press Y
to save your changes, N
to discard your changes and exit, or Ctrl+C
to cancel quitting. If you choose to save, you will be given the same file prompt that you received before, confirming that you want to save the changes to the same file. Press Enter
to save the file and exit the editor.
You can see the contents of the file you created using either the cat
program to display the contents, or the less
program to open the file for viewing. After viewing with less
, remember that you should press q
to get back to the terminal.
- less file1
Output
Hello there.
Here is some text.
Another line.
Another editor that you may see referenced in certain guides is vim
or vi
. This is a more advanced editor that is very powerful, but comes with a steep learning curve. If you are ever told to use vim
or vi
, feel free to use nano
instead. To learn how to use vim
, read our guide to getting started with vim.
By now, you should have an understanding of how to get around your Linux server and how to see the files and directories available. You should also know file manipulation commands that will allow you to view, copy, move, or delete files. Finally, you should be comfortable with some editing using the nano
text editor.
With these few skills, you should be able to continue on with other guides and learn how to get the most out of your server. In our next guide, you will understand how to view and understand Linux permissions.
This tutorial, which is the first in a series that teaches Linux fundamentals, covers getting started with the terminal, the Linux command line, and executing commands. If you are new to Linux, you will want to familiarize yourself with the terminal, as it is the standard way to interact with a Linux server.
If you would like to get the most out of this tutorial, you will need a Linux server to connect to and use. If you do not already have one, you can quickly spin one up by following this link: How To Create A DigitalOcean Droplet. This tutorial is written for an Ubuntu 22.04 server but the general principles apply to any other distribution of Linux.
Let’s get started by going over what a terminal emulator is.
A terminal emulator is a program that allows the use of the terminal in a graphical environment. As most people use an OS with a graphical user interface (GUI) for their day-to-day computer needs, the use of a terminal emulator is a necessity for most Linux server users.
Here are some free, commonly-used terminal emulators by operating system:
- macOS: Terminal (built in), iTerm2
- Windows: Windows Terminal, PuTTY
- Linux: GNOME Terminal, Konsole, xterm
Each terminal emulator has its own set of features. In general, you should expect a modern terminal emulator to support tabbed windows and text highlighting.
In a Linux system, the shell is a command-line interface that interprets a user’s commands and script files, and tells the server’s operating system what to do with them. There are several shells that are widely used, such as the Bourne-Again shell (bash
) and Z shell (zsh
). Each shell has its own feature set and intricacies regarding how commands are interpreted, but they all feature input and output redirection, variables, and condition-testing, among other things.
This tutorial was written using the Bourne-Again shell, usually referred to as bash
, which is the default shell for most Linux distributions, including Ubuntu, Fedora, and RHEL.
When you first log in to a server, you will typically be greeted by the Message of the Day (MOTD), an informational message that includes miscellaneous details such as the version of the Linux distribution that the server is running. After the MOTD, you will be dropped into the command prompt, or shell prompt, which is where you can issue commands to the server.
The information that is presented at the command prompt can be customized by the user, but here is an example of the default Ubuntu 22.04 command prompt:
sammy@webapp:~$
Here is a breakdown of the composition of the command prompt:
- sammy: The username of the current user
- webapp: The hostname of the server
- ~: The current directory. In bash, which is the default shell, the ~, or tilde, is a special character that expands to the path of the current user’s home directory; in this case, it represents /home/sammy
- $: The prompt symbol. This denotes the end of the command prompt, after which the user’s keyboard input will appear
Here is an example of what the command prompt might look like, if logged in as root and in the /var/log directory:
root@webapp:/var/log#
Note that the symbol that ends the command prompt is a #
, which is the standard prompt symbol for root
. In Linux, the root
user is the superuser account, which is a special user account that can perform system-wide administrative functions. It is an unrestricted user that has permission to perform any task on a server.
Commands can be issued at the command prompt by specifying the name of an executable file, which can be a binary program or a script. There are many standard Linux commands and utilities installed with the OS that allow you to navigate the file system, install and manage software packages, and configure the system and applications.
An instance of a running command is known as a process. When a command is executed in the foreground, which is the default way that commands are executed, the user must wait for the process to finish before being returned to the command prompt, at which point they can continue issuing more commands.
It is important to note that almost everything in Linux is case-sensitive, including file and directory names, commands, arguments, and options. If something is not working as expected, double-check the spelling and case of your commands!
Here are a few examples that will cover the fundamentals of executing commands.
Note: If you’re not already connected to a Linux server, now is a good time to log in. If you have a Linux server but are having trouble connecting, follow this link: How to Connect to Your Droplet with SSH.
To run a command without any arguments or options, type in the name of the command and press Enter
.
If you run a command like this, it will exhibit its default behavior, which varies from command to command. For example, if you run the cd
command without any arguments, you will be returned to your current user’s home directory. The ls
command will print a listing of the current directory’s files and directories. The ip
command without any arguments will print a message that shows you how to use the ip
command.
Try running the ls
command with no arguments to list the files and directories in your current directory (there may be none):
- ls
Many commands accept arguments, or parameters, which can affect the behavior of a command. For example, the most common way to use the cd
command is to pass it a single argument that specifies which directory to change to. For example, to change to the /usr/bin
directory, where many standard commands are installed, you would issue this command:
- cd /usr/bin
The cd
component is the command, and the first argument /usr/bin
follows the command. Note how your command prompt’s current path has been updated.
Try running the ls
command to see the files that are in your new current directory.
- ls
Output
…
grub-mkrescue sdiff zgrep
grub-mkstandalone sed zipdetails
grub-mount see zless
grub-ntldr-img select-editor zmore
grub-render-label semver znew
grub-script-check sensible-browser
Most commands accept options, also known as flags or switches, that modify the behavior of the command. Options follow a command, and are indicated by a single - character followed by one or more letters, each representing a distinct option. Some longer options start with --, followed by the full option name.
For an example of how options work, let’s look at the ls
command. Here are a couple of common options that come in handy when using ls
:
- -l: print a “long listing”, which includes extra details such as permissions, ownership, file sizes, and timestamps
- -a: list all of a directory’s files, including hidden ones (that start with .)
To use the -l flag with ls, use this command:
- ls -l
Note that the listing includes the same files as before, but with additional information about each file.
As mentioned earlier, options can often be grouped together. If you want to use the -l
and -a
option together, you could run ls -l -a
, or just combine them like in this command:
- ls -la
Note that the listing now includes the hidden . and .. directories because of the -a option.
Options and arguments can almost always be combined when running commands.
For example, you could check the contents of /home
, regardless of your current directory, by running this ls
command:
- ls -la /home
ls
is the command, -la
are the options, and /home
is the argument that indicates which file or directory to list. This should print a detailed listing of the /home
directory, which should contain the home directories of all of the normal users on the server.
Environment variables are named values that are used to change how commands and processes are executed. When you first log in to a server, several environment variables will be set according to a few configuration files by default.
To view all of the environment variables that are set for a particular terminal session, run the env
command:
- env
There will likely be a lot of output. Look for the PATH
entry:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
The PATH
environment variable is a colon-delimited list of directories where the shell will look for executable programs or scripts when a command is issued. For example, the env
command is located in /usr/bin
, and you are able to run it without specifying its full path because its path is in the PATH
environment variable.
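You can watch this lookup happen with the type builtin, which reports how the shell resolves a given command name. The output will be similar to the following:
- type env
Output
env is /usr/bin/env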
The value of an environment variable can be retrieved by prefixing the variable name with a $
. This will expand the referenced variable to its value.
For example, to print out the value of the PATH
variable, you may use the echo
command:
- echo $PATH
Or you could use the HOME
variable, which is set to your user’s home directory by default, to change to your home directory like this:
- cd $HOME
If you try to access an environment variable that has not been set, it will be expanded to nothing: an empty string.
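You can confirm this with echo, using a throwaway variable name that presumably has not been set. The brackets make the empty expansion visible:
- echo "[$SOME_UNSET_VARIABLE]"
Output
[]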
Now that you know how to view your environment variables, you should learn how to set them.
To set an environment variable, all you need to do is start with a variable name, followed immediately by an =
sign, followed immediately by its desired value:
VAR=value
Note that if you set an existing variable, the original value will be overwritten. If the variable did not exist in the first place, it will be created.
Bash includes a command called export
which exports a variable so it will be inherited by child processes. This allows you to use scripts that reference an exported environment variable from your current session.
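Here is a brief demonstration of the difference, using a throwaway variable name. A variable assigned without export is not visible inside a child process, such as a new bash instance:
- MYVAR=hello
- bash -c 'echo "MYVAR is: $MYVAR"'
- export MYVAR
- bash -c 'echo "MYVAR is: $MYVAR"'
The first bash -c command prints an empty value, while the second prints hello, because export copies the variable into the environment that child processes inherit.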
You can also reference existing variables when setting a variable. For example, if you installed an application to /opt/app/bin
, you could add that directory to the end of your PATH
environment variable with this command:
- export PATH=$PATH:/opt/app/bin
Now verify that /opt/app/bin
has been added to the end of your PATH
variable with echo
:
- echo $PATH
Keep in mind that setting environment variables in this way only sets them for your current session. This means if you log out or otherwise change to another session, the changes you made to the environment will not be preserved. There is a way to permanently change environment variables, but this will be covered in a later tutorial.
Now that you have begun to learn about the Linux terminal (and a few commands), you should have a good foundation for expanding your knowledge of Linux commands. Read the next tutorial in this series to learn how to navigate, view, and edit files and their permissions.
Ansible is a configuration management system used to set up and manage infrastructure and applications in varied environments. It allows users to deploy and update applications in approachable language, using SSH, without needing to install an agent on a remote system.
The Apache HTTP Server is an open-source web server popular for its flexibility, power, and widespread support. It is extensible through a dynamically loadable module system and can process a large number of interpreted languages without connecting out to separate software.
An application programming interface (API) is a set of routines, definitions, and protocols that allow developers to build application software. APIs abstract implementation and expose only necessary objects and actions to the developer. Within cloud computing, developers use APIs to manage servers and other resources through conventional HTTP requests.
Backups are copies or archives of data used for recovery after loss, deletion, or corruption. Developers can create backups in a number of ways, including manual implementation, cloud hosting services, or backup programs (such as Bacula).
Big data is a blanket term for the non-traditional strategies and technologies needed to organize, process, and gather insights from large datasets. Many users and organizations are turning to big data for certain types of workloads, and using it to supplement their existing analysis and business tools. Tools that exist in this space offer different options for interpolating data into a system, storing it, analyzing it, and working with it through visualizations.
A block storage service functions as a hard drive provided over the network. Developers can use block storage services to store files, combine multiple devices into a RAID array, or configure a database to write directly to the block storage device. Block storage offers a different set of capacities than object storage, which allows developers to store unstructured data using an HTTP API. Developers working on complex applications often take advantage of both options.
A Boolean is a data type which has one of only two possible values: true or false. Booleans represent the truth values that are associated with the logic branch of mathematics, which informs algorithms in computer science. In programming, Booleans are used to make comparisons and to control the flow of a program.
C is an imperative, high-level programming language known for its modularity, static typing, variety of data types and operators, recursion, and structured approach to tasks. Unlike many other early programming languages, C is machine independent and highly portable. For these reasons, developers have used it to build a variety of programs and systems, including the Linux kernel.
Caching refers to the process by which reusable responses are stored to make subsequent requests faster.
A CDN (short for Content Delivery Network) is a distributed network of proxy servers and their data centers. The purpose of a CDN is to distribute content to end-users through geographically nearby intermediary servers, thereby ensuring high performance and minimal latency.
Chef is a configuration management tool that automates infrastructure as code. It uses Ruby and groups configuration details into what it calls “recipes.”
Continuous integration focuses on integrating work from individual developers into a main repository multiple times a day to catch integration bugs early and accelerate collaborative development. Continuous delivery is concerned with reducing friction in the deployment or release process, automating the steps required to deploy a build so that code can be released safely at any time. Continuous deployment takes this one step further by automatically deploying each time a code change is made.
Cloud computing is a model for sharing computer resources via the internet in which users can run their own workloads using scalable, abstracted resources. Cloud computing services typically fall into one of three categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS).
Clustered computing is the practice of pooling the resources of multiple machines and managing their collective capabilities to complete tasks. Developers can use clusters to increase processing power and storage.
Configuration management refers to the processes by which administrators and operations teams control large numbers of servers. Automation is at the heart of most configuration management tools, which allow developers to quickly provision new servers, recover from critical events, manage version control, and replicate environments. Popular CM tools include Puppet, Ansible, Chef, and Salt.
A container is an isolated user-space instance that abstracts applications from both the underlying operating system and other applications. Containers take advantage of the host operating system by using its kernel and resources, which are abstracted into layers and shared between containers. In this way, containers differ from virtual machines: they run their own init processes, filesystems, and network stacks, making them quicker to start and more lightweight than virtual machines.
A content management system is an application used to support the creation and revision of web content. Popular CMS tools include WordPress, Joomla, and Drupal.
A control panel allows users to manage system settings and features in a single place. The nature and function of a control panel depends on its environment: in web hosting, for example, users can navigate the control panel offered by their web hosting provider for an external or global view of their servers and resources. Users can also install control panels on these servers to manage their internal aspects.
The central processing unit, more commonly known as a CPU, is a vital component of a computer system. Often referred to as the “brain” of a computer, the CPU receives instructions provided by a software program or connected hardware and performs the mathematical and logical operations required to produce the desired output.
Data analysis refers to activity, across a range of fields, that investigates the structure of data and uses it to identify patterns and possible solutions to problems. Within this domain, data science draws on methodologies from statistics, mathematics, and computer science to both analyze events using data and predict possible outcomes. One important trend within data science is machine learning, which uses algorithmic data inputs and statistical analysis to train computers to output values within a certain range. In this way, machine learning enables practices such as automated decision-making.
A Distributed Denial of Service (or DDoS) attack is a malicious attempt to deny traffic to a targeted server by flooding it with spurious requests. The attacker accomplishes this by infecting a fleet of servers and internet-connected devices (a botnet) with malware. This botnet is then instructed by the attacker to repeatedly send requests to the targeted server, overwhelming its available resources. This results in a denial of service to normal traffic.
Deployment refers to the process of readying something for use. Depending on what is being deployed (software system, hardware, etc.), this process can include installing scripts or commands for software execution, activating executable software elements, and updating older software systems, among other things.
Development can refer to a range of programming activities and routines involved in the creation and maintenance of programs. Within software development, these activities can include writing and revising code, prototyping, researching, testing, and modifying problematic components.
Django is a high-level Python framework for developing web applications rapidly. Its core principles are scalability, re-usability, and rapid development.
Short for Document Object Model, the DOM is a cross-platform and language-independent application programming interface. Frequently used by web browsers to parse and display website content, the DOM treats an HTML, XHTML, or XML document as a tree structure where each node is an object representing a part of the document which can be manipulated programmatically.
The Domain Name System is a decentralized naming system that translates memorable and accessible domain names to numerical IP addresses within underlying network protocols. Users can establish greater control over hosted domains by managing their DNS servers, opting for caching servers, forwarding servers, authoritative-only servers, or a combination of different types.
Docker is a popular containerization tool used to provide software applications with a filesystem that contains everything they need to run. Using Docker containers ensures that the software will behave the same way, regardless of where it is deployed, because its run-time environment is consistent.
Drupal is a popular content management system (CMS) used to run some of the largest blogs and websites across the internet. Due to the stability of the base, the adaptability of the platform, and its active community, Drupal remains a popular choice among users after more than a decade on the scene.
DRY, short for “don’t repeat yourself,” is a principle of software development that aims to reduce the repetition of patterns in favor of abstractions and to avoid redundancy.
Elasticsearch is an open-source full-text search and analytics engine used to store, search, and analyze data.
Encryption encodes information for safe transmission or storage. All encryption involves an algorithmic transformation of plaintext, and can be separated into two main categories: symmetrical and asymmetrical.
Fedora is an operating system based on the Linux kernel and GNU programs. It is maintained by the Fedora Project and sponsored by Red Hat. Fedora’s popularity stems from both its upstream relationship with Red Hat Enterprise Linux and its community of developers, who ensure that application versions stay current.
A firewall is a network-based service that blocks all unpermitted traffic, following a set of configurable rules.
Free software is any program released with a license approved by the Free Software Foundation which allows users to view, modify, and share the source code without risk of legal repercussions. Similarly to the open-source movement, the goal behind free software is to promote and support community-driven development methods and to curb the spread of proprietary software licenses.
Ghost is an open-source blogging platform for building blogs and websites. Its popularity stems from its speed, clarity of use, and engagement with well-known tools such as JavaScript, Ember, and Backbone.
Git is a widely-used version control system, originally developed by Linus Torvalds to track changes in the Linux kernel. In Git, every developer’s environment contains a copy of the repository with a full history of changes, allowing for nonlinear development workflows.
Go (or GoLang) is a modern programming language originally developed by Google that uses high-level syntax similar to scripting languages. It is popular for its minimal syntax and straightforward handling of concurrency, as well as for the ease it provides in building native binaries on foreign platforms.
Short for GNU GRand Unified Bootloader, GRUB is a second-stage boot loader that loads and transfers program execution to an operating system during the boot process. Originally developed as part of the GNU Project, it is widely used as the boot loader for most Linux distributions.
High availability describes the quality of a system or component that assures a high level of operational performance over a given period of time. Scenarios where high availability matters include decreasing downtime and eliminating single points of failure.
A hypervisor is computer software, hardware, or firmware that creates, runs, and monitors virtual machines on a host machine. The hypervisor provides a virtual operating platform and manages the execution of the guest operating systems, allowing multiple instances of different operating systems to share the same hardware resources.
Infrastructure as a Service is a category of cloud computing in which infrastructure is provided as a product over the Internet. Users provision processing, storage, networking, and other computing tools, which can include operating systems and applications. Though an IaaS user does not manage the underlying infrastructure, they do have control over operating systems, storage, deployed applications, and certain networking components (such as firewalls).
In cloud computing, an instance refers to provisioned computing services such as virtual machines or containers. A cloud instance extends and abstracts the hardware typically associated with the services it provides, offering reliability, flexibility, and scalability for development projects.
An integrated development environment (IDE) is a software application which provides a comprehensive set of resources (such as a text editor, debugger, analysis tools, a compiler, and/or an interpreter) to aid computer programmers with software development. The boundary between an IDE and other parts of the broader software development environment is fuzzy, and the features offered by IDEs can vary greatly between programs.
IPv6 is the most recent version of the Internet Protocol, which identifies computers on networks and routes traffic across the Internet. IPv6 addresses provide more address space than their IPv4 counterparts, and are part of an effort to sustain the growth and deployment of Internet-ready devices.
Java is a concurrent, class-based, object-oriented programming language designed to run with as few implementation dependencies as possible. Developers use Java because of its robust community of programmers, relative stability, and ubiquity.
JavaScript is a high-level, object-based, dynamic scripting language used to create interactive webpages and applications. Its flexibility, growing ubiquity in web and mobile applications, and front- and back-end capabilities make it a popular choice for developers.
Joomla is a widely-used, highly customizable, free and open-source web content management system written in PHP.
A kernel is a computer program that mediates access to system resources. As the core component of an operating system, it’s responsible for enabling multiple applications to share hardware resources by controlling access to CPU, memory, disk I/O, and networking.
Kubernetes is a cloud platform for automating the deployment, scaling, and management of containerized applications.
A LAMP Stack is a set of software that can be used to create dynamic websites and web applications. LAMP is an acronym for the software that comprises the stack: the Linux operating system, the Apache HTTP Server, the MySQL relational database management system, and the PHP programming language. Note that some components are interchangeable, and a LAMP Stack may include MariaDB instead of MySQL, or Perl or Python instead of PHP.
LEMP (also known as LNMP) is a software stack used for creating dynamic websites and web applications. It consists of the Linux operating system, the (E)Nginx web server, the MySQL relational database management system, and the PHP programming language. Like LAMP Stacks, some of these components are interchangeable with others.
Let’s Encrypt is a certificate authority that provides free TLS/SSL certificates. Let’s Encrypt focuses on simplicity and ease-of-use, with the ultimate goal of making encrypted connections to the World Wide Web ubiquitous.
Load balancing refers to the distribution of work among a pool of homogeneous backend servers in order to optimize the use of computing resources and prevent the overload of any single resource.
Logging refers to the recording of all the events that occur in a computer’s operating system. This information is usually stored for review in the system’s log file.
Logical Volume Management (LVM) is a storage device management technology that gives users the power to pool and abstract the physical layout of component storage devices for more flexible administration and greater control. LVM also offers advanced features like snapshotting, striping, and mirroring.
Machine learning is a subfield of artificial intelligence focused on understanding the structure of data. By training computers to use data inputs and statistical analysis to output values that fall within a specific range, machine learning research aims to build models from sample data in order to automate decision-making processes.
MariaDB is a free and open-source relational database management system. MariaDB was originally built as a fork of MySQL, and is thus highly compatible with its source code.
MEAN is a free and open-source software stack for creating dynamic websites and web applications. The software stack typically includes MongoDB, Express, Node.js, and AngularJS.
Messaging is the act of passing content or controls between users, computers, programs, and/or components of a single system. Message queuing refers to the management of messages between software users or components for a given process.
MongoDB is a free and open-source document-oriented database platform which uses JSON-like documents with schemas.
Monitoring is the process of gathering and evaluating performance data to assess a system’s behavior and attributes. This process can be broken down into three parts: gathering system data through usage metrics, analyzing these metrics, and using analysis of this data to establish alerts for particular behaviors.
MySQL is an open-source relational database management system. An incredibly flexible and powerful program, MySQL is used to store and retrieve data for a wide variety of popular applications.
Nginx (pronounced like ‘engine-x’) is an open-source web server capable of reverse proxying, load balancing, and more. Nginx is one of the most popular web servers in the world and is used to host some of the largest and most highly-trafficked sites on the internet.
Node.js is a fast, lightweight platform built on Chrome’s JavaScript runtime. It uses event-driven (as opposed to thread-based) programming to build scalable applications and network programs. By leveraging JavaScript on both the front-end and the back-end, development can be more consistent and web applications can be designed within the same development environment.
A NoSQL database is any non-relational database environment that allows for the fast organization and analysis of disparate and high-volume data types. By using an unstructured (or structured-on-the-go) approach, NoSQL databases aim to eliminate the limitations of strict relations and offer many different types of ways to keep and work with the data for specific use cases (e.g. full-text document storage).
Object storage is a data storage architecture that manages data as objects (unstructured blobs of data and metadata) using an HTTP API, instead of as blocks or a file hierarchy.
Open-source software is any program released with a license approved by the Open Source Initiative which allows users to view, modify, and share the source code without risk of legal repercussions. Similarly to the free software movement, the goal behind open-source software is to promote and support community-driven software development methods.
An operating system is system software that manages hardware and software resources while providing common services for computer programs. Aside from firmware, all computer programs require an operating system to function.
Platform as a Service is a category of cloud computing in which developers can provision deployment platforms to build applications. The underlying infrastructure of each platform is abstracted, meaning that users can expect pre-configured runtime environments and predictable scaling, storage, and security options. They also have access to languages, libraries, tools, and services for application development, as well as a certain degree of control over configuration settings; however, they do not have the ability to modify the underlying operating system or network settings.
Packets are the basic data units transmitted through a TCP/IP network. Originally conceived as a way to transmit data without a pre-established connection, packets make dynamic data transference possible. Data in a packet falls into two categories: control information (source and destination network addresses, sequence information, and error detection codes), and user data (the content of the message itself).
A partition is a share of a hard disk or other secondary storage device, allowing an operating system to manage data and information in each partition separately. This can be advantageous for data security, as it simplifies data backups and reduces the risk of losing data. Partitioning also provides a convenient means for storing multiple operating systems on the same drive.
Perl is a family of programming languages, popular for their extreme versatility and their use as a glue language between software components.
PHP is a scripting language designed primarily for web development, but it’s also become widely-used as a general-purpose programming language.
Public Key Infrastructure is the set of roles, policies, and procedures involved in creating and managing digital certificates and public-key encryption. There are multiple operators within the PKI umbrella: A Certificate Authority (CA) that stores, issues, and signs SSL certificates for domains; a Registration Authority (RA) that verifies the identities of hosts making requests for SSL certificates; a central directory that houses private key information for disaster recovery; and a certificate management system that oversees access to certificates.
PostgreSQL is a free and open-source object-relational database management system which emphasizes extensibility and standards compliance.
Python is a high-level, interpreted programming language which prioritizes the clarity and readability of code.
Redis is a scalable, in-memory key-value data store which excels at caching. A non-relational database, Redis is known for its flexibility, performance, and wide language support.
A Read-Eval-Print Loop, or REPL, is a basic computer environment in which user inputs are read, evaluated, and results are returned to the user. Examples include command line shells and various tools provided for specific programming languages.
A reverse proxy is a type of proxy server that handles and redistributes client requests to a server. In addition to balancing workloads between servers, reverse proxy servers can provide services not necessarily offered by application servers, such as caching, compression, and SSL encryption.
Root—also known as the root user, root account, or superuser—is a user account on a computer system with access to all commands and files on that system. Root privileges evolved out of early UNIX systems, in which multiple users shared a single mainframe computer.
Ruby is a dynamic, reflective, object-oriented, general-purpose programming language which supports multiple programming paradigms. Ruby was designed to be very programmer-friendly and boost productivity, and includes features like dynamic typing and automatic memory management.
Ruby on Rails (also known as RoR, or simply as Rails) is a server-side model-view-controller web application framework written in Ruby. Rails includes tools that make common development tasks easier, like scaffolding which can automatically construct some of the models and views needed for a basic website.
Software as a Service is a category of cloud computing in which software is provided as a product over the Internet. With an SaaS, users have access to software provided by third party vendors, though they are not in charge of the production, maintenance, or modification of that software.
Scaling is the process of adapting a server setup to accommodate for growth. Methods for scaling can be broadly categorized as either horizontal or vertical. Horizontal scaling is usually done by harnessing additional servers to fulfill the workload of a single web application, while vertical scaling typically involves adding resources (like CPUs or memory) to a single server as a means of improving efficiency.
Security involves the protection of a computer system from theft or damage of its hardware, software, or data. Typical security tools include firewalls, VPNs, SSH, and SSL certificates.
Security-Enhanced Linux is a set of kernel modifications and tools in user space that provide enhanced access control for Linux distributions. It is built into the Linux kernel and enabled by default on Fedora, CentOS, and RHEL distributions, among others.
A server is a computer program or device that provides a network or data service for other programs or devices, known as clients. Servers can offer a variety of functions, ranging from website and web application hosting, to providing shared disk access, printer connections, and database services. The word “server” can refer to either a physical machine or to the services being provided to clients.
Similar to Apache’s virtual hosts, server blocks are websites or web applications that are hosted on the same Nginx server, but are otherwise distinct.
SFTP, which stands for SSH File Transfer Protocol or Secure File Transfer Protocol, is a protocol packaged with SSH used to transfer files between computers via an internet connection. Unlike the earlier FTP, SFTP allows users to transfer files and traverse the filesystem on both the local and remote systems over a secure connection.
Sinatra is a free and open-source web application library and domain-specific language written in Ruby, designed for the speedy creation of web applications.
A shell is a user interface used to access services provided by a computer’s operating system. Shells are usually either command-line interfaces (CLIs) or graphical user interfaces (GUIs). The name comes from the fact that the interface represents the outermost layer (or shell) of an operating system.
The most common types of sockets on a Linux machine are IP sockets and Unix sockets. An IP socket is a communication interface on a network that allows for two-way communication between two nodes. Each node is identifiable by a socket address, which includes an IP address and a port number associated with that node. A Unix socket is a communication endpoint for processes within a single-host operating system. Processes use filesystem inodes to refer to Unix sockets within the system, allowing for the transmission of data.
A solid-state drive is a non-volatile computer storage device that uses electronic circuits to store and retrieve information. Most SSD devices use flash memory, which retains data even when power is lost or removed, but some use battery-powered RAM. SSDs are known for their low access times and latency when compared to hard disk drives.
Secure Shell is a network protocol used to cryptographically secure communication to a remote server. By building a secure channel for communication on top of an unsecure connection, SSH allows users to communicate with and administer commands to remote servers. Common functions associated with SSH include remote command-line login, command execution, and configuration of services.
A stack is a set of software components that together create a complete platform for running applications or programs. Stacks differ based on the needs of the developer and include the components necessary for the task at hand. A LAMP (Linux, Apache, MySQL, PHP) or LEMP (Linux, Nginx, MySQL, PHP) stack can serve dynamic web pages and applications, while an Elastic/ELK (Elasticsearch, Logstash, Kibana) stack can collect, store, and search log files.
Storage includes any hardware, software, or computer systems that allow for the retention of data and information. There are many different types of storage and architectures for managing stored data, such as file systems, block storage, and object storage.
Swap is a partition on a hard drive or a special file created in a regular file system that has been designated as a place where the operating system can temporarily store data that it can no longer hold in RAM. Swap space gives users the ability to increase the amount of information that their server can keep in its working memory.
Systemd
is an init system used in many Linux distributions to boot user space and manage system processes. In addition to managing the booting process, systemd
controls numerous system resources and logging functions.
Unix is a family of multitasking, multiuser operating systems which derive from the original AT&T Unix, developed in the early 1970s. Unix’s core principles of clarity, portability, and simultaneity have led to the development of the “Unix philosophy,” which has influenced many later operating systems (such as numerous BSD and Linux distributions, as well as MacOS).
Version control software (also known as VCS) includes any revision control system used by developers to maintain current and historical versions of source code, documentation, and web pages.
Virtual hosts are websites or web applications that run on the same Apache server but are otherwise completely separate. The concept of virtual hosts on an Apache server is analogous to that of server blocks on an Nginx server.
A virtual machine is an individual emulation of a computer system, typically achieved through the use of a hypervisor.
Volatile memory depends on power for the storage and maintenance of information, and is used to process data from open programs and applications.
A virtual private cloud is a configurable pool of resources, provisioned within a cloud hosting environment, that are isolated to and managed by a single individual or organization. Within a VPC, users can often create private subnets, configure routing tables, network gateways, and security settings, and connect securely to corporate datacenters and other VPCs.
A VPN, or virtual private network, is a means of establishing secure connections between remote computers. A VPN presents its connection as if it were a local private network, allowing for secure communications between servers.
WordPress is a free and open-source web content management system based on PHP and MySQL. The most popular CMS in the world, WordPress boasts an expansive library of plugins and a large, active community of developers.
The error logs on the load balancer (site.com-error.log) show this:
[error] 749104#749104: *1042482 no live upstreams while connecting to upstream, client: IP, server: site.com, request: "GET / HTTP/1.1", upstream: "http://proxy_pass/", host: "site.com"
This error started showing up more often, so I changed max fails to 3. This does not fix all the 502 errors I get in the access logs; it just masks the issue. The fact that I added a new server into the upstream could be the issue, but the requests still go through to PHP and the site stays up until the 502 errors come in. And then I get a “no live upstreams” error.
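For context, max_fails is set per server inside the upstream block; mine looks roughly like this (hypothetical IPs, trimmed down):
upstream proxy_pass {
    server 10.0.0.2 max_fails=3 fail_timeout=30s;
    server 10.0.0.3 max_fails=3 fail_timeout=30s;
}
As I understand it, nginx marks a backend unavailable for fail_timeout seconds once it reaches max_fails failures, so “no live upstreams” would mean every server in the pool was marked down at the same time.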
I would really appreciate any help here! thanks
Backups are very important for cloud servers. Whether you are running a single project with all of its data stored on a single server, or deploying directly from Git to VMs that are spun up and torn down while retaining a minimum set of logs, you should always plan for a failure scenario. This can mean many different things depending on what applications you are using, how important it is to have immediate failover, and what kind of problems you are anticipating.
In this guide, you’ll explore the different approaches for providing backups and data redundancy. Because different use cases demand different solutions, this article won’t be able to give you a one-size-fits-all answer, but you will learn what is important in different scenarios and what implementations are best suited for your operation.
In the first part of this guide, you’ll look at several backup solutions and review the relative merits of each so that you can choose the approach that fits your environment. In part two, you’ll explore redundancy options.
The definitions of the terms redundant and backup are often overlapping and, in many cases, confused. These are two distinct concepts that are related, but different. Some solutions provide both.
Redundancy in data means that there is immediate failover in the event of a system problem. A failover means that if one set of data (or one host) becomes unavailable, another perfect copy is immediately swapped into production to take its place. This results in almost no perceivable downtime, and the application or website can continue serving requests as if nothing happened. In the meantime, the system administrator (in this case, you) has the opportunity to fix the problem and return the system to a fully operational state.
However, a redundancy solution is usually not also a backup solution. Redundant storage does not necessarily provide protection against a failure that affects the entire machine or system. For instance, if you have a mirrored RAID configured (such as RAID 1), your data is redundant in that if one drive fails, the other will still be available. However, if the machine itself fails, all of your data could be lost.
With redundancy solutions such as MySQL Group Replication, every operation is typically performed on every copy of the data. This includes malicious or accidental operations. By definition, a backup solution should also allow you to restore from a previous point where the data is known to be good.
In general, you need to maintain functional backups for your important data. Depending on your situation, this could mean backing up application or user data, or an entire website or machine. The idea behind backups is that in the event of a system, machine, or data loss, you can restore, redeploy, or otherwise access your data. Restoring from a backup may require downtime, but it can mean the difference between starting from a day ago and starting from scratch. Anything that you cannot afford to lose should, by definition, be backed up.
In terms of methods, there are quite a few different levels of backups. These can be layered as necessary to account for different kinds of problems. For instance, you may back up a configuration file prior to modifying it so that you can revert to your old settings should a problem arise. This is ideal for small changes that you are actively monitoring. However, this setup would fail in the case of a disk failure or anything more complex. You should also have regular, automated backups to a remote location.
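For example, before changing a service’s configuration file, you might keep a dated copy next to it:
- sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak-$(date +%F)
If the new configuration causes problems, restoring the old behavior is a single copy back.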
Backups by themselves do not provide automatic failover. This means that your failures may not cost you any data (assuming your backups are 100% up-to-date), but they may cost you uptime. This is one reason why redundancy and backups are often used in combination with each other.
One of the most familiar forms of backing up is a file-level backup. This type of backup uses normal filesystem level copying tools to transfer files to another location or device.
In theory, you could back up a Linux machine, like your cloud server, with the cp
command. This copies files from one local location to another. On a local computer, you could mount a removable drive, and then copy files to it:
- mount /dev/sdc /mnt/my-backup
- cp -a /etc/* /mnt/my-backup
- umount /dev/sdc
This example mounts a removable disk, sdc
, as /mnt/my-backup
and then copies the /etc
directory to the disk. It then unmounts the drive, which can be stored somewhere else.
A better alternative to cp
is the rsync
command. Rsync is a powerful tool that provides a wide array of options for replicating files and directories across many different environments, with built-in checksum validation and other features. Rsync can perform the equivalent of the cp
operation above like so:
- mount /dev/sdc /mnt/my-backup
- rsync -azvP /etc/* /mnt/my-backup
- umount /dev/sdc
-azvP is a typical set of Rsync options. Here is a breakdown of what each of them does:
- a enables “Archive Mode” for this copy operation, which preserves file modification times, owners, and so on. It is also the equivalent of providing each of the -rlptgoD options individually (yes, really). Notably, the -r option tells Rsync to recurse into subdirectories to copy nested files and folders as well. This option is common to many other copy operations, such as cp and scp.
- z compresses data during the transfer itself, if possible. This is useful for any transfers over slow connections, especially when transferring data that compresses very effectively, like logs and other text.
- v enables verbose mode, so you can read more details of your transfer while it is in progress.
- P tells Rsync to retain partial copies of any files that do not transfer completely, so that transfers can be resumed later.
You can review other rsync options on its man page.
Of course, in a cloud environment, you would not normally be mounting and copying files to a mounted disk each time. Rsync can also perform remote backups over a network by providing SSH-style syntax. This will work on any host that you can SSH into, as long as Rsync is installed at both ends. Because Rsync is considered a core Linux tool, this is almost always a safe assumption, even if you are working locally on a Mac or Windows machine.
- rsync -azvP /etc/* username@remote_host:/backup/
This will back up the local machine’s /etc
directory to a directory on remote_host
located at /backup
. This will succeed if you have permission to write to this directory and there is available space.
You can also review more information about how to use Rsync to sync local and remote directories.
Although cp
and rsync
are useful and ubiquitous, they are not a complete solution on their own. To automate backups using Rsync, you would need to create your own automated procedures, backup schedule, log rotation, and so on. While this may be appropriate for some very small deployments which do not want to make use of external services, or very large deployments which have dedicated resources for maintaining very granular scripts for various purposes, many users may want to invest in a dedicated backup offering.
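As a minimal sketch of such a procedure, you could schedule a nightly Rsync run with cron, assuming key-based SSH access to remote_host and reusing the example paths above. Open your crontab for editing:
- crontab -e
Then add a line like this one, which runs the transfer every night at 2:00 AM and appends the output to a log file in your home directory:
0 2 * * * rsync -az /etc/ username@remote_host:/backup/etc/ >> $HOME/etc-backup.log 2>&1
Even so, this provides no retention policy, integrity verification, or alerting on failure, which is exactly the gap that dedicated backup tools fill.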
Bacula
Bacula is a complex, flexible solution that works on a client-server model. Bacula is designed with separate concepts of clients, backup locations, and directors (the component that orchestrates the actual backup). It also configures each backup task into a unit called a “job”.
This allows for extremely granular and flexible configuration. You can back up multiple clients to one storage device, one client to multiple storage devices, and modify the backup scheme by adding nodes or adjusting their details. It functions well over a networked environment and is expandable and modular, making it great for backing up a site or application spread across multiple servers.
Duplicity
Duplicity is another open source backup tool. It uses GPG encryption by default for transfers.
The obvious benefit of using GPG encryption for file backups is that the data is not stored in plain text. Only the owner of the GPG key can decrypt the data. This provides some level of security to offset the additional security measures required when your data is stored in multiple locations.
Another benefit that may not be apparent to those who do not use GPG regularly is that each transaction has to be verified to be completely accurate. GPG, like Rsync, enforces hash checking to ensure that there was no data loss during the transfer. This means that when restoring data from a backup, you will be significantly less likely to encounter file corruption.
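As a brief illustration of its shape (treat this as a sketch; the exact options and URL scheme depend on your Duplicity version and chosen backend):
- duplicity /etc sftp://username@remote_host/backup
This performs an encrypted, incremental backup of /etc to the remote path, prompting for a GPG passphrase to protect the archive.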
A slightly less common, but important alternative to file-level backups are block-level backups. This style of backup is also known as “imaging” because it can be used to duplicate and restore entire devices. Block-level backups allow you to copy on a deeper level than a file. While a file-based backup might copy file1, file2, and file3 to a backup location, a block-based backup system would copy the entire “block” that those files reside on. Another way of explaining the same concept is to say that block-level backups copy information bit after bit. They do not know about the files that may span those bits.
One advantage of block-level backups is that they are typically faster. While file-based backups usually initiate a new transfer for each separate file, a block-based backup transfers blocks, meaning that fewer non-sequential transfers need to be initiated to complete the copying.
The most common method of performing block-level backups is with the dd
utility. dd
can be used to create entire disk images, and is also frequently used when archiving removable media like CDs or DVDs. This means that you can back up a partition or disk to a single file or a raw device without any preliminary steps.
To use dd
, you need to specify an input location and an output location, like so:
- dd if=/path/of/original/device of=/path/to/place/backup
In this scenario, the if=
argument specifies the input device or location. The of=
argument specifies the output file or location. Be careful not to confuse these, or you could erase an entire disk by mistake.
For example, to back up a partition containing your documents, which is located at /dev/sda3
, you can create an image of that partition by providing an output path to an .img
file:
- dd if=/dev/sda3 of=~/documents.img
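Restoring from such an image reverses the two arguments. Be very careful with this direction: writing to /dev/sda3 will overwrite the entire partition, so double-check the device name (a placeholder here) and make sure the partition is unmounted first:
- dd if=~/documents.img of=/dev/sda3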
One of the primary motivations for backing up data is being able to restore a previous version of a file in the event of an unwanted change or deletion. While all of the backup mechanisms mentioned so far can deliver this, you can also implement a more granular solution.
For example, a manual way of accomplishing this would be to create a backup of a file prior to editing it in nano
:
- cp file1 file1.bak
- nano file1
You could even automate this process by creating timestamped hidden files every time you modify a file with your editor. For instance, you could place this in your ~/.bashrc
file, so that every time you execute nano
from your bash
(i.e. $
) shell, it automatically creates a backup stamped with year (%y
), month (%m
), day (%d
), and so on:
- nano() { cp "$1" ".${1}.$(date +%y-%m-%d_%H.%M.%S).bak"; /usr/bin/nano "$1"; }
This would work to the extent that you edit files manually with nano
, but is limited in scope, and could quickly fill up a disk. You can see how it could end up being worse than manually copying files you are going to edit.
An alternative that solves many of the problems inherent in this design is to use Git as a version control system. Although it was developed primarily to focus on versioning plain text, usually source code, line-by-line, you can use Git to track almost any kind of file. To learn more, you can review How to Use Git Effectively.
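As a minimal sketch of this idea, you could track a directory of files with an ordinary Git repository. The directory name is a placeholder, and tracking system directories like /etc in practice requires care around permissions and secrets:
- cd ~/my_configs
- git init
- git add .
- git commit -m "Initial snapshot"
After that, each subsequent git commit records a restorable version of every tracked file.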
Most hosting providers will also provide their own optional backup functionality. DigitalOcean’s backup function regularly performs automated backups for droplets that have enabled this service. You can turn this on during droplet creation by checking the “Backups” check box:
This will back up your entire cloud server image on a regular basis. This means that you can redeploy from the backup, or use it as a base for new droplets.
For one-off imaging of your system, you can also create snapshots. These work in a similar way to backups, but are not automated. Although it’s possible to take a snapshot of a running system in some contexts, it is not always recommended, depending on how you are writing to your filesystem:
You can learn more about DigitalOcean backups and snapshots from the Containers and Images documentation.
Finally, it is worth noting that there are some circumstances in which you will not necessarily be looking to implement backups on a per-server basis. For example, if your deployment follows the principles of GitOps, you may treat many of your individual cloud servers as disposable, and instead treat remote data sources like Git repositories as the effective source of truth for your data. Complex, modern deployments like this can be more scalable and less prone to failure in many cases. However, you will still want to implement a backup strategy for your data stores themselves, or for a centralized log server that each of these disposable servers may be sending information to. Consider which aspects of your deployment may not need to be backed up, and which do.
In this article, you explored various backup concepts and solutions. Next, you may want to review solutions to enable redundancy.
]]>Transferring files over an SSH connection, by using either SFTP or SCP, is a popular method of moving small amounts of data between servers. In some cases, however, it may be necessary to share entire directories, or entire filesystems, between two remote environments. While this can be accomplished by configuring an SMB or NFS mount, both of these require additional dependencies and can introduce security concerns or other overhead.
As an alternative, you can install SSHFS to mount a remote directory by using SSH alone. This has the significant advantage of requiring no additional configuration, and inheriting permissions from the SSH user on the remote system. SSHFS is particularly useful when you need to read from a large set of files interactively on an individual basis.
SSHFS is available for most Linux distributions. On Ubuntu, you can install it using apt
.
First, use apt update
to refresh your package sources:
- sudo apt update
Then, use apt install
to install the sshfs
package.
- sudo apt install sshfs
Note: SSHFS can be installed on Mac or Windows through the use of filesystem libraries called FUSE, which provide interoperability with Linux environments. They will use identical concepts and connection details to this tutorial, but may require you to use different configuration interfaces or install third-party libraries. This tutorial will cover SSHFS on Linux only, but you should be able to adapt these steps to Mac or Windows FUSE implementations.
You can install SSHFS for Windows from the project’s GitHub Repository.
You can install SSHFS for Mac from the macFUSE Project.
Whenever you are mounting a remote filesystem in a Linux environment, you first need an empty directory to mount it in. Most Linux environments include a directory called /mnt
that you can create subdirectories within for this purpose.
Note: On Windows, remote filesystems are sometimes mounted with their own drive letter like G:
, and on Mac, they are usually mounted in the /Volumes
directory.
Create a subdirectory within /mnt
called droplet
using the mkdir
command:
- sudo mkdir /mnt/droplet
You can now mount a remote directory using sshfs
.
- sudo sshfs -o allow_other,default_permissions sammy@your_other_server:~/ /mnt/droplet
The options to this command behave as follows:
-o
precedes miscellaneous mount options (this is the same as when running the mount
command normally for non-SSH disk mounts). In this case, you are using allow_other
to allow other users to have access to this mount (so that it behaves like a normal disk mount, as sshfs
prevents this by default), and default_permissions
(so that it otherwise uses regular filesystem permissions).
sammy@your_other_server:~/
provides the full path to the remote directory, including the remote username, sammy
, the remote server, your_other_server
, and the path, in this case ~/
for the remote user’s home directory. This uses the same syntax as SSH or SCP.
/mnt/droplet
is the path to the local directory being used as a mount point.
If you receive a Connection reset by peer
message, make sure that you have copied your SSH key to the remote system. sshfs
uses an ordinary SSH connection in the background, and if it is your first time connecting to the remote system over SSH, you may be prompted to accept the remote host’s key fingerprint.
message, make sure that you have copied your SSH key to the remote system. sshfs
uses an ordinary SSH connection in the background, and if it is your first time connecting to the remote system over SSH, you may be prompted to accept the remote host’s key fingerprint.
OutputThe authenticity of host '164.90.133.64 (164.90.133.64)' can't be established.
ED25519 key fingerprint is SHA256:05SYulMxeTDWFZtf3/ruDDm/3mmHkiTfAr+67FBC0+Q.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
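If you only need to read files from the remote system, you can also mount it read-only by adding the standard ro mount option. This is a sketch using the same placeholder host and mount point as above:
- sudo sshfs -o ro,allow_other,default_permissions sammy@your_other_server:~/ /mnt/droplet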
Note: If you need to mount a remote directory using SSHFS without requiring sudo
permissions, you can create a user group called fuse
on your local machine, by using sudo groupadd fuse
, and then adding your local user to that group, by using sudo usermod -a -G fuse sammy
.
You can use ls
to list the files in the mounted directory to see if they match the contents of the remote directory:
- ls /mnt/droplet
Outputremote_file1 remote_file2
Now you can work with files on your remote server as if it were a physical device attached to your local machine. For instance, if you create a file in the /mnt/droplet
directory, the file will appear on your virtual server. Likewise, you can copy files into or out of the /mnt/droplet
folder and they will be uploaded to or from your remote server in the background.
It is important to note that the mount
command only mounts a remote disk for your current session. If the virtual server or local machine is powered off or restarted, you will need to use the same process to mount it again.
If you no longer need this mount, you can unmount it with the umount
command:
- sudo umount /mnt/droplet
In the last step, you’ll walk through an example of configuring a permanent mount.
As with other types of disk and network mounts, you can configure a permanent mount using SSHFS. To do this, you’ll need to add a configuration entry to a file named /etc/fstab
, which handles Linux filesystem mounts at startup.
Using nano
or your favorite text editor, open /etc/fstab
:
- sudo nano /etc/fstab
At the end of the file, add an entry like this:
…
sammy@your_other_server:~/ /mnt/droplet fuse.sshfs noauto,x-systemd.automount,_netdev,reconnect,identityfile=/home/sammy/.ssh/id_rsa,allow_other,default_permissions 0 0
Permanent mounts often require a number of different options like this to ensure they behave as expected. They work as follows:
sammy@your_other_server:~/
is the remote path again, just as before.
/mnt/droplet
is the local path again.
fuse.sshfs
specifies the driver being used to mount this remote directory.
noauto,x-systemd.automount,_netdev,reconnect
are a set of options that work together to ensure that permanent mounts to network drives behave gracefully in case the network connection drops from the local machine or the remote machine.
identityfile=/home/sammy/.ssh/id_rsa
specifies a path to a local SSH key so that the remote directory can be mounted automatically. Note that this example assumes that both your local and your remote username are sammy
– this refers to the local path. It is necessary to specify this because /etc/fstab
effectively runs as root, and would not otherwise know which username’s SSH configurations to check for a key that is trusted by the remote server.
allow_other,default_permissions
use the same permissions from the mount
command above.
0 0
signifies that the remote filesystem should never be dumped or validated by the local machine in case of errors. These options may be different when mounting a local disk.
Save and close the file. If you are using nano
, press Ctrl+X
, then when prompted, Y
and then ENTER
. You can then test the /etc/fstab
configuration by restarting your local machine, for example by using sudo reboot now
, and verifying that the mount is recreated automatically.
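If you would rather not reboot right away, you can also ask mount to read the new entry directly, since mount looks up /etc/fstab when given only a mount point. This assumes the entry above is the only change you made:
- sudo mount /mnt/droplet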
It should be noted that permanent SSHFS mounts are not necessarily popular. The nature of SSH connections and SSHFS means that it is usually better suited to temporary, one-off solutions, when you don’t need to commit to an SMB or NFS mount which can be configured with greater redundancy and other options. That said, SSHFS is very flexible, and more importantly, acts as a full-fledged filesystem driver, which allows you to configure it in /etc/fstab
like any other disk mount and use it as much as needed. Be careful that you do not accidentally expose more of the remote filesystem over SSH than you intend.
In this tutorial, you configured an SSHFS mount from one Linux environment to another. Although it is not the most scalable or performant solution for a production deployment, SSHFS can be very useful with minimal configuration.
Next, you may want to learn about working with object storage which can be mounted concurrently across multiple servers.
]]>The Ubuntu operating system’s latest Long Term Support (LTS) release, Ubuntu 22.04 (Jammy Jellyfish), was released on April 21, 2022. This guide will explain how to upgrade an Ubuntu system of version 20.04 or later to Ubuntu 22.04.
Warning: As with almost any upgrade between major releases of an operating system, this process carries an inherent risk of failure, data loss, or broken software configuration. Comprehensive backups and extensive testing are strongly advised.
To avoid these problems, we recommend migrating to a fresh Ubuntu 22.04 server rather than upgrading in-place. You may still need to review differences in software configuration when upgrading, but the core system will likely have greater stability. You can follow our series on how to migrate to a new Linux server to learn how to migrate between servers.
This guide assumes that you have an Ubuntu 20.04 or later system configured with a sudo-enabled non-root user.
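You can confirm which release you are currently running with lsb_release, which prints the distributor ID, release number, and codename:
- lsb_release -a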
Although many systems can be upgraded in place without incident, it is often safer and more predictable to migrate to a major new release by installing the distribution from scratch, configuring services with careful testing along the way, and migrating application or user data as a separate step.
You should never upgrade a production system without first testing all of your deployed software and services against the upgrade in a staging environment. Keep in mind that libraries, languages, and system services may have changed substantially. Before upgrading, consider reading the Jammy Jellyfish Release Notes.
Before attempting a major upgrade on any system, you should make sure you won’t lose data if the upgrade goes awry. The best way to accomplish this is to make a backup of your entire filesystem. Failing that, ensure that you have copies of user home directories, any custom configuration files, and data stored by services such as relational databases.
On a DigitalOcean Droplet, one approach is to power down the system and take a snapshot (powering down ensures that the filesystem will be more consistent). See How to Create Snapshots of Droplets for more details on the snapshot process. After you have verified that the Ubuntu update was successful, you can delete the snapshot so that you will no longer be charged for its storage.
For backup methods which will work on most Ubuntu systems, see How To Choose an Effective Backup Strategy for your VPS.
Before beginning the release upgrade, it’s safest to update to the latest versions of all packages for the current release. Begin by updating the package list:
- sudo apt update
Next, upgrade installed packages to their latest available versions:
- sudo apt upgrade
You will be shown a list of upgrades, and prompted to continue. Press y to confirm and press Enter.
This process may take some time. Once it finishes, use the dist-upgrade
command with apt-get
, which will perform any additional upgrades that involve changing dependencies, adding or removing new packages as necessary. This will handle a set of upgrades which may have been held back by the previous apt upgrade
step:
- sudo apt dist-upgrade
Again, answer y when prompted to continue, and wait for upgrades to finish.
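Optionally, you can also clear out packages that are no longer required before starting the release upgrade, which keeps the upgrade calculation a little smaller:
- sudo apt autoremove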
Now that you have an up-to-date installation of Ubuntu, you can use do-release-upgrade
to upgrade to the 22.04 release.
Traditionally, Ubuntu releases have been upgradeable by changing Apt’s /etc/apt/sources.list
– which specifies package repositories – and using apt-get dist-upgrade
to perform the upgrade itself. Though this process is still likely to work, Ubuntu provides a tool called do-release-upgrade
to make the upgrade safer and easier.
do-release-upgrade
handles checking for a new release, updating sources.list
, and a range of other tasks, and is the officially recommended upgrade path for server upgrades which must be performed over a remote connection.
Start by running do-release-upgrade
with no options:
- sudo do-release-upgrade
If the new Ubuntu version has not been officially released yet, you may get the following output:
OutputChecking for a new Ubuntu release
No new release found
Note that on Ubuntu Server, the new LTS release isn’t made available to do-release-upgrade
until its first point release, in this case 22.04.1
. This usually comes a few months after the initial release date.
If you don’t see an available release, add the -d
option to upgrade to the development release:
- sudo do-release-upgrade -d
If you’re connected to your system over SSH, you’ll be asked whether you wish to continue. For virtual machines or managed servers you should keep in mind that losing SSH connectivity is a risk, particularly if you don’t have another means of remotely connecting to the system’s console (such as a web-based console feature, for example).
For other systems under your control, remember that it’s safest to perform major operating system upgrades only when you have direct physical access to the machine.
At the prompt, type y and press Enter to continue:
OutputReading cache
Checking package manager
Continue running under SSH?
This session appears to be running under ssh. It is not recommended
to perform a upgrade over ssh currently because in case of failure it
is harder to recover.
If you continue, an additional ssh daemon will be started at port
'1022'.
Do you want to continue?
Continue [yN]
Next, you’ll be informed that do-release-upgrade
is starting a new instance of sshd
on port 1022:
OutputStarting additional sshd
To make recovery in case of failure easier, an additional sshd will
be started on port '1022'. If anything goes wrong with the running
ssh you can still connect to the additional one.
If you run a firewall, you may need to temporarily open this port. As
this is potentially dangerous it's not done automatically. You can
open the port with e.g.:
'iptables -I INPUT -p tcp --dport 1022 -j ACCEPT'
To continue please press [ENTER]
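If you manage your firewall with UFW rather than iptables directly, the equivalent command to open the fallback port mentioned in the prompt would be the following, run from a separate session before continuing:
- sudo ufw allow 1022/tcp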
Press Enter
. Next, you may be warned that a mirror entry was not found. On DigitalOcean systems, it is safe to ignore this warning and proceed with the upgrade, since a local mirror for 22.04 is in fact available. Enter y:
OutputUpdating repository information
No valid mirror found
While scanning your repository information no mirror entry for the
upgrade was found. This can happen if you run an internal mirror or
if the mirror information is out of date.
Do you want to rewrite your 'sources.list' file anyway? If you choose
'Yes' here it will update all 'focal' to 'jammy' entries.
If you select 'No' the upgrade will cancel.
Continue [yN]
Once the new package lists have been downloaded and changes calculated, you’ll be asked if you want to start the upgrade. Again, enter y
to continue:
OutputDo you want to start the upgrade?
4 packages are going to be removed. 107 new packages are going to be
installed. 554 packages are going to be upgraded.
You have to download a total of 547 M. This download will take about
1 minute with a 40Mbit connection and about 14 minutes with a 5Mbit
connection.
Fetching and installing the upgrade can take several hours. Once the
download has finished, the process cannot be canceled.
Continue [yN] Details [d]
You may receive another warning about not being able to disable a lock screen:
OutputUnable to disable lock screen
It is highly recommended that the lock screen be disabled during the
upgrade to prevent later issues. Please ensure your screen lock is
disabled before continuing.
If you are connecting to an Ubuntu server, rather than a desktop, you can ignore this warning by pressing Enter
.
New packages will now be retrieved, unpacked, and installed. Even if your system is on a fast connection, this will take a while.
During the installation, you may be presented with interactive dialogs for various questions. For example, you may be asked if you want to automatically restart services when required:
In this case, it is safe to answer Yes. In other cases, you may be asked if you wish to replace a configuration file that you have modified. This is often a judgment call, and is likely to require knowledge about specific software that is outside the scope of this tutorial.
Once new packages have finished installing, you’ll be asked whether you’re ready to remove obsolete packages. On a stock system with no custom configuration, it should be safe to enter y here. On a system you have modified heavily, you may wish to enter d and inspect the list of packages to be removed, in case it includes anything you’ll need to reinstall later.
OutputRemove obsolete packages?
53 packages are going to be removed.
Continue [yN] Details [d]
Finally, assuming all has gone well, you’ll be informed that the upgrade is complete and a restart is required. Enter y to continue:
OutputSystem upgrade is complete.
Restart required
To finish the upgrade, a restart is required.
If you select 'y' the system will be restarted.
Continue [yN]
On an SSH session, you’ll likely see something like the following:
OutputConnection to 203.0.113.241 closed by remote host.
Connection to 203.0.113.241 closed.
You may need to press a key here to exit to your local prompt, since your SSH session will have terminated on the server end.
Wait a moment for your server to reboot, then reconnect. On login, you should be greeted by a message confirming that you’re now on Jammy Jellyfish:
OutputWelcome to Ubuntu 22.04 LTS (GNU/Linux 5.15.0-25-generic x86_64)
You should now have a working Ubuntu 22.04 installation. From here, you likely need to investigate necessary configuration changes to services and deployed applications.
You can find more 22.04 tutorials and questions on our Ubuntu 22.04 Tutorials tag page.
]]>SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with an Ubuntu server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
In this guide, we’ll focus on setting up SSH keys for an Ubuntu 22.04 installation. SSH keys provide a secure way of logging into your server and are recommended for all users.
The first step is to create a key pair on the client machine (usually your computer):
- ssh-keygen
By default, recent versions of ssh-keygen
will create a 3072-bit RSA key pair, which is secure enough for most use cases (you may optionally pass in the -b 4096
flag to create a larger 4096-bit key).
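If you prefer a more modern key type, recent versions of ssh-keygen can also generate an Ed25519 key pair. Note that this creates ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub rather than the id_rsa file names used in the rest of this guide:
- ssh-keygen -t ed25519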
After entering the command, you should see the following output:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):
Press enter to save the key pair into the .ssh/
subdirectory in your home directory, or specify an alternate path.
If you had previously generated an SSH key pair, you may see the following prompt:
Output/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes, as this is a destructive process that cannot be reversed.
You should then see the following prompt:
OutputEnter passphrase (empty for no passphrase):
Here you optionally may enter a secure passphrase, which is highly recommended. A passphrase adds an additional layer of security to prevent unauthorized users from logging in. To learn more about security, consult our tutorial on How To Configure SSH Key-Based Authentication on a Linux Server.
You should then see the output similar to the following:
OutputYour identification has been saved in /your_home/.ssh/id_rsa
Your public key has been saved in /your_home/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:/hk7MJ5n5aiqdfTVUZr+2Qt+qCiS7BIm5Iv0dxrc3ks user@host
The key's randomart image is:
+---[RSA 3072]----+
| .|
| + |
| + |
| . o . |
|o S . o |
| + o. .oo. .. .o|
|o = oooooEo+ ...o|
|.. o *o+=.*+o....|
| =+=ooB=o.... |
+----[SHA256]-----+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH-key-based authentication to log in.
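If you protected your key with a passphrase, you can avoid retyping it for every connection by loading the key into ssh-agent once per session. This sketch assumes the default key path:
- eval "$(ssh-agent -s)"
- ssh-add ~/.ssh/id_rsa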
The quickest way to copy your public key to the Ubuntu host is to use a utility called ssh-copy-id
. Due to its simplicity, this method is highly recommended if available. If you do not have ssh-copy-id
available to you on your client machine, you may use one of the two alternate methods provided in this section (copying via password-based SSH, or manually copying the key).
ssh-copy-id
The ssh-copy-id
tool is included by default in many operating systems, so you may have it available on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you specify the remote host that you would like to connect to, and the user account that you have password-based SSH access to. This is the account to which your public SSH key will be copied.
The syntax is:
- ssh-copy-id username@remote_host
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER
to continue.
Next, the utility will scan your local account for the id_rsa.pub
key that we created earlier. When it finds the key, it will prompt you for the password of the remote user’s account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Type in the password (your typing will not be displayed, for security purposes) and press ENTER
. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub
key into a file in the remote account’s home ~/.ssh
directory called authorized_keys
.
You should see the following output:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub
key has been uploaded to the remote account. You can continue on to Step 3.
If you do not have ssh-copy-id
available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.
We can do this by using the cat
command to read the contents of the public SSH key on our local computer and piping that through an SSH connection to the remote server.
On the other side, we can make sure that the ~/.ssh
directory exists and has the correct permissions under the account we’re using.
We can then output the content we piped over into a file called authorized_keys
within this directory. We’ll use the >>
redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying previously added keys.
The full command looks like this:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type yes
and press ENTER
to continue.
Afterwards, you should be prompted to enter the remote user account password:
Outputusername@203.0.113.1's password:
After entering your password, the content of your id_rsa.pub
key will be copied to the end of the authorized_keys
file of the remote user’s account. Continue on to Step 3 if this was successful.
If you do not have password-based SSH access to your server available, you will have to complete the above process manually.
We will manually append the content of your id_rsa.pub
file to the ~/.ssh/authorized_keys
file on your remote machine.
To display the content of your id_rsa.pub
key, type this into your local computer:
- cat ~/.ssh/id_rsa.pub
You will see the key’s content, which should look something like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test
Access your remote host using whichever method you have available.
Once you have access to your account on the remote server, you should make sure the ~/.ssh
directory exists. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys
file within this directory. You can add the contents of your id_rsa.pub
file to the end of the authorized_keys
file, creating it if necessary, using this command:
- echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute the public_key_string
with the output from the cat ~/.ssh/id_rsa.pub
command that you executed on your local system. It should start with ssh-rsa AAAA...
.
Finally, we’ll ensure that the ~/.ssh
directory and authorized_keys
file have the appropriate permissions set:
- chmod -R go= ~/.ssh
This recursively removes all “group” and “other” permissions for the ~/.ssh/
directory.
If you’re using the root account to set up keys for a user account, it’s also important that the ~/.ssh
directory belongs to the user and not to root:
- chown -R sammy:sammy ~/.ssh
In this tutorial our user is named sammy but you should substitute the appropriate username into the above command.
We can now attempt passwordless authentication with our Ubuntu server.
If you have successfully completed one of the procedures above, you should be able to log into the remote host without providing the remote account’s password.
The basic process is the same:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Type “yes” and then press ENTER
to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created the key, you will be prompted to enter it now (note that your keystrokes will not display in the terminal session for security). After authenticating, a new shell session should open for you with the configured account on the Ubuntu server.
If key-based authentication was successful, continue on to learn how to further secure your system by disabling password authentication.
If you were able to log into your account using SSH without a password, you have successfully configured SSH-key-based authentication to your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH-key-based authentication configured for the root account on this server, or preferably, that you have SSH-key-based authentication configured for a non-root account on this server with sudo
privileges. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is crucial.
Note: If you provided an SSH key when creating your DigitalOcean droplet, password authentication may have been automatically disabled. You can still verify this by reading on.
Once you’ve confirmed that your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo
privileges. Then, open up the SSH daemon’s configuration file:
- sudo nano /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication
. This line may be commented out with a #
at the beginning of the line. Uncomment the line by removing the #
, and set the value to no
. This will disable your ability to log in via SSH using account passwords:
. . .
PasswordAuthentication no
. . .
Save and close the file when you are finished by pressing CTRL+X
, then Y
to confirm saving the file, and finally ENTER
to exit nano. To actually activate these changes, we need to restart the sshd
service:
- sudo systemctl restart ssh
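If you want to guard against configuration typos before relying on the change, you can also validate the file with sshd’s test mode, which exits silently when the configuration parses cleanly:
- sudo sshd -t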
As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing your current session:
- ssh username@remote_host
Once you have verified your SSH service is functioning properly, you can safely close all current server sessions.
The SSH daemon on your Ubuntu server now only responds to SSH-key-based authentication. Password-based logins have been disabled.
You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.
If you’d like to learn more about working with SSH, take a look at our SSH Essentials Guide.
]]>One way to guard against out-of-memory errors in applications is to add some swap space to your server. In this guide, we will cover how to add a swap file to an Ubuntu 22.04 server.
Swap is a portion of hard drive storage that has been set aside for the operating system to temporarily store data that it can no longer hold in RAM. This lets you increase the amount of information that your server can keep in its working memory, with some caveats. The swap space on the hard drive will be used mainly when there is no longer sufficient space in RAM to hold in-use application data.
The information written to disk will be significantly slower than information kept in RAM, but the operating system will prefer to keep running application data in memory and use swap for the older data. Overall, having swap space as a fallback for when your system’s RAM is depleted can be a good safety net against out-of-memory exceptions on systems with non-SSD storage available.
Before we begin, we can check if the system already has some swap space available. It is possible to have multiple swap files or swap partitions, but generally one should be enough.
We can see if the system has any configured swap by typing:
- sudo swapon --show
If you don’t get back any output, this means your system does not have swap space available currently.
You can verify that there is no active swap using the free
utility:
- free -h
Output total used free shared buff/cache available
Mem: 981Mi 122Mi 647Mi 0.0Ki 211Mi 714Mi
Swap: 0B 0B 0B
As you can see in the Swap row of the output, no swap is active on the system.
Before we create our swap file, we’ll check our current disk usage to make sure we have enough space. Do this by entering:
- df -h
OutputFilesystem Size Used Avail Use% Mounted on
udev 474M 0 474M 0% /dev
tmpfs 99M 932K 98M 1% /run
/dev/vda1 25G 1.4G 23G 7% /
tmpfs 491M 0 491M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 491M 0 491M 0% /sys/fs/cgroup
/dev/vda15 105M 3.9M 101M 4% /boot/efi
/dev/loop0 55M 55M 0 100% /snap/core18/1705
/dev/loop1 69M 69M 0 100% /snap/lxd/14804
/dev/loop2 28M 28M 0 100% /snap/snapd/7264
tmpfs 99M 0 99M 0% /run/user/1000
The device with /
in the Mounted on
column is our disk in this case. We have plenty of space available in this example (only 1.4G used). Your usage will probably be different.
Although there are many opinions about the appropriate size of a swap space, it really depends on your personal preferences and your application requirements. Generally, an amount equal to or double the amount of RAM on your system is a good starting point. Another good rule of thumb is that anything over 4G of swap is probably unnecessary if you are just using it as a RAM fallback.
Now that we know our available hard drive space, we can create a swap file on our filesystem. We will allocate a file of the size that we want called swapfile
in our root (/
) directory.
The best way of creating a swap file is with the fallocate
program. This command instantly creates a file of the specified size.
Since the server in our example has 1G of RAM, we will create a 1G file in this guide. Adjust this to meet the needs of your own server:
- sudo fallocate -l 1G /swapfile
We can verify that the correct amount of space was reserved by typing:
- ls -lh /swapfile
Output-rw-r--r-- 1 root root 1.0G Apr 25 11:14 /swapfile
Our file has been created with the correct amount of space set aside.
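Note: If fallocate is not available, or your filesystem does not support it, you can create the file with dd instead. This is slower because it writes out the full 1G of zeroes:
- sudo dd if=/dev/zero of=/swapfile bs=1M count=1024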
Now that we have a file of the correct size available, we need to actually turn this into swap space.
First, we need to lock down the permissions of the file so that only users with root privileges can read the contents. This prevents normal users from being able to access the file, which would have significant security implications.
Make the file only accessible to root by typing:
- sudo chmod 600 /swapfile
Verify the permissions change by typing:
- ls -lh /swapfile
Output-rw------- 1 root root 1.0G Apr 25 11:14 /swapfile
As you can see, only the root user has the read and write flags enabled.
We can now mark the file as swap space by typing:
- sudo mkswap /swapfile
OutputSetting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=6e965805-2ab9-450f-aed6-577e74089dbf
After marking the file, we can enable the swap file, allowing our system to start using it:
- sudo swapon /swapfile
Verify that the swap is available by typing:
- sudo swapon --show
OutputNAME TYPE SIZE USED PRIO
/swapfile file 1024M 0B -2
We can check the output of the free
utility again to corroborate our findings:
- free -h
Output total used free shared buff/cache available
Mem: 981Mi 123Mi 644Mi 0.0Ki 213Mi 714Mi
Swap: 1.0Gi 0B 1.0Gi
Our swap has been set up successfully and our operating system will begin to use it as necessary.
Our recent changes have enabled the swap file for the current session. However, if we reboot, the server will not retain the swap settings automatically. We can change this by adding the swap file to our /etc/fstab
file.
Back up the /etc/fstab
file in case anything goes wrong:
- sudo cp /etc/fstab /etc/fstab.bak
Add the swap file information to the end of your /etc/fstab
file by typing:
- echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
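You can verify that the entry was appended correctly by printing the last line of the file:
- tail -1 /etc/fstab
Output/swapfile none swap sw 0 0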
Next we’ll review some settings we can update to tune our swap space.
There are a few options that you can configure that will have an impact on your system’s performance when dealing with swap.
The swappiness
parameter configures how often your system swaps data out of RAM to the swap space. This is a value between 0 and 100 that represents a percentage.
With values close to zero, the kernel will not swap data to the disk unless absolutely necessary. Remember, interactions with the swap file are “expensive” in that they take a lot longer than interactions with RAM and they can cause a significant reduction in performance. Telling the system not to rely on the swap much will generally make your system faster.
Values that are closer to 100 will try to put more data into swap in an effort to keep more RAM space free. Depending on your applications’ memory profile or what you are using your server for, this might be better in some cases.
We can see the current swappiness value by typing:
- cat /proc/sys/vm/swappiness
Output60
For a desktop, a swappiness setting of 60 is not a bad value. For a server, you might want to move it closer to 0.
We can set the swappiness to a different value by using the sysctl
command.
For instance, to set the swappiness to 10, we could type:
- sudo sysctl vm.swappiness=10
Outputvm.swappiness = 10
This setting will persist until the next reboot. We can set this value automatically at restart by adding the line to our /etc/sysctl.conf
file:
- sudo nano /etc/sysctl.conf
At the bottom, you can add:
vm.swappiness=10
Save and close the file when you are finished.
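To apply the settings from this file immediately, without waiting for a reboot, you can reload it with sysctl:
- sudo sysctl -p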
Another related value that you might want to modify is the vfs_cache_pressure
. This setting configures how much the system will choose to cache inode and dentry information over other data.
Basically, this is access data about the filesystem. This is generally very costly to look up and very frequently requested, so it’s an excellent thing for your system to cache. You can see the current value by querying the proc
filesystem again:
- cat /proc/sys/vm/vfs_cache_pressure
Output100
As it is currently configured, our system removes inode information from the cache too quickly. We can set this to a more conservative setting like 50 by typing:
- sudo sysctl vm.vfs_cache_pressure=50
Outputvm.vfs_cache_pressure = 50
Again, this is only valid for our current session. We can change that by adding it to our configuration file like we did with our swappiness setting:
- sudo nano /etc/sysctl.conf
At the bottom, add the line that specifies your new value:
vm.vfs_cache_pressure=50
Save and close the file when you are finished.
Following the steps in this guide will give you some breathing room in cases that would otherwise lead to out-of-memory exceptions. Swap space can be incredibly useful in avoiding some of these common problems.
If you are running into OOM (out of memory) errors, or if you find that your system is unable to use the applications you need, the best solution is to optimize your application configurations or upgrade your server.
]]>SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with an Ubuntu server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
In this guide, we’ll focus on setting up SSH keys for an Ubuntu 18.04 installation. SSH keys provide a secure way of logging into your server and are recommended for all users.
The first step is to create a key pair on the client machine (usually your local computer):
- ssh-keygen
By default ssh-keygen
will create a 2048-bit RSA key pair, which is secure enough for most use cases (you may optionally pass in the -b 4096
flag to create a larger 4096-bit key).
After entering the command, you should receive the following output:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):
Press ENTER
to save the key pair into the .ssh/
subdirectory in your home directory, or specify an alternate path.
If you’ve previously generated an SSH key pair, you may receive the following prompt:
Output/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes, as this is a destructive process that cannot be reversed.
The next prompt will ask you to enter a secure passphrase:
OutputEnter passphrase (empty for no passphrase):
Here you have the option to enter a secure passphrase, which is highly recommended. A passphrase adds a layer of security to prevent unauthorized users from logging in. To learn more about security, consult our tutorial on How To Configure SSH Key-Based Authentication on a Linux Server.
All together, the ssh-keygen
command will return output like the following:
OutputYour identification has been saved in /your_home/.ssh/id_rsa.
Your public key has been saved in /your_home/.ssh/id_rsa.pub.
The key fingerprint is:
a9:49:2e:2a:5e:33:3e:a9:de:4e:77:11:58:b6:90:26 username@remote_host
The key's randomart image is:
+--[ RSA 2048]----+
| ..o |
| E o= . |
| o. o |
| .. |
| ..S |
| o o. |
| =o.+. |
|. =++.. |
|o=++. |
+-----------------+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH-key-based authentication to log in.
The quickest way to copy your public key to the Ubuntu host is to use a utility called ssh-copy-id
. Due to its simplicity, this method is highly recommended if available. If you do not have ssh-copy-id
available to you on your client machine, you may use one of the two alternate methods provided in this section (copying via password-based SSH, or manually copying the key).
ssh-copy-id
The ssh-copy-id
tool is included by default in many operating systems, so you may have it available on your local system.
Note: For this method to work, you must already have password-based SSH access to your server.
To use the utility, you specify the remote host that you would like to connect to, and the user account that you have password-based SSH access to. This is the account to which your public SSH key will be copied.
The syntax is the following:
- ssh-copy-id username@remote_host
You may receive the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Write “yes” and press ENTER
to continue.
Next, the utility will scan your local account for the id_rsa.pub
key that you created earlier. When it finds the key, it will prompt you for the password of the remote user’s account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Write in the password (nothing will be displayed for security purposes) and press ENTER
. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub
key into a file in the remote account’s home ~/.ssh
directory called authorized_keys
.
You should receive the following output:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub
key has been uploaded to the remote account. You can continue on to Step 3.
If you do not have ssh-copy-id
available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method. Remember, this will only work if you have password-based SSH access to your server.
You can do this by using the cat
command to read the contents of the public SSH key on your local computer and piping that through an SSH connection to the remote server.
On the other side, you can make sure that the ~/.ssh
directory exists and has the correct permissions under the account you’re using.
You can then output the content you piped over into a file called authorized_keys
within this directory. Use the >>
redirect symbol to append the content instead of overwriting it. This will let you add keys without destroying previously added keys.
The full command displays as the following:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may receive the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Write “yes” and press ENTER
to continue.
After, you should be prompted to enter the remote user account’s password:
Outputusername@203.0.113.1's password:
After entering your password, the contents of your id_rsa.pub
key will be copied to the end of the authorized_keys
file of the remote user’s account. Continue on to Step 3 if this was successful.
If you do not have password-based SSH access to your server available, you will have to complete the process manually.
This section outlines how to manually append the content of your id_rsa.pub
file to the ~/.ssh/authorized_keys
file on your remote machine.
To display the contents of your id_rsa.pub
run the following command on your local computer:
- cat ~/.ssh/id_rsa.pub
This will return the key’s content in the command’s output:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test
Access your remote host using whichever method you have available.
Once you have access to your account on the remote server, you should make sure the ~/.ssh
directory exists. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now you can create or modify the authorized_keys
file within this directory. You can add the contents of your id_rsa.pub
file to the end of the authorized_keys
file, creating it if necessary. For this command, substitute the public_key_string
with the output from the cat ~/.ssh/id_rsa.pub
command that you executed on your local system. It should start with ssh-rsa AAAA...
:
- echo public_key_string >> ~/.ssh/authorized_keys
Finally, ensure that the ~/.ssh
directory and authorized_keys
file have the appropriate permissions set:
- chmod -R go= ~/.ssh
This recursively removes all “group” and “other” permissions for the ~/.ssh/
directory.
If you’re using the root account to set up keys for a user account, it’s also important that the ~/.ssh
directory belongs to the user and not to root. In this tutorial our user is named sammy but you should substitute the appropriate username into the following command:
- chown -R sammy:sammy ~/.ssh
Now you can attempt passwordless authentication with your Ubuntu server.
If you’ve successfully completed one of the procedures in Step 2, you should be able to log into the remote host without the remote account’s password.
The process is the same:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the manual method), you may receive something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Write “yes” and then press ENTER
to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created the key, you will be prompted to enter it (note that your keystrokes will not display in the terminal session for security). After authenticating, a new shell session should open for you with the configured account on the Ubuntu server.
If key-based authentication was successful, continue on to learn how to further secure your system by disabling password authentication.
If you were able to log into your account using SSH without a password, you have successfully configured SSH-key-based authentication to your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH-key-based authentication configured for the root account on this server, or preferably, that you have SSH-key-based authentication configured for a non-root account on this server with sudo
privileges. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is crucial.
Once you’ve confirmed that your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo
privileges. Then, open up the SSH daemon’s configuration file:
- sudo nano /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication
. This may be commented out with a #
at the beginning of the line. Uncomment the line by removing the #
, and set the value to no
. This will disable your ability to log in via SSH using account passwords:
...
PasswordAuthentication no
...
Save and close the file when you’re finished by pressing CTRL + X
, then Y
and ENTER
to exit nano
. To activate these changes, you need to restart the sshd
service:
- sudo systemctl restart ssh
As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing the current session:
- ssh username@remote_host
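To confirm that password logins are actually refused, you can force a single test connection to attempt only password authentication. If the change took effect, this should fail with a Permission denied error instead of prompting for a password:
- ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password username@remote_host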
Once you’ve verified that your SSH service is functioning properly, you can safely close all current server sessions.
The SSH daemon on your Ubuntu server now only responds to SSH-key-based authentication and password-based authentication has been disabled.
You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.
If you’d like to learn more about working with SSH, take a look at our SSH Essentials Guide.
]]>SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a CentOS server, chances are, you will spend most of your time in a terminal session connected to your server through SSH.
In this guide, you’ll focus on setting up SSH keys for a CentOS 7 installation. SSH keys provide a straightforward, secure way of logging into your server and are recommended for all users.
The first step is to create a key pair on the client machine (usually your computer):
- ssh-keygen
By default, ssh-keygen
will create a 2048-bit RSA key pair, which is secure enough for most use cases (you may optionally pass in the -b 4096
flag to create a larger 4096-bit key).
After entering the command, you should see the following prompt:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):
Press ENTER
to save the key pair into the .ssh/
subdirectory in your home directory, or specify an alternate path.
If you had previously generated an SSH key pair, you may see the following prompt:
Output/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes
, as this is a destructive process that cannot be reversed.
You should then see the following prompt:
OutputEnter passphrase (empty for no passphrase):
Here you optionally may enter a secure passphrase, which is highly recommended. A passphrase adds an additional layer of security to prevent unauthorized users from logging in. To learn more about security, consult our tutorial on How To Configure SSH Key-Based Authentication on a Linux Server.
You should then see the following output:
OutputYour identification has been saved in /your_home/.ssh/id_rsa.
Your public key has been saved in /your_home/.ssh/id_rsa.pub.
The key fingerprint is:
a9:49:2e:2a:5e:33:3e:a9:de:4e:77:11:58:b6:90:26 username@remote_host
The key's randomart image is:
+--[ RSA 2048]----+
| ..o |
| E o= . |
| o. o |
| .. |
| ..S |
| o o. |
| =o.+. |
|. =++.. |
|o=++. |
+-----------------+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH-key-based authentication to log in.
The quickest way to copy your public key to the CentOS host is to use a utility called ssh-copy-id
. Due to its simplicity, this method is highly recommended if available. If you do not have ssh-copy-id
available to you on your client machine, you may use one of the two alternate methods provided in this section (copying via password-based SSH, or manually copying the key).
ssh-copy-id
The ssh-copy-id
tool is included by default in many operating systems, so you may have it available on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you need only specify the remote host that you would like to connect to and the user account that you have password SSH access to. This is the account to which your public SSH key will be copied.
The syntax is:
- ssh-copy-id username@remote_host
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type yes
and press ENTER
to continue.
Next, the utility will scan your local account for the id_rsa.pub
key that you created earlier. When it finds the key, it will prompt you for the password of the remote user’s account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Type in the password (your typing will not be displayed for security purposes) and press ENTER
. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub
key into a file in the remote account’s home ~/.ssh
directory called authorized_keys
.
You should see the following output:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub
key has been uploaded to the remote account. You can continue on to Step 3.
If you do not have ssh-copy-id
available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.
You can do this by using the cat
command to read the contents of the public SSH key on your local computer and piping that through an SSH connection to the remote server.
On the other side, you can make sure that the ~/.ssh
directory exists and has the correct permissions under the account you’re using.
You can then output the content you piped over into a file called authorized_keys
within this directory. You’ll use the >>
redirect symbol to append the content instead of overwriting it. This will let you add keys without destroying previously added keys.
The full command looks like this:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type yes
and press ENTER
to continue.
Afterwards, you should be prompted to enter the remote user account password:
Outputusername@203.0.113.1's password:
After entering your password, the content of your id_rsa.pub
key will be copied to the end of the authorized_keys
file of the remote user’s account. Continue on to Step 3 if this was successful.
If you do not have password-based SSH access to your server available, you will have to complete the process manually.
You will manually append the content of your id_rsa.pub
file to the ~/.ssh/authorized_keys
file on your remote machine.
To display the content of your id_rsa.pub
key, type this into your local computer:
- cat ~/.ssh/id_rsa.pub
You will see the key’s content, which should look something like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test
Access your remote host using whichever method you have available.
Once you have access to your account on the remote server, you should make sure the ~/.ssh
directory exists. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys
file within this directory. You can add the contents of your id_rsa.pub
file to the end of the authorized_keys
file, creating it if necessary, using this command:
- echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute the public_key_string
with the output from the cat ~/.ssh/id_rsa.pub
command that you executed on your local system. It should start with ssh-rsa AAAA...
.
Finally, ensure that the ~/.ssh
directory and authorized_keys
file have the appropriate permissions set:
- chmod -R go= ~/.ssh
This recursively removes all “group” and “other” permissions for the ~/.ssh/
directory.
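To verify the result, you can list the directory itself; if the command succeeded, the permissions column should read drwx------, meaning that only the owner has access:
- ls -ld ~/.ssh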
If you’re using the root
account to set up keys for a user account, it’s also important that the ~/.ssh
directory belongs to the user and not to root
. In the following example, the user is named sammy, but you should substitute the appropriate username into the command.
- chown -R sammy:sammy ~/.ssh
You can now attempt passwordless authentication with your CentOS server.
If you have successfully completed one of the procedures above, you should be able to log into the remote host without the remote account’s password.
The basic process is the same:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Type yes
and then press ENTER
to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created it, you will be prompted to enter the passphrase now. After authenticating, a new shell session should open for you with the configured account on the CentOS server.
If key-based authentication was successful, continue on to learn how to further secure your system by disabling password authentication.
If you were able to log in to your account using SSH without a password, you have successfully configured SSH-key-based authentication to your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH-key-based authentication configured for the root account on this server, or preferably, that you have SSH-key-based authentication configured for a non-root account on this server with sudo
privileges. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is crucial.
Once you’ve confirmed that your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo
privileges. Then, open up the SSH daemon’s configuration file:
- sudo vi /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication
. This may be commented out. If it is, press i
to insert text, and then uncomment the line by deleting the #
in front of the PasswordAuthentication
directive. Then, set the value to no
. This will disable your ability to log in via SSH using account passwords:
...
PasswordAuthentication no
...
When you are finished making changes, press ESC
and then :wq
to write the changes to the file and quit. To implement these changes, you need to restart the sshd
service:
- sudo systemctl restart sshd.service
As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing this session:
- ssh username@remote_host
Once you have verified your SSH service, you can safely close all current server sessions.
The SSH daemon on your CentOS server now only responds to SSH keys. Password-based authentication has successfully been disabled.
You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.
If you’d like to learn more about working with SSH, take a look at our SSH Essentials Guide.
]]>Most modern Unix-like operating systems offer a centralized mechanism for finding and installing software. Software is usually distributed in the form of packages, kept in repositories. Working with packages is known as package management. Packages provide the core components of an operating system, along with shared libraries, applications, services, and documentation.
A package management system does much more than one-time installation of software. It also provides tools for upgrading already-installed packages. Package repositories help to ensure that code has been vetted for use on your system, and that the installed versions of software have been approved by developers and package maintainers.
When configuring servers or development environments, it’s often necessary to look beyond official repositories. Packages in the stable release of a distribution may be out of date, especially where new or rapidly-changing software is concerned. Nevertheless, package management is a vital skill for system administrators and developers, and the wealth of packaged software for major distributions is a tremendous resource.
This guide is intended as a quick reference for the fundamentals of finding, installing, and upgrading packages on a variety of distributions, and should help you translate that knowledge between systems.
Most package systems are built around collections of package files. A package file is usually an archive which contains compiled applications and other resources used by the software, along with installation scripts. Packages also contain valuable metadata, including their dependencies, a list of other packages required to install and run them.
While their functionality and benefits are broadly similar, packaging formats and tools vary by platform:
- Debian, Ubuntu, and related distributions: .deb packages installed by apt and dpkg
- Rocky Linux, Fedora, and other Red Hat-family distributions: .rpm packages installed by yum (or its successor, dnf)
- FreeBSD: .txz packages installed by pkg
In Debian and systems based on it, like Ubuntu, Linux Mint, and Raspbian, the package format is the .deb
file. apt
, the Advanced Packaging Tool, provides commands used for most common operations: Searching repositories, installing collections of packages and their dependencies, and managing upgrades. apt
commands operate as a front-end to the lower-level dpkg
utility, which handles the installation of individual .deb
files on the local system, and is sometimes invoked directly.
Recent releases of most Debian-derived distributions include a single apt
command, which offers a concise and unified interface to common operations that have traditionally been handled by the more-specific apt-get
and apt-cache
.
Rocky Linux, Fedora, and other members of the Red Hat family use RPM files. Historically, these distributions used a package manager called yum
. In recent versions of Fedora and its derivatives, yum
has been supplanted by dnf
, a modernized fork which retains most of yum
’s interface.
FreeBSD’s binary package system is administered with the pkg
command. FreeBSD also offers the Ports Collection, a local directory structure and tools which allow the user to fetch, compile, and install packages directly from source using Makefiles. It’s usually much more convenient to use pkg
, but occasionally a pre-compiled package is unavailable, or you may need to change compile-time options.
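For example, building and installing software from the Ports Collection generally follows this pattern (shown here with the www/nginx port as an illustration):
- cd /usr/ports/www/nginx
- sudo make install clean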
Most systems keep a local database of the packages available from remote repositories. It’s best to update this database before installing or upgrading packages. As a partial exception to this pattern, dnf
will check for updates before performing some operations, but you can ask at any time whether updates are available.
- Debian / Ubuntu: sudo apt update
- Rocky Linux / Fedora: dnf check-update
- FreeBSD Packages: sudo pkg update
- FreeBSD Ports: sudo portsnap fetch update
Making sure that all of the installed software on a machine stays up to date would be an enormous undertaking without a package system. You would have to track upstream changes and security alerts for hundreds of different packages. While a package manager doesn’t solve every problem you’ll encounter when upgrading software, it does enable you to maintain most system components with a few commands.
On FreeBSD, upgrading installed ports can introduce breaking changes or require manual configuration steps. It’s best to read /usr/ports/UPDATING
before upgrading with portmaster
.
- Debian / Ubuntu: sudo apt upgrade
- Rocky Linux / Fedora: sudo dnf upgrade
- FreeBSD Packages: sudo pkg upgrade
Most distributions offer a graphical or menu-driven front end to package collections. These can be a good way to browse by category and discover new software. Often, however, the quickest and most effective way to locate a package is to search with command-line tools.
- Debian / Ubuntu: apt search search_string
- Rocky Linux / Fedora: dnf search search_string
- FreeBSD Packages: pkg search search_string
Note: On Rocky, Fedora, or RHEL, you can search package titles and descriptions together by using dnf search all
. On FreeBSD, you can search descriptions by using pkg search -D.
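For example, substituting your own search term:
dnf search all search_string
pkg search -D search_string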
When deciding what to install, it’s often helpful to read detailed descriptions of packages. Along with human-readable text, these often include metadata like version numbers and a list of the package’s dependencies.
- Debian / Ubuntu: apt show package
- Rocky Linux / Fedora: dnf info package
- FreeBSD Packages: pkg info package
- FreeBSD Ports: cd /usr/ports/category/port && cat pkg-descr
Once you know the name of a package, you can usually install it and its dependencies with a single command. In general, you can supply multiple packages to install at once by listing them all.
- Debian / Ubuntu: sudo apt install package
- Rocky Linux / Fedora: sudo dnf install package
- FreeBSD Packages: sudo pkg install package
Sometimes, even though software isn’t officially packaged for a given operating system, a developer or vendor will offer package files for download. You can usually retrieve these with your web browser, or via curl
on the command line. Once a package is on the target system, it can often be installed with a single command.
On Debian-derived systems, dpkg
handles individual package files. If a package has unmet dependencies, gdebi
can often be used to retrieve them from official repositories.
On Rocky Linux, Fedora, or RHEL, dnf
is used to install individual files, and will also handle needed dependencies.
- Debian / Ubuntu: sudo dpkg -i package.deb
- Rocky Linux / Fedora: sudo dnf install package.rpm
- FreeBSD Packages: sudo pkg add package.txz
Since a package manager knows what files are provided by a given package, it can usually remove them cleanly from a system if the software is no longer needed.
- Debian / Ubuntu: sudo apt remove package
- Rocky Linux / Fedora: sudo dnf erase package
- FreeBSD Packages: sudo pkg delete package
In addition to web-based documentation, keep in mind that Unix manual pages (usually referred to as man pages) are available for most commands from the shell. To read a page, use man
:
- man page
In man
, you can navigate with the arrow keys. Press / to search for text within the page, and q to quit.
- Debian / Ubuntu: man apt
- Rocky Linux / Fedora: man dnf
- FreeBSD Packages: man pkg
- FreeBSD Ports: man ports
This guide provides an overview of operations that can be cross-referenced between systems, but only scratches the surface of a complex topic. For greater detail on a given system, you can consult the following resources:
- Documentation on dnf, and an official manual for dnf itself.
- FreeBSD's documentation on pkg.
]]>This site has been going down frequently, so the first thing I checked for was updates. It was running 16.04.2, and the login screen showed 341 packages needed updating, 184 of which were security updates (yikes).
So after reading some docs here, I got a snapshot taken, then ran sudo apt-get update and sudo apt-get upgrade, followed by sudo apt autoremove. That all went great and I got it up to 16.04.7.
It looks though like 16.04.x is old hat, so I tried to upgrade to 18.04 using sudo apt-get dist-upgrade. During that, I got the error message “the essential package ‘ubuntu-minimal’ could not be located”. Searching, I found this:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/1825938
It looks like they were going from cosmic (18) to disco (19), but it was pretty spot on. I found all the lines in sources.list commented out, and some were pointing to xenial while others pointed to bionic. I followed Vivien Milat (d-ubnntu-t)'s lead by un-commenting all the lines and setting all the references to bionic. The upgrade won’t complete, however, and indicates that some of the packages may be broken.
Any thoughts/ideas on what I should do from here? It looks like I should set up another Droplet, do a clean install of 20, then move things over. But I’m not sure how to even start with that. If I had my choice, I’d like to find a way of getting this upgrade to work as it seems simpler, but I really don’t have any clue if it would be or not.
]]>Hope that this is helpful! If you have any questions, post them below!
]]>Package management is one of the fundamental features of a Linux system. The packaging format and the package management tools differ from distribution to distribution, but most distributions use one of two core sets of tools.
For Red Hat Enterprise Linux-based distributions (such as RHEL itself and Rocky Linux), the RPM packaging format and packaging tools like rpm
and yum
are common. The other major family, used by Debian, Ubuntu, and related distributions, uses the .deb
packaging format and tools like apt
and dpkg
.
In recent years, there have been more auxiliary package managers designed to run in parallel with the core apt
and dpkg
tooling: for example, snap provides more portability and sandboxing, and Homebrew, ported from macOS, provides command-line tools which can be installed by individual users to avoid conflicting with system packages.
In this guide, you will learn some of the most common package management tools that system administrators use on Debian and Ubuntu systems. This can be used as a quick reference when you need to know how to accomplish a package management task within these systems.
The Debian/Ubuntu ecosystem employs quite a few different package management tools in order to manage software on the system.
Most of these tools are interrelated and work on the same package databases. Some of these tools attempt to provide high-level interfaces to the packaging system, while other utilities concentrate on providing low-level functionality.
The apt
command is probably the most often used member of the apt
suite of packaging tools. Its main purpose is interfacing with remote repositories maintained by the distribution’s packaging team and performing actions on the available packages.
The apt
suite in general functions by pulling information from remote repositories into a cache maintained on the local system. The apt
command is used to refresh the local cache. It is also used to modify the package state, meaning to install or remove a package from the system.
In general, apt
will be used to update the local cache, and to make modifications to the live system.
Note: In earlier versions of Ubuntu, the core apt
command was known as apt-get
. It has been streamlined, but you can still call it with apt-get
out of habit or for backwards compatibility.
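For example, on modern releases these two commands refresh the local package cache in exactly the same way:
- sudo apt update
- sudo apt-get update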
Another important member of the apt
suite is apt-cache
. This utility uses the local cache to query information about the available packages and their properties.
For instance, any time you wish to search for a specific package or a tool that will perform a certain function, apt-cache
is a good place to start. It can also be informative on what exact package version will be targeted by a procedure. Dependency and reverse dependency information is another area where apt-cache
is useful.
While the previous tools were focused on managing packages maintained in repositories, the dpkg
command can also be used to operate on individual .deb
packages. The dpkg
tool actually is responsible for most of the behind-the-scenes work of the commands above; apt
provides additional housekeeping while dpkg
interacts with the packages themselves.
Unlike the apt
commands, dpkg
does not have the ability to resolve dependencies automatically. Its main feature is the ability to work with .deb
packages directly, and its ability to dissect a package and find out more about its structure. Although it can gather some information about the packages installed on the system, you should not use it as a primary package manager. In the next step, you’ll learn about package upgrade best practices.
The Debian and Ubuntu package management tools help to keep your system’s list of available packages up-to-date. They also provide various methods of updating packages you currently have installed on your server.
The remote repositories that your packaging tools rely on for package information are updated all of the time. However, most Linux package management tools are designed, for historical reasons, to work directly with a local cache of this information. That cache needs to be periodically refreshed.
It is usually a good idea to update your local package cache every session before performing other package commands. This will ensure that you are operating on the most up-to-date information about the available software. Some installation commands will fail if you are operating with stale package information.
To update the local cache, use the apt
command with the update
sub-command:
- sudo apt update
This will pull down an updated list of the available packages in the repositories you are tracking.
The apt
command distinguishes between two different update procedures. The first update procedure (covered in this section) can be used to upgrade any components that do not require component removal. To learn how to update and allow apt
to remove and swap components as necessary, see the section below.
This behavior is important when you do not want to remove any of the installed packages under any circumstances. However, some updates involve replacing system components or removing conflicting files. This procedure will ignore any updates that require package removal:
- sudo apt upgrade
The second procedure will update all packages, even those that require package removal. This is often necessary as dependencies for packages change.
Usually, the packages being removed will be replaced by functional equivalents during the upgrade procedure, so this is generally safe. However, it is a good idea to keep an eye on the packages to be removed, in case some essential components are marked for removal. To perform this action, type:
- sudo apt full-upgrade
This will update all packages on your system. In the next step, you’ll learn about downloading and installing new packages.
The first step when downloading and installing packages is often to search your distribution’s repositories for the packages you are looking for.
Searching for packages is one operation that targets the package cache for information. In order to do this, use apt-cache search
. Keep in mind that you should ensure that your local cache is up-to-date using sudo apt update
prior to searching for packages:
- apt-cache search package
Since this procedure is only querying for information, it does not require sudo
privileges. Any search performed will look at the package names, as well as the full descriptions for packages.
For instance, if you search for htop
, you will see results like these:
- apt-cache search htop
Outputaha - ANSI color to HTML converter
htop - interactive processes viewer
libauthen-oath-perl - Perl module for OATH One Time Passwords
As you can see, you have a package named htop
, but you can also see two other programs, each of which mention htop
in the full description field of the package (the description next to the output is only a short summary).
To install a package from the repositories, as well as all of the necessary dependencies, you can use the apt
command with the install
argument.
The arguments for this command should be the package name or names as they are labeled in the repository:
- sudo apt install package
You can install multiple packages at once, separated by a space:
- sudo apt install package1 package2
If your requested package requires additional dependencies, these will be printed to standard output and you will be asked to confirm the procedure. It will look something like this:
- sudo apt install apache2
OutputReading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
apache2-data
Suggested packages:
apache2-doc apache2-suexec-pristine apache2-suexec-custom
apache2-utils
The following NEW packages will be installed:
apache2 apache2-data
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 236 kB of archives.
After this operation, 1,163 kB of additional disk space will be used.
Do you want to continue [Y/n]?
As you can see, even though our install target was the apache2
package, the apache2-data
package is needed as a dependency. In this case, you can continue by pressing Enter or “Y”, or cancel by typing “n”.
If you need to install a specific version of a package, you can provide the version you would like to target with =
, like this:
- sudo apt install package=version
The version in this case must match one of the package version numbers available in the repository. This means utilizing the versioning scheme employed by your distribution. You can find the available versions by using apt-cache policy package
:
- apt-cache policy nginx
Outputnginx:
Installed: (none)
Candidate: 1.18.0-0ubuntu1.2
Version table:
1.18.0-0ubuntu1.2 500
500 http://mirrors.digitalocean.com/ubuntu focal-updates/main amd64 Packages
500 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages
1.17.10-0ubuntu1 500
500 http://mirrors.digitalocean.com/ubuntu focal/main amd64 Packages
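For example, based on the version table above, you could install the older candidate by specifying its exact version string:
- sudo apt install nginx=1.17.10-0ubuntu1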
Many packages include post-installation configuration scripts that are automatically run after the installation is complete. These often include prompts for the administrator to make configuration choices.
If you need to run through these (and additional) configuration steps at a later time, you can use the dpkg-reconfigure
command. This command looks at the package passed to it and re-runs any post-configuration commands included within the package specification:
- sudo dpkg-reconfigure package
This will allow you access to the same (and often more) prompts that you ran upon installation.
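For instance, a common use is re-running the system time zone selection dialog (assuming the tzdata package is installed, as it is on most Ubuntu systems):
- sudo dpkg-reconfigure tzdata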
Many times, you will want to see the side effects of a procedure without actually committing to executing the command. apt
allows you to add the -s
flag to “simulate” a procedure.
For instance, to see what would be done if you choose to install a package, you can type:
- apt install -s package
This will let you see all of the dependencies and the changes to your system that will take place if you remove the -s
flag. One benefit of this is that you can see the results of a process that would normally require root privileges, without using sudo
.
For instance, if you want to evaluate what would be installed with the apache2
package, you can type:
- apt install -s apache2
OutputNOTE: This is only a simulation!
apt needs root privileges for real execution.
Keep also in mind that locking is deactivated,
so don't depend on the relevance to the real current situation!
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
apache2-data
Suggested packages:
apache2-doc apache2-suexec-pristine apache2-suexec-custom
apache2-utils
The following NEW packages will be installed:
apache2 apache2-data
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Inst apache2-data (2.4.6-2ubuntu2.2 Ubuntu:13.10/saucy-updates [all])
Inst apache2 (2.4.6-2ubuntu2.2 Ubuntu:13.10/saucy-updates [amd64])
Conf apache2-data (2.4.6-2ubuntu2.2 Ubuntu:13.10/saucy-updates [all])
Conf apache2 (2.4.6-2ubuntu2.2 Ubuntu:13.10/saucy-updates [amd64])
You get all of the information about the packages and versions that would be installed, without having to complete the actual process.
This also works with other procedures, like doing system upgrades:
- apt -s dist-upgrade
By default, apt
will prompt the user for confirmation for many processes. This includes installations that require additional dependencies, and package upgrades.
In order to bypass these prompts and default to accepting them, you can pass the -y
flag when performing these operations:
- sudo apt install -y package
This will install the package and any dependencies without further prompting from the user. This can be used for upgrade procedures as well:
- sudo apt dist-upgrade -y
There are times when an installation may not finish successfully due to dependencies or other problems. One common scenario where this may happen is when installing a .deb
package with dpkg
, which does not resolve dependencies.
The apt
command can attempt to sort out this situation by passing it the -f
flag:
- sudo apt install -f
This will search for any dependencies that are not satisfied and attempt to install them to fix the dependency tree. If your installation complained about a dependency problem, this should be your first step in attempting to resolve it. If you aren’t able to resolve an issue this way, and you installed a third-party package, you should remove it and look for a newer version that is more actively maintained.
There are many instances where it may be helpful to download a package from the repositories without actually installing it. You can do this by running apt
with the download
argument.
Because this is only downloading a file and not impacting the actual system, no sudo
privileges are required:
- apt download package
This will download the specified package(s) to the current directory.
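For example, downloading nginx would leave a .deb file in your working directory, named like the Filename field shown in the apt-cache show output earlier:
- apt download nginx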
Although most distributions recommend installing software from their maintained repositories, some vendors supply raw .deb
files which you can install on your system.
In order to do this, you use dpkg
. dpkg
is mainly used to work with individual packages. It does not attempt to perform installs from the repository, and instead looks for .deb
packages in the current directory, or the path supplied:
- sudo dpkg --install debfile.deb
It is important to note that the dpkg
tool does not implement any dependency handling. This means that if there are any unmet dependencies, the installation will fail. However, it marks the dependencies needed, so if all of the dependencies are available within the repositories, you can satisfy them by typing this afterwards:
- sudo apt install -f
This will install any unmet dependencies, including those marked by dpkg
. In the next step, you’ll learn about removing some of the packages you’ve installed.
This section will discuss how to uninstall packages and clean up the files that may be left behind by package operations.
In order to remove an installed package, you use apt remove
. This will remove most of the files that the package installed to the system, with one notable exception.
This command leaves configuration files in place so that your configuration will remain available if you need to reinstall the application at a later date. This is helpful because it means that any configuration files that you customized won’t be removed if you accidentally get rid of a package.
To complete this operation, you need to provide the name of the package you wish to uninstall:
- sudo apt remove package
The package will be uninstalled with the exception of your configuration files.
If you wish to remove a package and all associated files from your system, including configuration files, you can use apt purge
.
Unlike the remove
command mentioned above, the purge
command removes everything. This is useful if you do not want to save the configuration files or if you are having issues and want to start from a clean slate.
Keep in mind that once your configuration files are removed, you won’t be able to get them back:
- sudo apt purge package
Now, if you ever need to reinstall that package, the default configuration will be used.
When removing packages from your system with apt remove
or apt purge
, the package target will be removed. However, any dependencies that were automatically installed in order to fulfill the installation requirements will remain behind.
In order to automatically remove any packages that were installed as dependencies that are no longer required by any packages, you can use the autoremove
command:
- sudo apt autoremove
If you wish to remove all of the associated configuration files from the dependencies being removed, you will want to add the --purge
option to the autoremove
command. This will clean up configuration files as well, just like the purge
command does for a targeted removal:
- sudo apt --purge autoremove
As packages are added and removed from the repositories by a distribution’s package maintainers, some packages will become obsolete.
The apt
tool can remove any package files on the local system that are associated with packages that are no longer available from the repositories by using the autoclean
command.
This will free up space on your server and remove any potentially outdated packages from your local cache.
- sudo apt autoclean
In the next step, you’ll learn more ways of querying packages without necessarily installing them.
Each package contains a large amount of metadata that can be accessed using the package management tools. This section will demonstrate some common ways to get information about available and installed packages.
To show detailed information about a package in your distribution’s repositories, you can use the apt-cache show
command. The target of this command is a package name within the repository:
- apt-cache show nginx
This will display information about any installation candidates for the package in question. Each candidate will have information about its dependencies, version, architecture, conflicts, the actual package file name, the size of the package and installation, and a detailed description among other things.
OutputPackage: nginx
Architecture: all
Version: 1.18.0-0ubuntu1.2
Priority: optional
Section: web
Origin: Ubuntu
Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
Original-Maintainer: Debian Nginx Maintainers <pkg-nginx-maintainers@lists.alioth.debian.org>
Bugs: https://bugs.launchpad.net/ubuntu/+filebug
Installed-Size: 44
Depends: nginx-core (<< 1.18.0-0ubuntu1.2.1~) | nginx-full (<< 1.18.0-0ubuntu1.2.1~) | nginx-light (<< 1.18.0-0ubuntu1.2.1~) | nginx-extras (<< 1.18.0-0ubuntu1.2.1~), nginx-core (>= 1.18.0-0ubuntu1.2) | nginx-full (>= 1.18.0-0ubuntu1.2) | nginx-light (>= 1.18.0-0ubuntu1.2) | nginx-extras (>= 1.18.0-0ubuntu1.2)
Filename: pool/main/n/nginx/nginx_1.18.0-0ubuntu1.2_all.deb
…
To show additional information about each of the candidates, including a full list of reverse dependencies (a list of packages that depend on the queried package), use the showpkg
command instead. This will include information about this package’s relationship to other packages:
- apt-cache showpkg package
To show details about a .deb
file, you can use the --info
flag with the dpkg
command. The target of this command should be the path to a .deb
file:
- dpkg --info debfile.deb
This will show you some metadata about the package in question. This includes the package name and version, the architecture it was built for, the size and dependencies required, a description and conflicts.
To specifically list the dependencies (packages this package relies on) and the reverse dependencies (the packages that rely on this package), you can use the apt-cache
utility.
For conventional dependency information, you can use the depends
sub-command:
- apt-cache depends nginx
Outputnginx
|Depends: nginx-core
|Depends: nginx-full
|Depends: nginx-light
Depends: nginx-extras
|Depends: nginx-core
|Depends: nginx-full
|Depends: nginx-light
Depends: nginx-extras
This will show information about every package that is listed as a hard dependency, suggestion, recommendation, or conflict.
If you need to find out which packages depend on a certain package, you can pass that package to apt-cache rdepends
:
- apt-cache rdepends package
Often, there are multiple versions of a package within the repositories, with a single default package. To see the available versions of a package you can use apt-cache policy
:
- apt-cache policy package
This will show you which version is installed (if any), the package that will be installed by default if you do not specify a version with the installation command, and a table of package versions, complete with the weight that indicates each version’s priority.
This can be used to determine what version will be installed and which alternatives are available. Because this also lists the repositories where each version is located, this can be used for determining if any extra repositories are superseding the packages from the default repositories.
To show the packages installed on your system, you have a few separate options, which vary in format and verbosity of output.
The first method involves using either the dpkg
or the dpkg-query
command with the -l
flag. The output from both of these commands is identical. With no arguments, it gives a list of every installed or partially installed package on the system. The output will look like this:
- dpkg -l
OutputDesired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-===========================================-=======================================-============-=====================================================================================================================
ii account-plugin-generic-oauth 0.10bzr13.03.26-0ubuntu1.1 amd64 GNOME Control Center account plugin for single signon - generic OAuth
ii accountsservice 0.6.34-0ubuntu6 amd64 query and manipulate user account information
ii acl 2.2.52-1 amd64 Access control list utilities
ii acpi-support 0.142 amd64 scripts for handling many ACPI events
ii acpid 1:2.0.18-1ubuntu2 amd64 Advanced Configuration and Power Interface event daemon
. . .
The output continues for every package on the system. At the top of the output, you can see the meanings of the first three characters on each line. The first character indicates the desired state of the package. It can be:
- u: Unknown
- i: Install
- r: Remove
- p: Purge
- h: Hold
The second character indicates the actual status of the package as known to the packaging system. These can be:
- n: Not installed
- i: Installed
- c: Config-files only remain
- u: Unpacked
- f: Half-configured
- h: Half-installed
- w: Triggers awaited
- t: Triggers pending
The third character, which will be a blank space for most packages, only has one potential other option:
- R: Reinstall required, indicating a broken package
The rest of the columns contain the package name, version, architecture, and a description.
If you add a search pattern after the -l
flag, dpkg
will list all packages (whether installed or not) that contain that pattern. For instance, you can search for YAML processing libraries like this:
- dpkg -l libyaml*
OutputDesired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-===============-============-============-===================================
ii libyaml-0-2:amd 0.1.4-2ubunt amd64 Fast YAML 1.1 parser and emitter li
ii libyaml-dev:amd 0.1.4-2ubunt amd64 Fast YAML 1.1 parser and emitter li
un libyaml-perl <none> (no description available)
un libyaml-syck-pe <none> (no description available)
ii libyaml-tiny-pe 1.51-2 all Perl module for reading and writing
As you can see from the first column, the third and fourth results are not installed. This gives you every package that matches the pattern, as well as their current and desired states.
An alternative way to render the packages that are installed on your system is with dpkg --get-selections
.
This provides a list of all of the packages installed or removed but not purged:
- dpkg --get-selections
To differentiate between these two states, you can pipe output from dpkg
to awk
in order to filter by state. To see only installed packages, type:
- dpkg --get-selections | awk '$2 ~ /^install/'
To get a list of removed packages that have not had their configuration files purged, you can instead type:
- dpkg --get-selections | awk '$2 !~ /^install/'
You may also want to learn more about piping command output through awk.
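As a quick sanity check, you can combine the same awk filter with wc to count how many packages are currently installed:
- dpkg --get-selections | awk '$2 ~ /^install/' | wc -l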
To search your installed package base for a specific package, you can add a package filter string after the --get-selections
option. This supports matching with wildcards. Again, this will show any packages that are installed or that still have configuration files on the system:
- dpkg --get-selections libz*
You can, once again, filter using the awk
expressions from the last section.
To find out which files a package is responsible for, you can use the -L
flag with the dpkg
command:
- dpkg -L package
This will print out the absolute path of each file that is controlled by the package. This will not include any configuration files that are generated by processes within the package.
To find out which package is responsible for a certain file in your filesystem, you can pass the absolute path to the dpkg
command with the -S
flag.
This will print out the package that installed the file in question:
- dpkg -S /path/to/file
Keep in mind that any files that are moved into place by post installation scripts cannot be tied back to the package with this technique.
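For example, querying a common system binary on a typical Ubuntu system prints the owning package followed by the path:
- dpkg -S /bin/ls
Outputcoreutils: /bin/ls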
Using dpkg
, you can find out which package owns a file using the -S
option. However, there are times when you may need to know which package provides a file or command, even if you may not have the associated package installed.
To do so, you will need to install a utility called apt-file
. This maintains its own database of information, which includes the installation path of every file controlled by a package in the database.
Install the apt-file
package as normal:
- sudo apt update
- sudo apt install apt-file
Now, update the tool’s database and search for a file by typing:
- sudo apt-file update
- sudo apt-file search /path/to/file
This will only work for file locations that are installed directly by a package. Any file that is created through post-installation scripts cannot be queried. In the next step, you’ll learn how to import and export lists of installed packages.
Many times, you may need to back up the list of installed packages from one system and use it to install an identical set of packages on a different system. This is also helpful for backup purposes. This section will demonstrate how to export and import package lists.
If you need to replicate the set of packages installed on one system to another, you will first need to export your package list.
You can export the list of installed packages to a file by redirecting the output of dpkg --get-selections
to a text file:
- dpkg --get-selections > ~/packagelist.txt
You may also want to learn more about input and output redirection.
This list can then be copied to the second machine and imported.
You also may need to back up your sources lists and your trusted key list. You can back up your sources by creating a new directory and copying them over from the system configuration in /etc/apt/
:
- mkdir ~/sources
- cp -R /etc/apt/sources.list* ~/sources
Any keys which you’ve added in order to install packages from third-party repositories can be exported using apt-key exportall
:
- apt-key exportall > ~/trusted_keys.txt
You can now transfer the packagelist.txt
file, the sources
directory, and the trusted_keys.txt
file to another computer to import.
If you have created a package list using dpkg --get-selections
as demonstrated above, you can import the packages on another computer using the dpkg
command as well.
First, you need to add the trusted keys and implement the sources lists you copied from the first environment. Assuming that all of the data you backed up has been copied to the home directory of the new computer, you could type:
- sudo apt-key add ~/trusted_keys.txt
- sudo cp -R ~/sources/* /etc/apt/
Next, clear the state of all non-essential packages from the new computer. This will ensure that you are applying the changes to a clean slate. This must be done with the root account or sudo
privileges:
- sudo dpkg --clear-selections
This will mark all non-essential packages for deinstallation. You should update the local package list so that your installation will have records for all of the software you plan to install. The actual installation and upgrade procedure will be handled by a tool called dselect
.
You should ensure that the dselect
tool is installed. This tool maintains its own database, so you also need to update that before you can continue:
- sudo apt update
- sudo apt install dselect
- sudo dselect update
Next, you can apply the package list on top of the current list to configure which packages should be kept or downloaded:
- sudo dpkg --set-selections < packagelist.txt
This sets the correct package states. To apply the changes, run apt dselect-upgrade
:
- sudo apt dselect-upgrade
This will download and install any necessary packages. It will also remove any packages marked for deselection. In the end, your package list should match that of the previous computer, although configuration files will still need to be copied or modified. You may want to use a tool like etckeeper to migrate configuration files from the /etc
directory.
In the next and final step, you’ll learn about working with third party package repositories.
Although the default set of repositories provided by most distributions are generally the most maintainable, there are times when additional sources may be helpful. In this section, you’ll learn how to configure your packaging tools to consult additional sources.
An alternative to traditional repositories on Ubuntu are PPAs, or personal package archives. Other Linux flavors typically use different, but similar, concepts of third-party repositories. Usually, PPAs have a smaller scope than repositories and contain focused sets of applications maintained by the PPA owner.
Adding PPAs to your system allows you to manage the packages they contain with your usual package management tools. This can be used to provide more up-to-date packages that are not included with the distribution’s repositories. Take care that you only add PPAs that you trust, as you will be allowing a non-standard maintainer to build packages for your system.
To add a PPA, you can use the add-apt-repository
command. The target should include the label ppa:
, followed by the PPA owner’s name on Launchpad, a slash, and the PPA name:
- sudo add-apt-repository ppa:owner_name/ppa_name
You may be asked to accept the packager’s key. Afterwards, the PPA will be added to your system, allowing you to install the packages with the normal apt
commands. Before searching for or installing packages, make sure to update your local cache with the information about your new PPA:
- sudo apt update
You can also edit your repository configuration directly. You can either edit the /etc/apt/sources.list
file or place a new list in the /etc/apt/sources.list.d
directory. If you go this latter route, the filename you create must end in .list
:
- sudo nano /etc/apt/sources.list.d/new_repo.list
Inside the file, you can add the location of the new repository by using the following format:
deb_or_deb-src url_of_repo release_code_name_or_suite component_names
The different parts of the repository specification are:
- deb or deb-src: This identifies the type of repository. Conventional repositories are marked with deb, while source repositories begin with deb-src.
- url_of_repo: The main URL where the repository can be reached.
- release_code_name_or_suite: The code name of your distribution's release (for example, focal for Ubuntu 20.04).
- component_names: The names of the components you want to pull from (for example, main).
You can add these lines within the file. Most repositories will contain information about the exact format that should be used. On some other Linux distributions, you can add additional repository sources by actually installing packages which contain only a configuration file for that repository, which is consistent with the way that package managers are designed to work.
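For example, a hypothetical entry for the main component of Ubuntu's focal release, using the mirror that appeared in the apt-cache policy output earlier, would look like this:
deb http://mirrors.digitalocean.com/ubuntu focal main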
Package management is perhaps the single most important aspect of administering a Linux system. There are many other package management operations that you can perform, but this tutorial has provided a baseline of Ubuntu fundamentals, many of which are generalizable to other distributions with minor changes.
Next, you may want to learn more about package management on other platforms.
]]>Every computer system benefits from proper administration and monitoring. Keeping an eye on how your system is running will help you discover issues and resolve them quickly.
There are plenty of command line utilities created for this purpose. This guide will introduce you to some of the most helpful applications to have in your toolbox.
To follow along with this guide, you will need access to a computer running a Linux-based operating system. This can either be a virtual private server which you’ve connected to with SSH or your local machine. Note that this tutorial was validated using a Linux server running Ubuntu 20.04, but the examples given should work on a computer running any version of any Linux distribution.
If you plan to use a remote server to follow this guide, we encourage you to first complete our Initial Server Setup guide. Doing so will set you up with a secure server environment — including a non-root user with sudo
privileges and a firewall configured with UFW — which you can use to build your Linux skills.
You can see all of the processes running on your server by using the top
command:
- top
Outputtop - 15:14:40 up 46 min, 1 user, load average: 0.00, 0.01, 0.05
Tasks: 56 total, 1 running, 55 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1019600k total, 316576k used, 703024k free, 7652k buffers
Swap: 0k total, 0k used, 0k free, 258976k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 24188 2120 1300 S 0.0 0.2 0:00.56 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.07 ksoftirqd/0
6 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
7 root RT 0 0 0 0 S 0.0 0.0 0:00.03 watchdog/0
8 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 cpuset
9 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 khelper
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kdevtmpfs
The first several lines of output provide system statistics, such as CPU/memory load and the total number of running tasks.
You can see that there is 1 running process, and 55 processes that are considered to be sleeping because they are not actively using CPU cycles.
The remainder of the displayed output shows the running processes and their usage statistics. By default, top
automatically sorts these by CPU usage, so you can see the busiest processes first. top
will continue running in your shell until you stop it using the standard key combination of Ctrl+C
to exit a running process. This sends a kill
signal, instructing the process to stop gracefully if it is able to.
An improved version of top
, called htop
, is available in most package repositories. On Ubuntu 20.04, you can install it with apt
:
- sudo apt install htop
After that, the htop
command will be available:
- htop
Output Mem[||||||||||| 49/995MB] Load average: 0.00 0.03 0.05
CPU[ 0.0%] Tasks: 21, 3 thr; 1 running
Swp[ 0/0MB] Uptime: 00:58:11
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
1259 root 20 0 25660 1880 1368 R 0.0 0.2 0:00.06 htop
1 root 20 0 24188 2120 1300 S 0.0 0.2 0:00.56 /sbin/init
311 root 20 0 17224 636 440 S 0.0 0.1 0:00.07 upstart-udev-brid
314 root 20 0 21592 1280 760 S 0.0 0.1 0:00.06 /sbin/udevd --dae
389 messagebu 20 0 23808 688 444 S 0.0 0.1 0:00.01 dbus-daemon --sys
407 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.02 rsyslogd -c5
408 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.00 rsyslogd -c5
409 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.00 rsyslogd -c5
406 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.04 rsyslogd -c5
553 root 20 0 15180 400 204 S 0.0 0.0 0:00.01 upstart-socket-br
htop
provides better visualization of multiple CPU threads, better awareness of color support in modern terminals, and more sorting options, among other features. Unlike top
, it is not always installed by default, but can be considered a drop-in replacement. You can exit htop
by pressing Ctrl+C
as with top
.
Here are some keyboard shortcuts that will help you use htop more effectively:
- M: Sort processes by memory usage
- P: Sort processes by processor usage
- ?: Access help
- k: Kill the current or tagged process
- F5: Toggle a tree view of processes
- space: Tag a single process
There are many other options that you can access through help or setup. These should be your first stops in exploring htop’s functionality. In the next step, you’ll learn how to monitor your network bandwidth.
If your network connection seems overutilized and you are unsure which application is the culprit, a program called nethogs
is a good choice for finding out.
On Ubuntu, you can install nethogs with the following command:
- sudo apt install nethogs
After that, the nethogs
command will be available:
- nethogs
OutputNetHogs version 0.8.0
PID USER PROGRAM DEV SENT RECEIVED
3379 root /usr/sbin/sshd eth0 0.485 0.182 KB/sec
820 root sshd: root@pts/0 eth0 0.427 0.052 KB/sec
? root unknown TCP 0.000 0.000 KB/sec
TOTAL 0.912 0.233 KB/sec
nethogs
associates each application with its network traffic.
There are only a few commands that you can use to control nethogs:
- m: Change the display units between “kb/s”, “kb”, “b”, and “mb”
- r: Sort by traffic received
- s: Sort by traffic sent
- q: Quit the program
iptraf-ng
is another way to monitor network traffic. It provides a number of different interactive monitoring interfaces.
Note: IPTraf requires a screen size of at least 80 columns by 24 lines.
On Ubuntu, you can install iptraf-ng
with the following command:
- sudo apt install iptraf-ng
iptraf-ng
needs to be run with root privileges, so you should precede it with sudo
:
- sudo iptraf-ng
You’ll be presented with a menu that uses a popular command line interface framework called ncurses
.
With this menu, you can select which interface you would like to access.
For example, to get an overview of all network traffic, you can select the first menu and then “All interfaces”. On the resulting screen, you can see which IP addresses you are communicating with on all of your network interfaces.
If you would like to have those IP addresses resolved into domains, you can enable reverse DNS lookup by exiting the traffic screen, selecting Configure
and then toggling on Reverse DNS lookups
.
You can also enable TCP/UDP service names
to see the names of the services being run instead of the port numbers.
With both of these options enabled, the display will show domain names and service names in place of raw IP addresses and port numbers.
The netstat
command is another versatile tool for gathering network information.
netstat
is installed by default on most modern systems, but you can install it yourself by downloading it from your server’s default package repositories. On most Linux systems, including Ubuntu, the package containing netstat
is net-tools
:
- sudo apt install net-tools
By default, the netstat
command on its own prints a list of open sockets:
- netstat
OutputActive Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 192.241.187.204:ssh ip223.hichina.com:50324 ESTABLISHED
tcp 0 0 192.241.187.204:ssh rrcs-72-43-115-18:50615 ESTABLISHED
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
unix 5 [ ] DGRAM 6559 /dev/log
unix 3 [ ] STREAM CONNECTED 9386
unix 3 [ ] STREAM CONNECTED 9385
. . .
If you add an -a
option, it will list all ports, listening and non-listening:
- netstat -a
OutputActive Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 192.241.187.204:ssh rrcs-72-43-115-18:50615 ESTABLISHED
tcp6 0 0 [::]:ssh [::]:* LISTEN
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
unix 2 [ ACC ] STREAM LISTENING 6195 @/com/ubuntu/upstart
unix 2 [ ACC ] STREAM LISTENING 7762 /var/run/acpid.socket
unix 2 [ ACC ] STREAM LISTENING 6503 /var/run/dbus/system_bus_socket
. . .
If you’d like to filter to see only TCP or UDP connections, use the -t
or -u
flags respectively:
- netstat -at
OutputActive Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 192.241.187.204:ssh rrcs-72-43-115-18:50615 ESTABLISHED
tcp6 0 0 [::]:ssh [::]:* LISTEN
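One particularly useful combination shows only listening TCP sockets along with numeric ports and the program that owns each one (seeing other users’ program names requires root privileges):
- sudo netstat -plnt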
See statistics by passing the -s flag:
- netstat -s
OutputIp:
13500 total packets received
0 forwarded
0 incoming packets discarded
13500 incoming packets delivered
3078 requests sent out
16 dropped because of missing route
Icmp:
41 ICMP messages received
0 input ICMP message failed.
ICMP input histogram:
echo requests: 1
echo replies: 40
. . .
If you would like to continuously update the output, you can use the -c
flag. There are many other options available to netstat, which you can learn about by reviewing its manual page.
In the next step, you’ll learn some useful ways of monitoring your disk usage.
For a quick overview of how much disk space is left on your attached drives, you can use the df
program.
Without any options, its output looks like this:
- df
OutputFilesystem 1K-blocks Used Available Use% Mounted on
/dev/vda 31383196 1228936 28581396 5% /
udev 505152 4 505148 1% /dev
tmpfs 203920 204 203716 1% /run
none 5120 0 5120 0% /run/lock
none 509800 0 509800 0% /run/shm
This outputs disk usage in 1K blocks, which may be a bit hard to read.
To fix this problem, you can specify output in a human-readable format:
- df -h
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda 30G 1.2G 28G 5% /
udev 494M 4.0K 494M 1% /dev
tmpfs 200M 204K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
If you want to see the total disk space available on all filesystems, you can pass the --total
option. This will add a row at the bottom with summary information:
- df -h --total
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda 30G 1.2G 28G 5% /
udev 494M 4.0K 494M 1% /dev
tmpfs 200M 204K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
total 32G 1.2G 29G 4%
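You can also pass df a path to report only on the filesystem containing that path. For example, to check the root filesystem alone:
- df -h /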
df
can provide a useful overview. Another command, du
, gives a breakdown by directory.
du
will analyze usage for the current directory and any subdirectories. The default output of du
running in a nearly-empty home directory looks like this:
- du
Output4 ./.cache
8 ./.ssh
28 .
Once again, you can specify human-readable output by passing it -h
:
- du -h
Output4.0K ./.cache
8.0K ./.ssh
28K .
To see file sizes as well as directories, type the following:
- du -a
Output0 ./.cache/motd.legal-displayed
4 ./.cache
4 ./.ssh/authorized_keys
8 ./.ssh
4 ./.profile
4 ./.bashrc
4 ./.bash_history
28 .
For a total at the bottom, you can add the -c
option:
- du -c
Output4 ./.cache
8 ./.ssh
28 .
28 total
If you are only interested in the total and not the specifics, you can issue:
- du -s
Output28 .
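These flags combine as you’d expect. For example, to get a single human-readable total for one directory tree (/var/log is just an example path; sudo avoids permission errors on files you don’t own):
- sudo du -sh /var/log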
There is also an ncurses
interface for du
, appropriately called ncdu
, that you can install:
- sudo apt install ncdu
This will graphically represent your disk usage:
- ncdu
Output--- /root ----------------------------------------------------------------------
8.0KiB [##########] /.ssh
4.0KiB [##### ] /.cache
4.0KiB [##### ] .bashrc
4.0KiB [##### ] .profile
4.0KiB [##### ] .bash_history
You can step through the filesystem by using the up and down arrows and pressing Enter on any directory entry.
In the last section, you’ll learn how to monitor your memory usage.
You can check the current memory usage on your system by using the free
command.
When used without options, the output looks like this:
- free
Output total used free shared buff/cache available
Mem: 1004896 390988 123484 3124 490424 313744
Swap: 0 0 0
To display in a more readable format, you can pass the -m
option to display the output in megabytes:
- free -m
Output total used free shared buff/cache available
Mem: 981 382 120 3 478 306
Swap: 0 0 0
The Mem
line includes the memory used for buffering and caching, which is freed up as soon as needed for other purposes. Swap
is memory that has been written to a swapfile
on disk in order to conserve active memory.
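If you’d rather have free pick appropriate units for each value automatically, you can pass the -h flag instead of -m:
- free -h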
Finally, the vmstat
command can output various information about your system, including memory, swap, disk I/O, and CPU activity.
You can use the command to get another view into memory usage:
- vmstat
Outputprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 0 99340 123712 248296 0 0 0 1 9 3 0 0 100 0
You can see this in megabytes by specifying units with the -S
flag:
- vmstat -S M
Outputprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 0 96 120 242 0 0 0 1 9 3 0 0 100 0
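vmstat also accepts a delay and a count, which lets you sample repeatedly and watch the values change over time. For example, to take a reading every 5 seconds, 10 times:
- vmstat -S M 5 10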
To get some general statistics about memory usage, type:
- vmstat -s -S M
Output 495 M total memory
398 M used memory
252 M active memory
119 M inactive memory
96 M free memory
120 M buffer memory
242 M swap cache
0 M total swap
0 M used swap
0 M free swap
. . .
To get information about the kernel’s cache usage (its slab allocations), type:
- vmstat -m -S M
OutputCache Num Total Size Pages
ext4_groupinfo_4k 195 195 104 39
UDPLITEv6 0 0 768 10
UDPv6 10 10 768 10
tw_sock_TCPv6 0 0 256 16
TCPv6 11 11 1408 11
kcopyd_job 0 0 2344 13
dm_uevent 0 0 2464 13
bsg_cmd 0 0 288 14
. . .
This will give you details about what kind of information is stored in the cache.
Using these tools, you should begin to be able to monitor your server from the command line. There are many other monitoring utilities that are used for different purposes, but these are a good starting point.
Next, you may want to learn about Linux process management using ps, kill, and nice.
A Linux server, like any modern computer, runs multiple applications. These are referred to and managed as individual processes.
While Linux will handle the low-level, behind-the-scenes management in a process’s life-cycle – i.e., startup, shutdown, memory allocation, and so on – you will need a way of interacting with the operating system to manage them from a higher level.
In this guide, you will learn some fundamental aspects of process management. Linux provides a number of standard, built-in tools for this purpose.
You will explore these ideas in an Ubuntu 20.04 environment, but any modern Linux distribution will operate in a similar way.
You can see all of the processes running on your server by using the top
command:
- top
Outputtop - 15:14:40 up 46 min, 1 user, load average: 0.00, 0.01, 0.05
Tasks: 56 total, 1 running, 55 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1019600k total, 316576k used, 703024k free, 7652k buffers
Swap: 0k total, 0k used, 0k free, 258976k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 24188 2120 1300 S 0.0 0.2 0:00.56 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.07 ksoftirqd/0
6 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0
7 root RT 0 0 0 0 S 0.0 0.0 0:00.03 watchdog/0
8 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 cpuset
9 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 khelper
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kdevtmpfs
The first several lines of output provide system statistics, such as CPU/memory load and the total number of running tasks.
You can see that there is 1 running process, and 55 processes that are considered to be sleeping because they are not actively using CPU cycles.
The remainder of the displayed output shows the running processes and their usage statistics. By default, top
automatically sorts these by CPU usage, so you can see the busiest processes first. top
will continue running in your shell until you stop it using the standard key combination of Ctrl+C
to exit a running process. This sends an interrupt signal (SIGINT), instructing the process to stop gracefully if it is able to.
An improved version of top
, called htop
, is available in most package repositories. On Ubuntu 20.04, you can install it with apt
:
- sudo apt install htop
After that, the htop
command will be available:
- htop
Output Mem[||||||||||| 49/995MB] Load average: 0.00 0.03 0.05
CPU[ 0.0%] Tasks: 21, 3 thr; 1 running
Swp[ 0/0MB] Uptime: 00:58:11
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
1259 root 20 0 25660 1880 1368 R 0.0 0.2 0:00.06 htop
1 root 20 0 24188 2120 1300 S 0.0 0.2 0:00.56 /sbin/init
311 root 20 0 17224 636 440 S 0.0 0.1 0:00.07 upstart-udev-brid
314 root 20 0 21592 1280 760 S 0.0 0.1 0:00.06 /sbin/udevd --dae
389 messagebu 20 0 23808 688 444 S 0.0 0.1 0:00.01 dbus-daemon --sys
407 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.02 rsyslogd -c5
408 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.00 rsyslogd -c5
409 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.00 rsyslogd -c5
406 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.04 rsyslogd -c5
553 root 20 0 15180 400 204 S 0.0 0.0 0:00.01 upstart-socket-br
htop
provides better visualization of multiple CPU threads, better awareness of color support in modern terminals, and more sorting options, among other features. Unlike top
, it is not always installed by default, but it can be considered a drop-in replacement. You can exit htop
by pressing Ctrl+C
as with top
. You can also learn more about how to use top and htop.
In the next section, you’ll learn about how to use tools to query specific processes.
top
and htop
provide a dashboard interface to view running processes similar to a graphical task manager. A dashboard interface can provide an overview, but usually does not return directly actionable output. For this, Linux provides another standard command called ps
to query running processes.
Running ps
without any arguments provides very little information:
- ps
Output PID TTY TIME CMD
1017 pts/0 00:00:00 bash
1262 pts/0 00:00:00 ps
This output shows all of the processes associated with the current user and terminal session. In this case, that is only the bash shell and the ps command itself.
To get a more complete picture of the processes on this system, you can run ps aux
:
- ps aux
OutputUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.2 24188 2120 ? Ss 14:28 0:00 /sbin/init
root 2 0.0 0.0 0 0 ? S 14:28 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S 14:28 0:00 [ksoftirqd/0]
root 6 0.0 0.0 0 0 ? S 14:28 0:00 [migration/0]
root 7 0.0 0.0 0 0 ? S 14:28 0:00 [watchdog/0]
root 8 0.0 0.0 0 0 ? S< 14:28 0:00 [cpuset]
root 9 0.0 0.0 0 0 ? S< 14:28 0:00 [khelper]
…
These options tell ps
to show processes owned by all users (regardless of their terminal association) in a more human-readable format.
By making use of pipes, you can search within the output of ps aux
using grep
, in order to find a specific process. This is useful if you believe it has crashed, or if you need to stop it for some reason.
- ps aux | grep bash
Outputsammy 41664 0.7 0.0 34162880 2528 s000 S 1:35pm 0:00.04 -bash
sammy 41748 0.0 0.0 34122844 828 s000 S+ 1:35pm 0:00.00 grep bash
This returns both the grep
process you just ran, and the bash
shell that’s currently running. It also returns their total memory and CPU usage, how long they’ve been running, and their process IDs. In Linux and Unix-like systems, each process is assigned a process ID, or PID. This is how the operating system identifies and keeps track of processes.
A quick way of getting the PID of a process is with the pgrep
command:
- pgrep bash
Output1017
The first process spawned at boot, called init, is given the PID of “1”.
- pgrep init
Output1
This process is then responsible for spawning every other process on the system. The later processes are given larger PID numbers.
A process’s parent is the process that was responsible for spawning it. Each process records its parent’s PID as a PPID, which you can see in the column headers in many process management applications, including top
, htop
and ps
.
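If you want to see the PPID for yourself, you can ask ps for a custom set of columns. Here, -e selects every process and -o chooses the columns to display:
- ps -eo pid,ppid,comm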
Any communication between the user and the operating system about processes involves translating between process names and PIDs at some point during the operation. This is why these utilities will always include the PID in their output. In the next section, you’ll learn how to use PIDs to send stop, resume, or other signals to running processes.
All processes in Linux respond to signals. Signals are an operating system-level way of telling programs to terminate or modify their behavior.
The most common way of passing signals to a program is with the kill
command. As you might expect, the default functionality of this utility is to attempt to kill a process:
- kill PID_of_target_process
This sends the TERM signal to the process. The TERM signal tells the process to please terminate. This allows the program to perform clean-up operations and exit smoothly.
If the program is misbehaving and does not exit when given the TERM signal, you can escalate the signal by passing the KILL
signal:
- kill -KILL PID_of_target_process
This is a special signal that is not sent to the program.
Instead, it is given to the operating system kernel, which shuts down the process. This is used to bypass programs that ignore the signals sent to them.
Each signal has an associated number that can be passed instead of the name. For instance, you can pass “-15” instead of “-TERM”, and “-9” instead of “-KILL”.
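For example, the following two commands send the same signal:
- kill -TERM PID_of_target_process
- kill -15 PID_of_target_process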
Signals are not only used to shut down programs. They can also be used to perform other actions.
For instance, many processes that are designed to run constantly in the background (sometimes called “daemons”) will automatically restart when they are given the HUP
, or hang-up, signal. The Apache web server typically operates this way.
- sudo kill -HUP pid_of_apache
The above command will cause Apache to reload its configuration file and resume serving content.
Note: Many background processes like this are managed through system services which provide an additional surface for interacting with them, and it is usually preferable to restart the service itself rather than sending a HUP
signal directly to one running process. If you review the configuration files of various services, you may find that the various service restart
hooks are designed to do exactly that – send signals to specific processes – while also providing logs and other reporting.
You can list all of the signals that are possible to send with kill
with the -l
flag:
- kill -l
Output1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1
11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM
Although the conventional way of sending signals is through the use of PIDs, there are also methods of doing this with regular process names.
The pkill
command works in almost exactly the same way as kill
, but it operates on a process name instead:
- pkill -9 ping
The above command is the equivalent of:
- kill -9 `pgrep ping`
If you would like to send a signal to every instance of a certain process, you can use the killall
command:
- killall firefox
The above command will send the TERM signal to every instance of firefox
running on the computer.
Often, you will want to adjust which processes are given priority in a server environment.
Some processes might be considered mission critical for your situation, while others may be executed whenever there are leftover resources.
Linux controls priority through a value called niceness.
High priority tasks are considered less nice, because they don’t share resources as well. Low priority processes, on the other hand, are nice because they insist on only taking minimal resources.
When you ran top
at the beginning of the article, there was a column marked “NI”. This is the nice value of the process:
- top
OutputTasks: 56 total, 1 running, 55 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1019600k total, 324496k used, 695104k free, 8512k buffers
Swap: 0k total, 0k used, 0k free, 264812k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1635 root 20 0 17300 1200 920 R 0.3 0.1 0:00.01 top
1 root 20 0 24188 2120 1300 S 0.0 0.2 0:00.56 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.11 ksoftirqd/0
On Linux, nice values range from -20 (highest priority) to 19 (lowest priority).
To run a program with a certain nice value, you can use the nice
command:
- nice -n 15 command_to_execute
This only works when beginning a new program.
To alter the nice value of a program that is already executing, you use a tool called renice
:
- renice 0 PID_to_prioritize
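Note that only root can assign a negative nice value to raise a process’s priority. For example, using the -n and -p flags to specify the increment and PID explicitly:
- sudo renice -n -5 -p PID_to_prioritize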
Process management is a fundamental part of Linux that is useful in almost every context. Even if you aren’t performing any hands-on system administration, being able to chase down stuck processes and handle them carefully is very helpful.
Next, you may want to learn how to use netstat
and du
to monitor other server resources.
Adding and removing users on a Linux system is one of the most important system administration tasks to familiarize yourself with. When you create a new system, you are often only given access to the root account by default.
While running as the root user gives you complete control over a system and its users, it is also dangerous and possibly destructive. For common system administration tasks, it’s a better idea to add an unprivileged user and carry out those tasks without root privileges. You can also create additional unprivileged accounts for any other users you may have on your system. Each user on a system should have their own separate account.
For tasks that require administrator privileges, there is a tool installed on Ubuntu systems called sudo
. Briefly, sudo
allows you to run a command as another user, including users with administrative privileges. In this guide, you’ll learn how to create user accounts, assign sudo
privileges, and delete users.
To complete this tutorial, you will need access to a server running Ubuntu 20.04. Ensure that you have root access to the server and firewall enabled. To set this up, follow our Initial Server Setup Guide for Ubuntu 20.04.
If you are signed in as the root user, you can create a new user at any time by running the following:
- adduser newuser
If you are signed in as a non-root user who has been given sudo
privileges, you can add a new user with the following command:
- sudo adduser newuser
Either way, you will be required to respond to a series of questions:
Assign and confirm a password for the new user.
Enter any additional information about the new user. These fields are optional; press ENTER to skip any that you don’t wish to utilize.
Finally, confirm that the information you provided is correct by pressing Y to continue.
Your new user is now ready for use and can be logged into with the password that you entered.
If you need your new user to have administrative privileges, continue on to the next section.
If your new user should have the ability to execute commands with root (administrative) privileges, you will need to give the new user access to sudo
. Let’s examine two approaches to this task: first, adding the user to a pre-defined sudo user group, and second, specifying privileges on a per-user basis in sudo
’s configuration.
By default, sudo
on Ubuntu 20.04 systems is configured to extend full privileges to any user in the sudo group.
You can view what groups your new user is in with the groups
command:
- groups newuser
Outputnewuser : newuser
By default, a new user is only in their own group because adduser
creates this in addition to the user profile. A user and its own group share the same name. In order to add the user to a new group, you can use the usermod
command:
- usermod -aG sudo newuser
The -aG
option tells usermod
to add the user to the listed groups.
Please note that the usermod
command itself requires sudo
privileges. This means that you can only add users to the sudo
group if you’re signed in as the root user or as another user that has already been added as a member of the sudo
group. In the latter case, you will have to precede this command with sudo
, as in this example:
- sudo usermod -aG sudo newuser
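You can confirm that the change took effect by running the groups command again; sudo should now appear in the list:
- groups newuser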
As an alternative to putting your user in the sudo group, you can use the visudo
command, which opens a configuration file called /etc/sudoers
in the system’s default editor, and explicitly specify privileges on a per-user basis.
Using visudo
is the only recommended way to make changes to /etc/sudoers
because it locks the file against multiple simultaneous edits and performs a validation check on its contents before overwriting the file. This helps to prevent a situation where you misconfigure sudo
and cannot fix the problem because you have lost sudo
privileges.
If you are currently signed in as root, run the following:
- visudo
If you are signed in as a non-root user with sudo
privileges, run the same command with the sudo
prefix:
- sudo visudo
Traditionally, visudo
opened /etc/sudoers
in the vi
editor, which can be confusing for inexperienced users. By default on new Ubuntu installations, visudo
will use the nano
text editor, which provides a more convenient and accessible text editing experience. Use the arrow keys to move the cursor, and search for the line that reads like the following:
root ALL=(ALL:ALL) ALL
Below this line, add the following highlighted line. Be sure to change newuser
to the name of the user profile that you would like to grant sudo
privileges:
root ALL=(ALL:ALL) ALL
newuser ALL=(ALL:ALL) ALL
Add a new line like this for each user that should be given full sudo
privileges. When you’re finished, save and close the file by pressing CTRL + X
, followed by Y
, and then ENTER
to confirm.
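As a quick sanity check, you can ask visudo to validate the file without opening it by using its -c flag:
- sudo visudo -c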
Now your new user is able to execute commands with administrative privileges.
When signed in as the new user, you can execute commands as your regular user by typing commands as normal:
- some_command
You can execute the same command with administrative privileges by typing sudo
ahead of the command:
- sudo some_command
When doing this, you will be prompted to enter the password of the regular user account you are signed in as.
In the event that you no longer need a user, it’s best to delete the old account.
You can delete the user itself, without deleting any of their files, by running the following command as root:
- deluser newuser
If you are signed in as another non-root user with sudo
privileges, you would use the following:
- sudo deluser newuser
If, instead, you want to delete the user’s home directory when the user is deleted, you can issue the following command as root:
- deluser --remove-home newuser
If you’re running this as a non-root user with sudo
privileges, you would run the same command with the sudo
prefix:
- sudo deluser --remove-home newuser
If you previously configured sudo
privileges for the user you deleted, you may want to remove the relevant line again:
- visudo
Or use the following command if you are a non-root user with sudo
privileges:
- sudo visudo
root ALL=(ALL:ALL) ALL
newuser ALL=(ALL:ALL) ALL # DELETE THIS LINE
This will prevent a new user created with the same name from being accidentally given sudo
privileges.
You should now have a fairly good handle on how to add and remove users from your Ubuntu 20.04 system. Effective user management will allow you to separate users and give them only the access that they are required to do their job.
For more information about how to configure sudo
, check out our guide on how to edit the sudoers file.
FTP, short for File Transfer Protocol, is a network protocol that was once widely used for moving files between a client and server. It has since been replaced by faster, more secure, and more convenient ways of delivering files. Many casual Internet users expect to download directly from their web browser with https
, and command-line users are more likely to use secure protocols such as scp
or SFTP.
FTP is still used to support legacy applications and workflows with very specific needs. If you have a choice of what protocol to use, consider exploring the more modern options. When you do need FTP, however, vsftpd is an excellent choice. Optimized for security, performance, and stability, vsftpd offers strong protection against many security problems found in other FTP servers and is the default for many Linux distributions.
In this tutorial, you’ll configure vsftpd to allow a user to upload files to his or her home directory using FTP with login credentials secured by SSL/TLS.
To follow along with this tutorial you will need an Ubuntu server with a non-root user that has sudo privileges and a firewall configured with ufw, as described in our initial server setup guide for Ubuntu.
Let’s start by updating our package list and installing the vsftpd
daemon:
- sudo apt update
- sudo apt install vsftpd
When the installation is complete, let’s copy the configuration file so we can start with a blank configuration, saving the original as a backup:
- sudo cp /etc/vsftpd.conf /etc/vsftpd.conf.orig
With a backup of the configuration in place, we’re ready to configure the firewall.
Let’s check the firewall status to see if it’s enabled. If it is, we’ll ensure that FTP traffic is permitted so firewall rules don’t block our tests.
Check the firewall status:
- sudo ufw status
In this case, only SSH is allowed through:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
You may have other rules in place or no firewall rules at all. Since only SSH traffic is permitted in this case, we’ll need to add rules for FTP traffic.
Let’s open ports 20
and 21
for FTP, port 990
for when we enable TLS, and ports 40000-50000
for the range of passive ports we plan to set in the configuration file:
- sudo ufw allow 20/tcp
- sudo ufw allow 21/tcp
- sudo ufw allow 990/tcp
- sudo ufw allow 40000:50000/tcp
- sudo ufw status
Our firewall rules should now look like this:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
990/tcp ALLOW Anywhere
20/tcp ALLOW Anywhere
21/tcp ALLOW Anywhere
40000:50000/tcp ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
20/tcp (v6) ALLOW Anywhere (v6)
21/tcp (v6) ALLOW Anywhere (v6)
990/tcp (v6) ALLOW Anywhere (v6)
40000:50000/tcp (v6) ALLOW Anywhere (v6)
With vsftpd
installed and the necessary ports open, let’s move on to creating a dedicated FTP user.
We will create a dedicated FTP user, but you may already have a user in need of FTP access. We’ll take care to preserve an existing user’s access to their data in the instructions that follow. Even so, we recommend that you start with a new user until you’ve configured and tested your setup.
First, add a test user:
- sudo adduser sammy
Assign a password when prompted. Feel free to press ENTER
through the other prompts.
FTP is generally more secure when users are restricted to a specific directory. vsftpd
accomplishes this with chroot
jails. When chroot
is enabled for local users, they are restricted to their home directory by default. However, because of the way vsftpd
secures the directory, it must not be writable by the user. This is fine for a new user who should only connect via FTP, but an existing user may need to write to their home folder if they also have shell access.
In this example, rather than removing write privileges from the home directory, let’s create an ftp
directory to serve as the chroot
and a writable files
directory to hold the actual files.
Create the ftp
folder:
- sudo mkdir /home/sammy/ftp
Set its ownership:
- sudo chown nobody:nogroup /home/sammy/ftp
Remove write permissions:
- sudo chmod a-w /home/sammy/ftp
Verify the permissions:
- sudo ls -la /home/sammy/ftp
Outputtotal 8
4 dr-xr-xr-x 2 nobody nogroup 4096 Aug 24 21:29 .
4 drwxr-xr-x 3 sammy sammy 4096 Aug 24 21:29 ..
Next, let’s create the directory for file uploads and assign ownership to the user:
- sudo mkdir /home/sammy/ftp/files
- sudo chown sammy:sammy /home/sammy/ftp/files
A permissions check on the ftp
directory should return the following:
- sudo ls -la /home/sammy/ftp
Outputtotal 12
dr-xr-xr-x 3 nobody nogroup 4096 Aug 26 14:01 .
drwxr-xr-x 3 sammy sammy 4096 Aug 26 13:59 ..
drwxr-xr-x 2 sammy sammy 4096 Aug 26 14:01 files
Finally, let’s add a test.txt
file to use when we test:
- echo "vsftpd test file" | sudo tee /home/sammy/ftp/files/test.txt
Now that we’ve secured the ftp
directory and allowed the user access to the files
directory, let’s modify our configuration.
We’re planning to allow a single user with a local shell account to connect with FTP. The two key settings for this are already set in vsftpd.conf
. Start by opening the config file to verify that the settings in your configuration match those below:
- sudo nano /etc/vsftpd.conf
. . .
# Allow anonymous FTP? (Disabled by default).
anonymous_enable=NO
#
# Uncomment this to allow local users to log in.
local_enable=YES
. . .
Next, let’s enable the user to upload files by uncommenting the write_enable
setting:
. . .
write_enable=YES
. . .
We’ll also uncomment the chroot
to prevent the FTP-connected user from accessing any files or commands outside the directory tree:
. . .
chroot_local_user=YES
. . .
Let’s also add a user_sub_token
to insert the username in our local_root directory
path so our configuration will work for this user and any additional future users. Add these settings anywhere in the file:
. . .
user_sub_token=$USER
local_root=/home/$USER/ftp
Let’s also limit the range of ports that can be used for passive FTP to make sure enough connections are available:
. . .
pasv_min_port=40000
pasv_max_port=50000
Note: In step 2, we opened the ports that we set here for the passive port range. If you change the values, be sure to update your firewall settings.
To allow FTP access on a case-by-case basis, let’s set the configuration so that users have access only when they are explicitly added to a list, rather than by default:
. . .
userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO
userlist_deny
toggles the logic: When it is set to YES
, users on the list are denied FTP access. When it is set to NO
, only users on the list are allowed access.
When you’re done making the changes, save the file and exit the editor.
Finally, let’s add our user to /etc/vsftpd.userlist
. Use the -a
flag to append to the file:
- echo "sammy" | sudo tee -a /etc/vsftpd.userlist
Check that it was added as you expected:
- cat /etc/vsftpd.userlist
Outputsammy
Restart the daemon to load the configuration changes:
- sudo systemctl restart vsftpd
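You can confirm that the service restarted cleanly by checking its status:
- sudo systemctl status vsftpd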
With the configuration in place, let’s move on to testing FTP access.
We’ve configured the server to allow only the user sammy
to connect via FTP. Let’s make sure that this works as expected.
Anonymous users should fail to connect: We’ve disabled anonymous access. Let’s test that by trying to connect anonymously. If our configuration is set up properly, anonymous users should be denied permission. Open another terminal window and run the following command. Be sure to replace 203.0.113.0
with your server’s public IP address:
- ftp -p 203.0.113.0
OutputConnected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): anonymous
530 Permission denied.
ftp: Login failed.
ftp>
Close the connection:
- bye
Users other than sammy
should fail to connect: Next, let’s try connecting as our sudo user. They should also be denied access, and it should happen before they’re allowed to enter their password:
- ftp -p 203.0.113.0
OutputConnected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): sudo_user
530 Permission denied.
ftp: Login failed.
ftp>
Close the connection:
- bye
The user sammy
should be able to connect, read, and write files: Let’s make sure that our designated user can connect:
- ftp -p 203.0.113.0
OutputConnected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): sammy
331 Please specify the password.
Password: your_user's_password
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>
Let’s change into the files
directory and use the get
command to transfer the test file we created earlier to our local machine:
- cd files
- get test.txt
Output227 Entering Passive Mode (203,0,113,0,169,12).
150 Opening BINARY mode data connection for test.txt (16 bytes).
226 Transfer complete.
16 bytes received in 0.0101 seconds (1588 bytes/s)
ftp>
Next, let’s upload the file with a new name to test write permissions:
- put test.txt upload.txt
Output227 Entering Passive Mode (203,0,113,0,164,71).
150 Ok to send data.
226 Transfer complete.
16 bytes sent in 0.000894 seconds (17897 bytes/s)
Close the connection:
- bye
Now that we’ve tested our configuration, let’s take steps to further secure our server.
Since FTP does not encrypt any data in transit, including user credentials, we’ll enable TLS/SSL to provide that encryption. The first step is to create the SSL certificates for use with vsftpd
.
Let’s use openssl
to create a new certificate and use the -days
flag to make it valid for one year. In the same command, we’ll add a private 2048-bit RSA key. By setting both the -keyout
and -out
flags to the same value, the private key and the certificate will be located in the same file:
- sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem
You’ll be prompted to provide address information for your certificate. Substitute your own information for the highlighted values below:
OutputGenerating a 2048 bit RSA private key
............................................................................+++
...........+++
writing new private key to '/etc/ssl/private/vsftpd.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:NY
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:DigitalOcean
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []: your_server_ip
Email Address []:
For more detailed information about these certificate flags, see OpenSSL Essentials: Working with SSL Certificates, Private Keys and CSRs.
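If you’d like to double-check the details of the certificate you just generated, you can inspect it with openssl:
- sudo openssl x509 -in /etc/ssl/private/vsftpd.pem -noout -subject -dates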
Once you’ve created the certificates, open the vsftpd
configuration file again:
- sudo nano /etc/vsftpd.conf
Toward the bottom of the file, you will see two lines that begin with rsa_
. Comment them out so they look like this:
. . .
# rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
# rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
. . .
Below them, add the following lines that point to the certificate and private key we just created:
. . .
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
. . .
After that, we will force the use of SSL, which will prevent clients that can’t deal with TLS from connecting. This is necessary to ensure that all traffic is encrypted, but it may force your FTP user to change clients. Change ssl_enable
to YES
:
. . .
ssl_enable=YES
. . .
After that, add the following lines to explicitly deny anonymous connections over SSL and to require SSL for both data transfer and logins:
. . .
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
. . .
After this, configure the server to use TLS, the preferred successor to SSL, by adding the following lines:
. . .
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
. . .
Finally, we will add two more options. First, we will not require SSL reuse because it can break many FTP clients. We will require “high” encryption cipher suites, which currently means key lengths equal to or greater than 128 bits:
. . .
require_ssl_reuse=NO
ssl_ciphers=HIGH
. . .
The finished file section should look like this:
# This option specifies the location of the RSA certificate to use for SSL
# encrypted connections.
#rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
require_ssl_reuse=NO
ssl_ciphers=HIGH
When you’re done, save and close the file.
Restart the server for the changes to take effect:
- sudo systemctl restart vsftpd
At this point, we will no longer be able to connect with an insecure command-line client. If we tried, we’d see something like:
Outputftp -p 203.0.113.0
Connected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): sammy
530 Non-anonymous sessions must use encryption.
ftp: Login failed.
421 Service not available, remote server has closed connection
ftp>
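If you’d like to verify the TLS handshake from the command line before configuring a graphical client, and your version of openssl supports -starttls ftp, you can use its built-in client (substitute your server’s IP address):
- openssl s_client -connect 203.0.113.0:21 -starttls ftp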
Next, let’s verify that we can connect using a client that supports TLS.
Most modern FTP clients can be configured to use TLS encryption. We will demonstrate how to connect with FileZilla because of its cross-platform support. Consult the documentation for other clients.
When you first open FileZilla, find the Site Manager icon just above the word Host, the left-most icon on the top row. Click it:
A new window will open. Click the New Site button in the bottom right corner:
Under My Sites a new icon with the words New site will appear. You can name it now or return later and use the Rename button.
Fill out the Host field with the server’s name or IP address. Under the Encryption drop-down menu, select Require explicit FTP over TLS.
For Logon Type, select Ask for password. Fill in your FTP user in the User field:
Click Connect at the bottom of the interface. You will be asked for the user’s password:
Click OK to connect. You should now be connected with your server with TLS/SSL encryption.
Upon success, you will be presented with the server’s certificate for review.
When you’ve accepted the certificate, double-click the files
folder and drag upload.txt
to the left to confirm that you’re able to download files:
When you’ve done that, right-click on the local copy, rename it to upload-tls.txt
and drag it back to the server to confirm that you can upload files:
You’ve now confirmed that you can securely and successfully transfer files with SSL/TLS enabled.
If you’re unable to use TLS because of client requirements, you can gain some security by disabling the FTP user’s ability to log in any other way. One relatively straightforward way to prevent it is by creating a custom shell. This will not provide any encryption, but it will limit the access of a compromised account to files accessible by FTP.
First, open a file called ftponly
in the bin
directory:
- sudo nano /bin/ftponly
Add a message telling the user why they are unable to log in:
#!/bin/sh
echo "This account is limited to FTP access only."
Save the file and exit your editor.
Change the permissions to make the file executable:
- sudo chmod a+x /bin/ftponly
Open the list of valid shells:
- sudo nano /etc/shells
At the bottom add:
. . .
/bin/ftponly
Update the user’s shell with the following command:
- sudo usermod sammy -s /bin/ftponly
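You can verify the change by looking at the user’s entry in /etc/passwd, which should now end with /bin/ftponly:
- grep '^sammy:' /etc/passwd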
Now try logging into your server as sammy
:
- ssh sammy@your_server_ip
You should see something like:
OutputThis account is limited to FTP access only.
Connection to 203.0.113.0 closed.
This confirms that the user can no longer ssh
to the server and is limited to FTP access only.
In this tutorial we covered setting up FTP for users with a local account. If you need to use an external authentication source, you might want to look into vsftpd
’s support of virtual users. This offers a rich set of options through the use of PAM, the Pluggable Authentication Modules, and is a good choice if you manage users in another system such as LDAP or Kerberos.
When you first start using a fresh Linux server, adding and removing users is often one of the first things you’ll need to do. In this guide, you will learn how to create user accounts, assign sudo
privileges, and delete users on a CentOS 7 server.
To complete this tutorial, you will need access to a CentOS 7 server as a non-root user with sudo privileges. If you are logged in as root instead, you can drop the sudo portion of all the following commands. For guidance, please see our tutorial Initial Server Setup with CentOS 7.
Throughout this tutorial we will be working with the user sammy. Please substitute with the username of your choice.
You can add a new user by typing:
- sudo adduser sammy
Next, you’ll need to give your user a password so that they can log in. To do so, use the passwd
command:
- sudo passwd sammy
You will be prompted to type in the password twice to confirm it. Now your new user is set up and ready for use! You can now log in as that user, using the password that you set up.
Note: if your SSH server disallows password-based authentication, you will not yet be able to connect with your new username. Details on setting up key-based SSH authentication for the new user can be found in step 4 of Initial Server Setup with CentOS 7.
If your new user should have the ability to execute commands with root (administrative) privileges, you will need to give the new user access to sudo
.
We can do this by adding the user to the wheel group (which gives sudo
access to all of its members by default).
To do this, use the usermod
command:
- sudo usermod -aG wheel sammy
Now your new user is able to execute commands with administrative privileges. To do so, simply type sudo
ahead of the command that you want to execute as an administrator:
- sudo some_command
You will be prompted to enter the password of your user account (not the root password). Once the correct password has been submitted, the command you entered will be executed with root privileges.
To see which users are part of the wheel group (and thus have sudo
), you can use the lid
command. lid
is normally used to show which groups a user belongs to, but with the -g
flag, you can reverse it and show which users belong in a group:
- sudo lid -g wheel
Output sammy(uid=1001)
The output will show you the usernames and UIDs that are associated with the group. This is a good way of confirming that your previous commands were successful, and that the user has the privileges that they need.
If you have a user account that you no longer need, it’s best to delete the old account.
If you want to delete the user without deleting any of their files, type:
- sudo userdel sammy
If you want to delete the user’s home directory along with the user account itself, type:
- sudo userdel -r sammy
With either command, the user will automatically be removed from any groups that they were added to, including the wheel group if they were given sudo
privileges. If you later add another user with the same name, they will have to be added to the wheel group again to gain sudo
access.
You should now have a good grasp on how to add and remove users from your CentOS 7 server. Effective user management will allow you to separate users and give them only the access that is needed for them to do their job. You can now move on to configuring your CentOS 7 server for whatever software you need, such as a LAMP or LEMP web stack.
For more information about how to configure sudo
, check out our guide on how to edit the sudoers file.
Homebrew is a package manager that was originally developed for macOS to let you install free and open-source software using your terminal. Linux systems all make use of their own built-in package managers, such as apt
on Debian, Ubuntu, and derivatives, and dnf
on Red Hat, Fedora, and Rocky Linux, to install programs and tools from trusted and maintained package repositories.
However, it is not always practical to install all software via apt
or dnf
. For example, some programming languages prefer to use their own package managers, such as Python’s pip
, or Node.js’ npm
to install additional scripts or libraries that are localized to your own user account.
More recently, Homebrew has added native support for Linux. While Homebrew was originally created to install Linux tools on macOS, many Homebrew packages are better maintained or more convenient to use than the equivalent packages available in Linux repositories. Also, since Homebrew packages are designed to only provide per-user functionality, Homebrew can be used alongside your system package manager without creating conflicts.
In this tutorial you’ll install and use Homebrew in a Linux environment. You’ll install system tools and configure your shell environment to use Homebrew from the command line interface.
A Linux server or desktop environment, and a non-root user with sudo privileges. You can learn more about how to set up a user with these privileges in our Initial Server Setup with Ubuntu 20.04 guide.
The version control tool git
installed on your machine. You can refer to How To Install Git on Ubuntu 20.04 on Linux specifically, or follow the official Git documentation on another platform.
Before installing Homebrew, you will need a working compiler so that Homebrew can build packages. While most packages are pre-compiled, some package dependencies will need to be built directly on your machine. Most Linux distributions allow you to install a compiler with a single command, but do not provide one by default.
On Ubuntu, you can install a package called build-essential
that will provide all the packages needed for a modern, well-supported compiler environment. Install the package with apt
:
- sudo apt install build-essential
On Rocky Linux, CentOS, or other RedHat derivatives, you can install a group of packages called Development Tools to provide the same compiler functionality. Install the packages with dnf
:
- sudo dnf groups mark install "Development Tools"
- sudo dnf groupinstall "Development Tools"
You can verify that a compiler is available by checking for the existence of the make
command on your system. In order to do that, use the which
command:
- which make
Output/usr/bin/make
Now that you have a working compiler, you can proceed to install Homebrew.
To install Homebrew, you’ll download an installation script and then execute the script.
First, download the script to your local machine:
- curl -fsSL -o install.sh https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
The command uses curl
to download the Homebrew installation script from Homebrew’s Git repository on GitHub.
Let’s walk through the flags that are associated with the curl
command:
The -f or --fail flag tells the shell to give no HTML document output on server errors.
The -s or --silent flag mutes curl so that it does not show the progress meter, and combined with the -S or --show-error flag it ensures that curl shows an error message if it fails.
The -L or --location flag tells curl to handle redirects. If the server reports that the requested page has moved to a different location, it’ll automatically execute the request again using the new location.
The -o switch specifies a local filename for the file. Rather than displaying the contents to the screen, the -o switch saves the contents into the file you specify.
Before running a script you’ve downloaded from the Internet, you should review its contents so you know exactly what it will do. Use the less command to review the installation script:
- less install.sh
Once you’re comfortable with the contents of the script, execute the script with the bash
command:
- /bin/bash install.sh
The installation script will explain what it will do and will prompt you to confirm that you want to do it. This lets you know exactly what Homebrew is going to do to your system before you let it proceed. It also ensures you have the prerequisites in place before it continues.
You’ll be prompted to enter your password during the process. If you do not have sudo
privileges, you can press Ctrl+D
instead to bypass this prompt, and Homebrew will be installed with more restrictive permissions. You can review this option in Homebrew’s documentation.
Press the letter y
for “yes” whenever you are prompted to confirm the installation.
When complete, Homebrew’s installer output will also include Next steps
in order to configure your shell environment for working with Homebrew packages. This configuration ensures that Homebrew’s tools will be used instead of the tools provided by the system package manager. Copy and paste the commands from your own output, which will reflect the correct configuration paths on your system. The example below is from bash
:
Output==> Next steps:
- Run these two commands in your terminal to add Homebrew to your PATH:
echo 'eval "$(/home/sammy/.linuxbrew/bin/brew shellenv)"' >> /home/sammy/.profile
eval "$(/home/sammy/.linuxbrew/bin/brew shellenv)"
Once you run these two commands, the changes you have made to your shell’s PATH
environment variable will take effect. They’ll be set correctly when you log in again in the future, as the configuration file for your shell is run automatically when you open a new session.
Now verify that Homebrew is set up correctly. Run this command:
- brew doctor
If no updates are required at this time, you’ll receive the following output:
OutputYour system is ready to brew.
Otherwise, you may get a warning to run another command such as brew update
to ensure that your installation of Homebrew is up to date. Follow any on-screen instructions to finish configuring your environment before moving on.
Now that Homebrew is installed, use it to download a package. The tree
command lets you see a graphical directory tree and is available via Homebrew.
Install tree
with the brew install
command:
- brew install tree
Homebrew will update its list of packages and then download and install the tree
command:
Output. . .
==> Downloading https://ghcr.io/v2/homebrew/core/tree/manifests/2.0.2
######################################################################## 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/tree/blobs/sha256:e1d7569f6930271d694e739e93eb026aac1e8b386
==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sha256:e1d7569f6930271d694e739
######################################################################## 100.0%
==> Pouring tree--2.0.2.x86_64_linux.bottle.tar.gz
🍺 /home/linuxbrew/.linuxbrew/Cellar/tree/2.0.2: 8 files, 162.4KB
==> Running `brew cleanup tree`...
Homebrew installs files to /home/linuxbrew/.linuxbrew/bin/
by default, so they won’t interfere with future Linux updates. Verify that tree
is installed by displaying the command’s location with the which
command:
- which tree
The output shows that tree
is located in /home/linuxbrew/.linuxbrew/bin/
:
Output/home/linuxbrew/.linuxbrew/bin/tree
Run the tree
command to see the version:
- tree --version
The version prints to the screen, indicating it’s installed:
Outputtree v2.0.2 (c) 1996 - 2022 by Steve Baker, Thomas Moore, Francesc Rocher, Florian Sesser, Kyosuke Tokoro
Occasionally, you’ll want to upgrade an existing package. Use the brew upgrade
command, followed by the package name:
- brew upgrade tree
You can run brew upgrade
with no additional arguments to upgrade all programs and packages Homebrew manages.
When you install a new version, Homebrew keeps the older version around. After a while, you might want to reclaim disk space by removing these older copies. Run brew cleanup
to remove all old versions of your Homebrew-managed software.
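If you’d like to preview what would be removed before deleting anything, brew cleanup accepts a dry-run flag:
- brew cleanup -n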
To remove a package you’re no longer using, use brew uninstall
. To uninstall the tree
command, run this command:
- brew uninstall tree
The output shows that the package was removed:
OutputUninstalling /home/linuxbrew/.linuxbrew/Cellar/tree/2.0.2... (8 files, 162.4KB)
If you no longer need Homebrew, you can use its uninstall script.
Download the uninstall script with curl
:
- curl -fsSL -o uninstall.sh https://raw.githubusercontent.com/Homebrew/install/master/uninstall.sh
As always, review the contents of the script with the less
command to verify the script’s contents:
- less uninstall.sh
Once you’ve verified the script, execute the script with the --help
flag to see the various options you can use:
- bash uninstall.sh --help
The options display on the screen:
OutputHomebrew Uninstaller
Usage: uninstall.sh [options]
-p, --path=PATH Sets Homebrew prefix. Defaults to /usr/local.
--skip-cache-and-logs
Skips removal of HOMEBREW_CACHE and HOMEBREW_LOGS.
-f, --force Uninstall without prompting.
-q, --quiet Suppress all output.
-d, --dry-run Simulate uninstall but don't remove anything.
-h, --help Display this message.
Use the -d
flag to see what the script will do:
- bash uninstall.sh -d
The script will list everything it will delete:
OutputWarning: This script would remove:
/home/linuxbrew/.linuxbrew/Caskroom/
/home/linuxbrew/.linuxbrew/Cellar/
/home/linuxbrew/.linuxbrew/Homebrew/
/home/linuxbrew/.linuxbrew/Homebrew/.dockerignore
/home/linuxbrew/.linuxbrew/Homebrew/.editorconfig
. . .
When you’re ready to remove everything, run the script without any flags:
- bash uninstall.sh
This removes Homebrew and any programs that you’ve installed with it.
In this tutorial you installed and used Homebrew in a Linux environment. You can now use Homebrew to install command line tools, programming languages, and other utilities that you’ll need for software development.
Homebrew has many packages you can install. Visit the official list to search for your favorite programs.
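You can also search from the command line. For example, to find packages whose names match a keyword:
- brew search tree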
🧨 fwrite(): write of 2268 bytes failed with errno=28 no space left on device
I don’t know what to do, my developer is literally in the military right now for 6 weeks. And I don’t code, I don’t know anything about coding, and I’m freaking out because I have almost a thousand clients who are angry and want their services working.
I know some of them will start to do a chargeback, especially new ones, and a lot of chargebacks will make PayPal, Stripe, and other processors suspect you of something. I googled this error and it seems like it has happened to others, and they said something about resyncing things. I don’t even know a single thing about coding.
My site is coded in… PHP with some SQL,(that’s all I know)
CAN ANYONE HELP ME FIX THIS? I am willing to pay you $$/h to fix this. PLEASE, GUYS, HELP ME. I’m literally crying right now, I’m extremely overwhelmed.
Quotas are used to limit the amount of disk space a user or group can use on a filesystem. Without such limits, a user could fill up the machine’s disk and cause problems for other users and services.
In this tutorial you will install command line tools to create and inspect disk quotas, then set a quota for an example user.
To set and check quotas, you first need to install the quota command line tools using apt
. First update the package list, then install the package:
- sudo apt update
- sudo apt install quota
You can verify that the tools are installed by running the quota
command and asking for its version information:
- quota --version
OutputQuota utilities version 4.05.
. . .
It’s fine if your output shows a slightly different version number.
Next make sure you have the appropriate kernel modules for monitoring quotas.
If you are on a cloud-based virtual server, your default Ubuntu Linux installation may not have the kernel modules needed to support quota management. To check, you will use find
to search for the quota_v1
and quota_v2
modules in the /lib/modules/...
directory:
- find /lib/modules/ -type f -name '*quota_v*.ko*'
Output/lib/modules/5.4.0-99-generic/kernel/fs/quota/quota_v2.ko
/lib/modules/5.4.0-99-generic/kernel/fs/quota/quota_v1.ko
Make note of your kernel version – highlighted in the file paths above – as you will need it in a later step. It will likely be different, but as long as the two modules are listed, you’re all set and can skip the rest of this step.
If you get no output from the above command, install the linux-image-extra-virtual
package:
- sudo apt install linux-image-extra-virtual
This will provide the kernel modules necessary for implementing quotas. Run the previous find
command again to verify that the installation was successful.
Next you will update your filesystem mount
options to enable quotas on your root filesystem.
To activate quotas on a particular filesystem, you need to mount it with a few quota-related options specified. You can do this by updating the filesystem’s entry in the /etc/fstab
configuration file. Open that file with nano or your preferred text editor:
- sudo nano /etc/fstab
This file’s contents will be similar to the following:
LABEL=cloudimg-rootfs / ext4 defaults 0 0
LABEL=UEFI /boot/efi vfat defaults 0 0
This fstab
file is from a virtual server. A desktop or laptop computer will probably have a slightly different fstab
, but in most cases you’ll have a /
or root filesystem that represents all of your disk space.
The highlighted line indicates the name of the mounted device, the location where it is mounted, the file system type, and the mount options used. The first zero indicates no backups will be made, and the second zero indicates no error-checking will be done on boot.
Update the line pointing to the root filesystem by replacing the defaults
option with the following highlighted options:
LABEL=cloudimg-rootfs / ext4 usrquota,grpquota 0 0
. . .
This change will allow us to enable both user- (usrquota
) and group-based (grpquota
) quotas on the filesystem. If you only need one or the other, you may leave out the unused option. If your fstab
line already had some options listed instead of defaults
, you should add the new options to the end of whatever is already there, being sure to separate all options with a comma and no spaces.
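For instance, if your line had previously used the errors=remount-ro option (a common default on some images; this exact line is illustrative, not from your system), it would become:
LABEL=cloudimg-rootfs / ext4 errors=remount-ro,usrquota,grpquota 0 0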
Remount the filesystem to make the new options take effect:
- sudo mount -o remount /
Here, the -o
flag is used to pass the remount
option.
Note: Be certain there are no spaces between the options listed in your /etc/fstab
file. If you put a space after the ,
comma, you will see an error like the following:
Outputmount: /etc/fstab: parse error
If you see this message after running the previous mount
command, reopen the fstab
file, correct any errors, and repeat the mount
command before continuing.
You can verify that the new options were used to mount the filesystem by looking at the /proc/mounts
file. Here, use grep
to show only the root filesystem entry in that file:
- cat /proc/mounts | grep ' / '
Output/dev/vda1 / ext4 rw,relatime,quota,usrquota,grpquota 0 0
Note the two specified options. Now that you have installed your tools and updated your filesystem options, you can turn on the quota system.
Before finally turning on the quota system, you need to manually run the quotacheck
command once:
- sudo quotacheck -ugm /
This command creates the files /aquota.user
and /aquota.group
. These files contain information about the limits and usage of the filesystem, and they need to exist before you turn on quota monitoring. The quotacheck
parameters used are:
- u: specifies that a user-based quota file should be created
- g: indicates that a group-based quota file should be created
- m: disables remounting the filesystem as read-only while performing the initial tallying of quotas. Remounting the filesystem as read-only will give more accurate results in case a user is actively saving files during the process, but it is not necessary during this initial setup.
If you don’t need to enable user- or group-based quotas, you can leave off the corresponding quotacheck option.
You can verify that the appropriate files were created by listing the root directory:
- ls /
Outputaquota.group bin dev home initrd.img.old lib64 media opt root sbin srv tmp var vmlinuz.old
aquota.user boot etc initrd.img lib lost+found mnt proc run snap sys usr vmlinuz
If you didn’t include the u
or g
options in the quotacheck
command, the corresponding file will be missing.
Next, you will need to load the quota modules into the Linux kernel. You could reboot your server to accomplish this, or you can load the modules manually, replacing the highlighted kernel version with the version you found in Step 2:
- sudo modprobe quota_v1 -S 5.4.0-99-generic
- sudo modprobe quota_v2 -S 5.4.0-99-generic
Now you’re ready to turn on the quota system:
- sudo quotaon -v /
quotaon Output/dev/vda1 [/]: group quotas turned on
/dev/vda1 [/]: user quotas turned on
Your server is now monitoring and enforcing quotas, but we’ve not set any yet! Next you’ll set a disk quota for a single user.
There are a few ways you can set quotas for users or groups. Here, you’ll go over how to set quotas with both the edquota
and setquota
commands.
Using edquota to Set a User Quota
Use the edquota command to edit quotas. Let’s edit your example sammy user’s quota:
- sudo edquota -u sammy
The -u
option specifies that this is a user
quota you’ll be editing. If you’d like to edit a group’s quota instead, use the -g
option in its place.
This will open up a file in your default text editor, similar to how crontab -e
opens a temporary file for you to edit. The file will look similar to this:
Disk quotas for user sammy (uid 1000):
Filesystem blocks soft hard inodes soft hard
/dev/vda1 40 0 0 13 0 0
This lists the username and uid
, the filesystems that have quotas enabled on them, and the block- and inode-based usage and limits. Setting an inode-based quota would limit how many files and directories a user can create, regardless of the amount of disk space they use. Most people will want block-based quotas, which specifically limit disk space usage. This is what you will configure.
Note: The concept of a block is poorly specified and can change depending on many factors, including which command line tool is reporting them. In the context of setting quotas on Ubuntu, it’s fairly safe to assume that 1 block equals 1 kilobyte of disk space.
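Under that assumption, converting a limit into blocks is simple arithmetic: a 100MB soft limit, for example, corresponds to 100 × 1024 = 102400 blocks.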
In the above listing, your user sammy is using 40 blocks, or 40KB of space on the /dev/vda1
drive. The soft
and hard
limits are both disabled with a 0
value.
Each type of quota allows you to set both a soft limit and a hard limit. When a user exceeds the soft limit, they are over quota, but they are not immediately prevented from consuming more space or inodes. Instead, some leeway is given: the user has – by default – seven days to get their disk use back under the soft limit. At the end of the seven-day grace period, if the user is still over the soft limit, the soft limit will be treated as a hard limit. A hard limit is less forgiving: all creation of new blocks or inodes is immediately halted when you hit the specified hard limit. This behaves as if the disk is completely out of space: writes will fail, temporary files will fail to be created, and the user will start to see warnings and errors while performing common tasks.
Let’s update your sammy user to have a block quota with a 100MB soft limit, and a 110MB hard limit:
Disk quotas for user sammy (uid 1000):
Filesystem blocks soft hard inodes soft hard
/dev/vda1 40 100M 110M 13 0 0
Save and close the file. To check the new quota you can use the quota
command:
- sudo quota -vs sammy
OutputDisk quotas for user sammy (uid 1000):
Filesystem space quota limit grace files quota limit grace
/dev/vda1 40K 100M 110M 13 0 0
The command outputs your current quota status, and shows that your quota is 100M
while your limit is 110M
. This corresponds to the soft and hard limits respectively.
Note: If you want your users to be able to check their own quotas without having sudo
access, you’ll need to give them permission to read the quota files you created in Step 4. One way to do this would be to make a users
group, make those files readable by the users
group, and then make sure all your users are also placed in the group.
To learn more about Linux permissions, including user and group ownership, please read An Introduction to Linux Permissions.
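One minimal sketch of that approach (assuming the default users group that ships with Ubuntu; adjust the group name to taste):
- sudo chgrp users /aquota.user /aquota.group
- sudo chmod 640 /aquota.user /aquota.group
- sudo usermod -aG users sammy
This gives group members read access to the quota files while keeping them writable only by root.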
Using setquota to Set a User Quota
Unlike edquota, setquota will update your user’s quota information in a single command, without an interactive editing step. You will specify the username, the soft and hard limits for both block- and inode-based quotas, and finally the filesystem to apply the quota to:
- sudo setquota -u sammy 200M 220M 0 0 /
The above command will double sammy’s block-based quota limits to 200 megabytes and 220 megabytes. The 0 0
for inode-based soft and hard limits indicates that they remain unset. This is required even if you’re not setting any inode-based quotas.
Once again, use the quota
command to check your work:
- sudo quota -vs sammy
OutputDisk quotas for user sammy (uid 1000):
Filesystem space quota limit grace files quota limit grace
/dev/vda1 40K 200M 220M 13 0 0
Now that you have set some quotas, let’s find out how to generate a quota report.
To generate a report on current quota usage for all users on a particular filesystem, use the repquota
command:
- sudo repquota -s /
Output*** Report for user quotas on device /dev/vda1
Block grace time: 7days; Inode grace time: 7days
Space limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 1696M 0K 0K 75018 0 0
daemon -- 64K 0K 0K 4 0 0
man -- 1048K 0K 0K 81 0 0
nobody -- 7664K 0K 0K 3 0 0
syslog -- 2376K 0K 0K 12 0 0
sammy -- 40K 200M 220M 13 0 0
In this instance you’re generating a report for the /
root filesystem. The -s
flag tells repquota
to use human-readable numbers when possible. There are a few system users listed, which probably have no quotas set by default. Your user sammy is listed at the bottom, with the amounts used and soft and hard limits.
Also note the Block grace time: 7days
callout, and the grace
column. If your user was over the soft limit, the grace
column would show how much time they had left to get back under the limit.
You can configure the length of time that a user is allowed to float above the soft limit. Use the setquota
command to do so:
- sudo setquota -t 864000 864000 /
The above command sets both the block and inode grace times to 864000 seconds, or 10 days. This setting applies to all users, and both values must be provided even if you don’t use both types of quota (block vs. inode).
Note that the values must be specified in seconds.
Run repquota
again to check that the changes took effect:
- sudo repquota -s /
OutputBlock grace time: 10days; Inode grace time: 10days
. . .
The changes should be reflected immediately in the repquota
output.
In this tutorial you installed the quota
command line tools, verified that your Linux kernel can handle monitoring quotas, set up a block-based quota for one user, and generated a report on your filesystem’s quota usage.
The following are some common errors you may see when setting up and manipulating filesystem quotas.
quotaon Outputquotaon: cannot find //aquota.group on /dev/vda1 [/]
quotaon: cannot find //aquota.user on /dev/vda1 [/]
This is an error you might see if you tried to turn on quotas (using quotaon
) before running the initial quotacheck
command. The quotacheck
command creates the aquota
or quota
files needed to turn on the quota system. See Step 4 for more information.
quotaon Outputquotaon: using //aquota.group on /dev/vda1 [/]: No such process
quotaon: Quota format not supported in kernel.
quotaon: using //aquota.user on /dev/vda1 [/]: No such process
quotaon: Quota format not supported in kernel.
This quotaon
error is telling us that your kernel does not support quotas, or at least doesn’t support the correct version (there is both a quota_v1
and quota_v2
version). This means the kernel modules you need are not installed or are not being loaded properly. On Ubuntu Server the most likely cause of this is using a pared-down installation image on a cloud-based virtual server.
If this is the case, it can be fixed by installing the linux-image-extra-virtual
package with apt
. See Step 2 for more details.
If this error persists after installation, review Step 4. Make sure you either properly used the modprobe
commands, or have rebooted your server if that is a viable option for you.
quota Outputquota: Cannot open quotafile //aquota.user: Permission denied
quota: Cannot open quotafile //aquota.user: Permission denied
quota: Cannot open quotafile //quota.user: No such file or directory
This is the error you’ll see if you run quota
and your current user does not have permission to read the quota files for your filesystem. You (or your system administrator) will need to adjust the file permissions appropriately, or use sudo
when running commands that require access to the quota file.
To learn more about Linux permissions, including user and group ownership, please read An Introduction to Linux Permissions.
Rsync, which stands for remote sync, is a remote and local file synchronization tool. It uses an algorithm to minimize the amount of data copied by only moving the portions of files that have changed.
In this tutorial, we’ll define Rsync, review the syntax when using rsync
, explain how to use Rsync to sync with a remote system, and describe other options available to you.
In order to practice using rsync
to sync files between a local and remote system, you will need two machines to act as your local computer and your remote machine, respectively. These two machines could be virtual private servers, virtual machines, containers, or personal computers as long as they’ve been properly configured.
If you plan to follow this guide using servers, it would be prudent to set them up with administrative users and to configure a firewall on each of them. To set up these servers, follow our Initial Server Setup Guide.
Regardless of what types of machines you use to follow this tutorial, you will need to have created SSH keys on both of them. Then, copy each server’s public key to the other server’s authorized_keys
file as outlined in Step 2 of that guide.
This guide was validated on machines running Ubuntu 20.04, although it should generally work with any computers running a Linux-based operating system that have rsync
installed.
Rsync is a very flexible network-enabled syncing tool. Due to its ubiquity on Linux and Unix-like systems and its popularity as a tool for system scripts, it’s included on most Linux distributions by default.
The syntax for rsync
operates similar to other tools, such as ssh
, scp
, and cp
.
First, change into your home directory by running the following command:
- cd ~
Then create a test directory:
- mkdir dir1
Create another test directory:
- mkdir dir2
Now add some test files:
- touch dir1/file{1..100}
There’s now a directory called dir1
with 100 empty files in it. Confirm by listing out the files:
- ls dir1
Outputfile1 file18 file27 file36 file45 file54 file63 file72 file81 file90
file10 file19 file28 file37 file46 file55 file64 file73 file82 file91
file100 file2 file29 file38 file47 file56 file65 file74 file83 file92
file11 file20 file3 file39 file48 file57 file66 file75 file84 file93
file12 file21 file30 file4 file49 file58 file67 file76 file85 file94
file13 file22 file31 file40 file5 file59 file68 file77 file86 file95
file14 file23 file32 file41 file50 file6 file69 file78 file87 file96
file15 file24 file33 file42 file51 file60 file7 file79 file88 file97
file16 file25 file34 file43 file52 file61 file70 file8 file89 file98
file17 file26 file35 file44 file53 file62 file71 file80 file9 file99
You also have an empty directory called dir2
. To sync the contents of dir1
to dir2
on the same system, you will run rsync
and use the -r
flag, which stands for “recursive” and is necessary for directory syncing:
- rsync -r dir1/ dir2
Another option is to use the -a
flag, which is a combination flag and stands for “archive”. This flag syncs recursively and preserves symbolic links, special and device files, modification times, groups, owners, and permissions. It’s more commonly used than -r
and is the recommended flag to use. Run the same command as the previous example, this time using the -a
flag:
- rsync -a dir1/ dir2
Please note that there is a trailing slash (/
) at the end of the first argument in the syntax of the previous two commands, highlighted here:
- rsync -a dir1/ dir2
This trailing slash signifies the contents of dir1
. Without the trailing slash, dir1
, including the directory, would be placed within dir2
. The outcome would create a hierarchy like the following:
~/dir2/dir1/[files]
Another tip is to double-check your arguments before executing an rsync
command. Rsync provides a method for doing this by passing the -n
or --dry-run
options. The -v
flag, which means “verbose”, is also necessary to get the appropriate output. You’ll combine the a
, n
, and v
flags in the following command:
- rsync -anv dir1/ dir2
Outputsending incremental file list
./
file1
file10
file100
file11
file12
file13
file14
file15
file16
file17
file18
. . .
Now compare that output to the one you receive when removing the trailing slash, as in the following:
- rsync -anv dir1 dir2
Outputsending incremental file list
dir1/
dir1/file1
dir1/file10
dir1/file100
dir1/file11
dir1/file12
dir1/file13
dir1/file14
dir1/file15
dir1/file16
dir1/file17
dir1/file18
. . .
This output now demonstrates that the directory itself was transferred, rather than only the files within the directory.
To use rsync
to sync with a remote system, you only need SSH access configured between your local and remote machines, as well as rsync
installed on both systems. Once you have SSH access verified between the two machines, you can sync the dir1
folder from the previous section to a remote machine by using the following syntax. Please note in this case, that you want to transfer the actual directory, so you’ll omit the trailing slash:
- rsync -a ~/dir1 username@remote_host:destination_directory
This process is called a push operation because it “pushes” a directory from the local system to a remote system. The opposite operation is pull, and is used to sync a remote directory to the local system. If the dir1
directory were on the remote system instead of your local system, the syntax would be the following:
- rsync -a username@remote_host:/home/username/dir1 place_to_sync_on_local_machine
Like cp
and similar tools, the source is always the first argument, and the destination is always the second.
Rsync provides many options for altering the default behavior of the utility, such as the flag options you learned about in the previous section.
If you’re transferring files that have not already been compressed, like text files, you can reduce the network transfer by adding compression with the -z
option:
- rsync -az source destination
The -P
flag is also helpful. It combines the flags --progress
and --partial
. This first flag provides a progress bar for the transfers, and the second flag allows you to resume interrupted transfers:
- rsync -azP source destination
Outputsending incremental file list
created directory destination
source/
source/file1
0 100% 0.00kB/s 0:00:00 (xfr#1, to-chk=99/101)
source/file10
0 100% 0.00kB/s 0:00:00 (xfr#2, to-chk=98/101)
source/file100
0 100% 0.00kB/s 0:00:00 (xfr#3, to-chk=97/101)
source/file11
0 100% 0.00kB/s 0:00:00 (xfr#4, to-chk=96/101)
source/file12
0 100% 0.00kB/s 0:00:00 (xfr#5, to-chk=95/101)
. . .
If you run the command again, you’ll receive a shortened output since no changes have been made. This illustrates Rsync’s ability to use modification times to determine if changes have been made:
- rsync -azP source destination
Outputsending incremental file list
sent 818 bytes received 12 bytes 1660.00 bytes/sec
total size is 0 speedup is 0.00
Say you were to update the modification time on some of the files with a command like the following:
- touch source/file{1..10}
Then, if you were to run rsync
with -azP
again, you’ll notice in the output how Rsync intelligently re-copies only the changed files:
- rsync -azP source destination
Outputsending incremental file list
file1
0 100% 0.00kB/s 0:00:00 (xfer#1, to-check=99/101)
file10
0 100% 0.00kB/s 0:00:00 (xfer#2, to-check=98/101)
file2
0 100% 0.00kB/s 0:00:00 (xfer#3, to-check=87/101)
file3
0 100% 0.00kB/s 0:00:00 (xfer#4, to-check=76/101)
. . .
In order to keep two directories truly in sync, it’s necessary to delete files from the destination directory if they are removed from the source. By default, rsync
does not delete anything from the destination directory.
You can change this behavior with the --delete
option. Before using this option, you can use -n
, the --dry-run
option, to perform a test to prevent unwanted data loss:
- rsync -an --delete source destination
If you prefer to exclude certain files or directories located inside a directory you are syncing, you can do so by specifying them in a comma-separated list following the --exclude=
option:
- rsync -a --exclude=pattern_to_exclude source destination
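For example, to sync dir1 to a remote host while skipping any log files (the *.log pattern is purely illustrative):
- rsync -a --exclude='*.log' ~/dir1 username@remote_host:destination_directory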
If you have a specified pattern to exclude, you can override that exclusion for files that match a different pattern by using the --include=
option:
- rsync -a --exclude=pattern_to_exclude --include=pattern_to_include source destination
Finally, Rsync’s --backup
option can be used to store backups of important files. It’s used in conjunction with the --backup-dir
option, which specifies the directory where the backup files should be stored:
- rsync -a --delete --backup --backup-dir=/path/to/backups /path/to/source destination
Rsync can streamline file transfers over networked connections and add robustness to local directory syncing. The flexibility of Rsync makes it a good option for many different file-level operations.
A mastery of Rsync allows you to design complex backup operations and obtain fine-grained control over how and what is transferred.
Adding and removing users on a Linux system is one of the most important system administration tasks to familiarize yourself with. When you create a new system, you are often only given access to the root account by default.
While running as the root user gives you complete control over a system and its users, it is also dangerous and possibly destructive. For common system administration tasks, it’s a better idea to add an unprivileged user and carry out those tasks without root privileges. You can also create additional unprivileged accounts for any other users you may have on your system. Each user on a system should have their own separate account.
For tasks that require administrator privileges, there is a tool installed on Ubuntu systems called sudo
. Briefly, sudo
allows you to run a command as another user, including users with administrative privileges. In this guide, you’ll learn how to create user accounts, assign sudo
privileges, and delete users.
To complete this tutorial, you will need access to a server running Ubuntu 18.04. Ensure that you have root access to the server and that a firewall is enabled. To set this up, follow our Initial Server Setup Guide for Ubuntu 18.04.
If you are signed in as the root user, you can create a new user at any time by running the following:
- adduser newuser
If you are signed in as a non-root user who has been given sudo
privileges, you can add a new user with the following command:
- sudo adduser newuser
Either way, you will be required to respond to a series of questions:
- Assign and confirm a password for the new user.
- Enter any additional information about the new user. This is entirely optional; press ENTER if you don’t wish to utilize these fields.
- Finally, confirm that the information you provided was correct. Enter Y to continue.
Your new user is now ready for use and can be logged into with the password that you entered.
If you need your new user to have administrative privileges, continue on to the next section.
If your new user should have the ability to execute commands with root (administrative) privileges, you will need to give the new user access to sudo
. Let’s examine two approaches to this task: first, adding the user to a pre-defined sudo user group, and second, specifying privileges on a per-user basis in sudo
’s configuration.
By default, sudo
on Ubuntu 18.04 systems is configured to extend full privileges to any user in the sudo group.
You can view what groups your new user is in with the groups
command:
- groups newuser
Outputnewuser : newuser
By default, a new user is only in their own group because adduser
creates this in addition to the user profile. A user and its own group share the same name. In order to add the user to a new group, you can use the usermod
command:
- usermod -aG sudo newuser
The -aG
option tells usermod
to add the user to the listed groups.
Please note that the usermod
command itself requires sudo
privileges. This means that you can only add users to the sudo
group if you’re signed in as the root user or as another user that has already been added as a member of the sudo
group. In the latter case, you will have to precede this command with sudo
, as in this example:
- sudo usermod -aG sudo newuser
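To confirm that the new privileges work, you can switch to the new user and run a harmless command with sudo; if everything is configured correctly, the following should print root (a quick sanity check, not part of the original setup steps):
- su - newuser
- sudo whoami
Outputroot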
Specifying User Privileges in /etc/sudoers
As an alternative to putting your user in the sudo group, you can use the visudo
command, which opens a configuration file called /etc/sudoers
in the system’s default editor, and explicitly specify privileges on a per-user basis.
Using visudo
is the only recommended way to make changes to /etc/sudoers
because it locks the file against multiple simultaneous edits and performs a validation check on its contents before overwriting the file. This helps to prevent a situation where you misconfigure sudo
and cannot fix the problem because you have lost sudo
privileges.
If you are currently signed in as root, run the following:
- visudo
If you are signed in as a non-root user with sudo
privileges, run the same command with the sudo
prefix:
- sudo visudo
Traditionally, visudo
opened /etc/sudoers
in the vi
editor, which can be confusing for inexperienced users. By default on new Ubuntu installations, visudo
will use the nano
text editor, which provides a more convenient and accessible text editing experience. Use the arrow keys to move the cursor, and search for the line that reads like the following:
root ALL=(ALL:ALL) ALL
Below this line, add the following highlighted line. Be sure to change newuser
to the name of the user profile that you would like to grant sudo
privileges:
root ALL=(ALL:ALL) ALL
newuser ALL=(ALL:ALL) ALL
Add a new line like this for each user that should be given full sudo
privileges. When you’re finished, save and close the file by pressing CTRL + X
, followed by Y
, and then ENTER
to confirm.
Now your new user is able to execute commands with administrative privileges.
When signed in as the new user, you can execute commands as your regular user by typing commands as normal:
- some_command
You can execute the same command with administrative privileges by typing sudo
ahead of the command:
- sudo some_command
When doing this, you will be prompted to enter the password of the regular user account you are signed in as.
In the event that you no longer need a user, it’s best to delete the old account.
You can delete the user itself, without deleting any of their files, by running the following command as root:
- deluser newuser
If you are signed in as another non-root user with sudo
privileges, you would use the following:
- sudo deluser newuser
If, instead, you want to delete the user’s home directory when the user is deleted, you can issue the following command as root:
- deluser --remove-home newuser
If you’re running this as a non-root user with sudo
privileges, you would run the same command with the sudo
prefix:
- sudo deluser --remove-home newuser
If you previously configured sudo
privileges for the user you deleted, you may want to remove the relevant line from the sudoers file:
- visudo
Or use the following command if you are a non-root user with sudo
privileges:
- sudo visudo
root ALL=(ALL:ALL) ALL
newuser ALL=(ALL:ALL) ALL # DELETE THIS LINE
This will prevent a new user created with the same name from being accidentally given sudo
privileges.
You should now have a fairly good handle on how to add and remove users from your Ubuntu 18.04 system. Effective user management will allow you to separate users and give them only the access that they are required to do their job.
For more information about how to configure sudo
, check out our guide on how to edit the sudoers file.
Increasingly, Linux distributions are adopting the systemd
init system. This powerful suite of software can manage many aspects of your server, from services to mounted devices and system states.
In systemd
, a unit
refers to any resource that the system knows how to operate on and manage. This is the primary object that the systemd
tools know how to deal with. These resources are defined using configuration files called unit files.
In this guide, we will introduce you to the different units that systemd
can handle. We will also be covering some of the many directives that can be used in unit files in order to shape the way these resources are handled on your system.
Units are the objects that systemd
knows how to manage. These are basically a standardized representation of system resources that can be managed by the suite of daemons and manipulated by the provided utilities.
Units can be said to be similar to services or jobs in other init systems. However, a unit has a much broader definition, as these can be used to abstract services, network resources, devices, filesystem mounts, and isolated resource pools.
Ideas that in other init systems may be handled with one unified service definition can be broken out into component units according to their focus. This organizes by function and allows you to easily enable, disable, or extend functionality without modifying the core behavior of a unit.
Some features that units are able to implement easily are:
- bus-based activation: units can be activated on the bus interface provided by D-Bus. A unit can be started when an associated bus is published.
- path-based activation: a unit can be started based on activity at a filesystem path, monitored using inotify.
- device-based activation: units can be started when associated hardware becomes available, by leveraging udev events.
- implicit dependency mapping: most of the dependency tree for units can be built by systemd itself. You can still add dependency and ordering information, but most of the heavy lifting is taken care of for you.
- security features: units can restrict access to resources such as a private /tmp and network access.
There are many other advantages that systemd
units have over other init systems’ work items, but this should give you an idea of the power that can be leveraged using native configuration directives.
The files that define how systemd
will handle a unit can be found in many different locations, each of which have different priorities and implications.
The system’s copy of unit files are generally kept in the /lib/systemd/system
directory. When software installs unit files on the system, this is the location where they are placed by default.
Unit files stored here are able to be started and stopped on-demand during a session. This will be the generic, vanilla unit file, often written by the upstream project’s maintainers that should work on any system that deploys systemd
in its standard implementation. You should not edit files in this directory. Instead you should override the file, if necessary, using another unit file location which will supersede the file in this location.
If you wish to modify the way that a unit functions, the best location to do so is within the /etc/systemd/system
directory. Unit files found in this directory location take precedence over any of the other locations on the filesystem. If you need to modify the system’s copy of a unit file, putting a replacement in this directory is the safest and most flexible way to do this.
If you wish to override only specific directives from the system’s unit file, you can actually provide unit file snippets within a subdirectory. These will append or modify the directives of the system’s copy, allowing you to specify only the options you want to change.
The correct way to do this is to create a directory named after the unit file with .d
appended on the end. So for a unit called example.service
, a subdirectory called example.service.d
could be created. Within this directory a file ending with .conf
can be used to override or extend the attributes of the system’s unit file.
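For example (a sketch with an assumed unit name and directive), a snippet at /etc/systemd/system/example.service.d/override.conf could change just the restart behavior of example.service while leaving the system’s copy untouched:
[Service]
Restart=always
After adding or editing snippets, run sudo systemctl daemon-reload so that systemd picks up the changes.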
There is also a location for run-time unit definitions at /run/systemd/system
. Unit files found in this directory have a priority landing between those in /etc/systemd/system
and /lib/systemd/system
. Files in this location are given less weight than the former location, but more weight than the latter.
The systemd
process itself uses this location for dynamically created unit files created at runtime. This directory can be used to change the system’s unit behavior for the duration of the session. All changes made in this directory will be lost when the server is rebooted.
Systemd
categorizes units according to the type of resource they describe. The easiest way to determine the type of a unit is with its type suffix, which is appended to the end of the resource name. The following list describes the types of units available to systemd
:
- .service: A service unit describes how to manage a service or application on the server. This will include how to start or stop the service, under which circumstances it should be automatically started, and the dependency and ordering information for related software.
- .socket: A socket unit file describes a network or IPC socket, or a FIFO buffer that systemd uses for socket-based activation. These always have an associated .service file that will be started when activity is seen on the socket that this unit defines.
- .device: A unit that describes a device that has been designated as needing systemd management by udev or the sysfs filesystem. Not all devices will have .device files. Some scenarios where .device units may be necessary are for ordering, mounting, and accessing the devices.
- .mount: This unit defines a mountpoint on the system to be managed by systemd. These are named after the mount path, with slashes changed to dashes. Entries within /etc/fstab can have units created automatically.
- .automount: An .automount unit configures a mountpoint that will be automatically mounted. These must be named after the mount point they refer to and must have a matching .mount unit to define the specifics of the mount.
- .swap: This unit describes swap space on the system. The name of these units must reflect the device or file path of the space.
- .target: A target unit is used to provide synchronization points for other units when booting up or changing states. They also can be used to bring the system to a new state. Other units specify their relation to targets to become tied to the target’s operations.
- .path: This unit defines a path that can be used for path-based activation. By default, a .service unit of the same base name will be started when the path reaches the specified state. This uses inotify to monitor the path for changes.
- .timer: A .timer unit defines a timer that will be managed by systemd, similar to a cron job for delayed or scheduled activation. A matching unit will be started when the timer is reached.
- .snapshot: A .snapshot unit is created automatically by the systemctl snapshot command. It allows you to reconstruct the current state of the system after making changes. Snapshots do not survive across sessions and are used to roll back temporary states.
- .slice: A .slice unit is associated with Linux Control Group nodes, allowing resources to be restricted or assigned to any processes associated with the slice. The name reflects its hierarchical position within the cgroup tree. Units are placed in certain slices by default depending on their type.
- .scope: Scope units are created automatically by systemd from information received from its bus interfaces. These are used to manage sets of system processes that are created externally.
As you can see, there are many different units that systemd
knows how to manage. Many of the unit types work together to add functionality. For instance, some units are used to trigger other units and provide activation functionality.
We will mainly be focusing on .service
units due to their utility and the consistency with which administrators need to manage these units.
The internal structure of unit files is organized into sections. Sections are denoted by a pair of square brackets “[
” and “]
” with the section name enclosed within. Each section extends until the beginning of the subsequent section or until the end of the file.
Section names are well-defined and case-sensitive. So, the section [Unit]
will not be interpreted correctly if it is spelled like [UNIT]
. If you need to add non-standard sections to be parsed by applications other than systemd
, you can add a X-
prefix to the section name.
Within these sections, unit behavior and metadata are defined through the use of simple directives using a key-value format with assignment indicated by an equal sign, like this:
[Section]
Directive1=value
Directive2=value
. . .
In the event of an override file (such as those contained in a unit.type.d
directory), directives can be reset by assigning them to an empty string. For example, the system’s copy of a unit file may contain a directive set to a value like this:
Directive1=default_value
The default_value
can be eliminated in an override file by referencing Directive1
without a value, like this:
Directive1=
In general, systemd
allows for easy and flexible configuration. For example, multiple boolean expressions are accepted (1
, yes
, on
, and true
for affirmative and 0
, no
, off
, and false
for the opposite answer). Times can be intelligently parsed, with seconds assumed for unit-less values and combining multiple formats accomplished internally.
The first section found in most unit files is the [Unit]
section. This is generally used for defining metadata for the unit and configuring the relationship of the unit to other units.
Although section order does not matter to systemd
when parsing the file, this section is often placed at the top because it provides an overview of the unit. Some common directives that you will find in the [Unit]
section are:
- Description=: This directive can be used to describe the name and basic functionality of the unit. It is returned by various systemd tools, so it is good to set this to something short, specific, and informative.
- Documentation=: This directive provides a location for a list of URIs for documentation. These can be either internally available man pages or web accessible URLs. The systemctl status command will expose this information, allowing for easy discoverability.
- Requires=: This directive lists any units upon which this unit essentially depends. If the current unit is activated, the units listed here must successfully activate as well, else this unit will fail. These units are started in parallel with the current unit by default.
- Wants=: This directive is similar to Requires=, but less strict. Systemd will attempt to start any units listed here when this unit is activated. If these units are not found or fail to start, the current unit will continue to function. This is the recommended way to configure most dependency relationships. Again, this implies a parallel activation unless modified by other directives.
- BindsTo=: This directive is similar to Requires=, but also causes the current unit to stop when the associated unit terminates.
- Before=: The units listed in this directive will not be started until the current unit is marked as started if they are activated at the same time. This does not imply a dependency relationship and must be used in conjunction with one of the above directives if this is desired.
- After=: The units listed in this directive will be started before starting the current unit. This does not imply a dependency relationship and one must be established through the above directives if this is required.
- Conflicts=: This can be used to list units that cannot be run at the same time as the current unit. Starting a unit with this relationship will cause the other units to be stopped.
- Condition...=: There are a number of directives that start with Condition which allow the administrator to test certain conditions prior to starting the unit. This can be used to provide a generic unit file that will only be run when on appropriate systems. If the condition is not met, the unit is gracefully skipped.
- Assert...=: Similar to the directives that start with Condition, these directives check for different aspects of the running environment to decide whether the unit should activate. However, unlike the Condition directives, a negative result causes a failure with this directive.
Using these directives and a handful of others, general information about the unit and its relationship to other units and the operating system can be established.
At the opposite end of the unit file, the last section is often the [Install]
section. This section is optional and is used to define the behavior of a unit if it is enabled or disabled. Enabling a unit marks it to be automatically started at boot. In essence, this is accomplished by latching the unit in question onto another unit that is somewhere in the line of units to be started at boot.
Because of this, only units that can be enabled will have this section. The directives within dictate what should happen when the unit is enabled:
- WantedBy=: The WantedBy= directive is the most common way to specify how a unit should be enabled. This directive allows you to specify a dependency relationship in a similar way to how the Wants= directive does in the [Unit] section. The difference is that this directive is included in the ancillary unit, allowing the primary unit listed to remain relatively clean. When a unit with this directive is enabled, a directory will be created within /etc/systemd/system named after the specified unit with .wants appended to the end. Within this, a symbolic link to the current unit will be created, creating the dependency. For instance, if the current unit has WantedBy=multi-user.target, a directory called multi-user.target.wants will be created within /etc/systemd/system (if not already available) and a symbolic link to the current unit will be placed within. Disabling this unit removes the link and removes the dependency relationship.
- RequiredBy=: This directive is very similar to the WantedBy= directive, but instead specifies a required dependency that will cause the activation to fail if not met. When enabled, a unit with this directive will create a directory ending with .requires.
- Alias=: This directive allows the unit to be enabled under another name as well. Among other uses, this allows multiple providers of a function to be available, so that related units can look for any provider of the common aliased name.
- Also=: This directive allows units to be enabled or disabled as a set. Supporting units that should always be available when this unit is active can be listed here. They will be managed as a group for installation tasks.
- DefaultInstance=: For template units (covered later) which can produce unit instances with unpredictable names, this can be used as a fallback value for the name if an appropriate name is not provided.
Sandwiched between the previous two sections, you will likely find unit type-specific sections. Most unit types offer directives that only apply to their specific type. These are available within sections named after their type. We will cover those briefly here.
The device
, target
, snapshot
, and scope
unit types have no unit-specific directives, and thus have no associated sections for their type.
The [Service]
section is used to provide configuration that is only applicable for services.
One of the basic things that should be specified within the [Service]
section is the Type=
of the service. This categorizes services by their process and daemonizing behavior. This is important because it tells systemd
how to correctly manage the service and find out its state.
The Type=
directive can be one of the following:
- simple: The main process of the service is specified in the start line. This is the default if the Type= and Busname= directives are not set, but the ExecStart= is set. Any communication should be handled outside of the unit through a second unit of the appropriate type (like through a .socket unit if this unit must communicate using sockets).
- forking: This type is used when the service forks a child process and the parent process exits almost immediately. This tells systemd that the process is still running even though the parent exited.
- oneshot: This type indicates that the process will be short-lived and that systemd should wait for the process to exit before continuing on with other units. This is the default when Type= and ExecStart= are not set. It is used for one-off tasks.
- dbus: This indicates that the unit will take a name on the D-Bus bus. When this happens, systemd will continue to process the next unit.
- notify: This indicates that the service will issue a notification when it has finished starting up. The systemd process will wait for this to happen before proceeding to other units.
Some additional directives may be needed when using certain service types. For instance:
- RemainAfterExit=: This directive is commonly used with the oneshot type. It indicates that the service should be considered active even after the process exits.
- PIDFile=: If the service type is marked as “forking”, this directive is used to set the path of the file that should contain the process ID number of the main child that should be monitored.
- BusName=: This directive should be set to the D-Bus bus name that the service will attempt to acquire when using the “dbus” service type.
- NotifyAccess=: This specifies access to the socket that should be used to listen for notifications when the “notify” service type is selected. This can be “none”, “main”, or “all”. The default, “none”, ignores all status messages. The “main” option will listen to messages from the main process, and the “all” option will cause all members of the service’s control group to be processed.
So far, we have discussed some prerequisite information, but we haven’t actually defined how to manage our services. The directives to do this are:
- ExecStart=: This specifies the full path and the arguments of the command to be executed to start the process. This may only be specified once (except for “oneshot” services). If the path to the command is preceded by a dash “-” character, non-zero exit statuses will be accepted without marking the unit activation as failed.
- ExecStartPre=: This can be used to provide additional commands that should be executed before the main process is started. This can be used multiple times. Again, commands must specify a full path and they can be preceded by “-” to indicate that the failure of the command will be tolerated.
- ExecStartPost=: This has the same qualities as ExecStartPre= except that it specifies commands that will be run after the main process is started.
- ExecReload=: This optional directive indicates the command necessary to reload the configuration of the service if available.
- ExecStop=: This indicates the command needed to stop the service. If this is not given, the process will be killed immediately when the service is stopped.
- ExecStopPost=: This can be used to specify commands to execute following the stop command.
- RestartSec=: If automatically restarting the service is enabled, this specifies the amount of time to wait before attempting to restart the service.
- Restart=: This indicates the circumstances under which systemd will attempt to automatically restart the service. This can be set to values like “always”, “on-success”, “on-failure”, “on-abnormal”, “on-abort”, or “on-watchdog”. These will trigger a restart according to the way that the service was stopped.
- TimeoutSec=: This configures the amount of time that systemd will wait when starting or stopping the service before marking it as failed or forcefully killing it. You can set separate timeouts with TimeoutStartSec= and TimeoutStopSec= as well.
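Tying several of these directives together, a minimal, hypothetical service unit might look like the following (the unit name and binary path are illustrative assumptions, not taken from this guide):
[Unit]
Description=Example daemon
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/example-daemon
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target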
configurations because many services implement socket-based activation to provide better parallelization and flexibility. Each socket unit must have a matching service unit that will be activated when the socket receives activity.
By breaking socket control out of the service itself, sockets can be initialized early and the associated services can often be started in parallel. By default, the socket name will attempt to start the service of the same name upon receiving a connection. When the service is initialized, the socket will be passed to it, allowing it to begin processing any buffered requests.
To specify the actual socket, these directives are common:
- ListenStream=: This defines an address for a stream socket which supports sequential, reliable communication. Services that use TCP should use this socket type.
- ListenDatagram=: This defines an address for a datagram socket which supports fast, unreliable communication packets. Services that use UDP should set this socket type.
- ListenSequentialPacket=: This defines an address for sequential, reliable communication with max length datagrams that preserves message boundaries. This is found most often for Unix sockets.
- ListenFIFO: Along with the other listening types, you can also specify a FIFO buffer instead of a socket.
There are more types of listening directives, but the ones above are the most common.
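As an illustrative sketch (the unit name and address are assumptions), a basic example.socket that would activate a matching example.service on incoming TCP connections might contain:
[Socket]
ListenStream=0.0.0.0:8080

[Install]
WantedBy=sockets.target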
Other characteristics of the sockets can be controlled through additional directives:
- Accept=: This determines whether an additional instance of the service will be started for each connection. If set to false (the default), one instance will handle all connections.
- SocketUser=: With a Unix socket, specifies the owner of the socket. This will be the root user if left unset.
- SocketGroup=: With a Unix socket, specifies the group owner of the socket. This will be the root group if neither this nor the above are set. If only the SocketUser= is set, systemd will try to find a matching group.
- SocketMode=: For Unix sockets or FIFO buffers, this sets the permissions on the created entity.
- Service=: If the service name does not match the .socket name, the service can be specified with this directive.
Mount units allow for mount point management from within systemd
. Mount points are named after the directory that they control, with a translation algorithm applied.
For example, the leading slash is removed, all other slashes are translated into dashes “-”, and all dashes and unprintable characters are replaced with C-style escape codes. The result of this translation is used as the mount unit name. Mount units will have an implicit dependency on other mounts above it in the hierarchy.
Mount units are often translated directly from /etc/fstab
files during the boot process. For the unit definitions automatically created and those that you wish to define in a unit file, the following directives are useful:
- What=: The absolute path to the resource that needs to be mounted.
- Where=: The absolute path of the mount point where the resource should be mounted. This should be the same as the unit file name, except using conventional filesystem notation.
- Type=: The filesystem type of the mount.
- Options=: Any mount options that need to be applied. This is a comma-separated list.
- SloppyOptions=: A boolean that determines whether the mount will fail if there is an unrecognized mount option.
- DirectoryMode=: If parent directories need to be created for the mount point, this determines the permission mode of these directories.
- TimeoutSec=: Configures the amount of time the system will wait until the mount operation is marked as failed.
This unit allows an associated .mount
unit to be automatically mounted at boot. As with the .mount
unit, these units must be named after the translated mount point’s path.
The [Automount]
section is pretty simple, with only the following two options allowed:
- Where=: The absolute path of the automount point on the filesystem. This will match the filename except that it uses conventional path notation instead of the translation.
- DirectoryMode=: If the automount point or any parent directories need to be created, this will determine the permissions settings of those path components.
Swap units are used to configure swap space on the system. The units must be named after the swap file or the swap device, using the same filesystem translation that was discussed above.
Like the mount options, the swap units can be automatically created from /etc/fstab
entries, or can be configured through a dedicated unit file.
The [Swap]
section of a unit file can contain the following directives for configuration:
- What=: The absolute path to the location of the swap space, whether this is a file or a device.
- Priority=: This takes an integer that indicates the priority of the swap being configured.
- Options=: Any options that are typically set in the /etc/fstab file can be set with this directive instead. A comma-separated list is used.
- TimeoutSec=: The amount of time that systemd waits for the swap to be activated before marking the operation as a failure.
A path unit defines a filesystem path that systemd can monitor for changes. Another unit must exist that will be activated when certain activity is detected at the path location. Path activity is determined through inotify
events.
The [Path]
section of a unit file can contain the following directives:
- PathExists=: This directive is used to check whether the path in question exists. If it does, the associated unit is activated.
- PathExistsGlob=: This is the same as the above, but supports file glob expressions for determining path existence.
- PathChanged=: This watches the path location for changes. The associated unit is activated if a change is detected when the watched file is closed.
- PathModified=: This watches for changes like the above directive, but it activates on file writes as well as when the file is closed.
- DirectoryNotEmpty=: This directive allows systemd to activate the associated unit when the directory is no longer empty.
- Unit=: This specifies the unit to activate when the path conditions specified above are met. If this is omitted, systemd will look for a .service file that shares the same base unit name as this unit.
- MakeDirectory=: This determines if systemd will create the directory structure of the path in question prior to watching.
- DirectoryMode=: If the above is enabled, this will set the permission mode of any path components that must be created.
Timer units are used to schedule tasks to operate at a specific time or after a certain delay. This unit type replaces or supplements some of the functionality of the cron
and at
daemons. An associated unit must be provided which will be activated when the timer is reached.
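For instance (a hypothetical sketch; the directives used here are described in the list below), an example.timer that starts a matching example.service five minutes after boot might contain:
[Timer]
OnBootSec=5min

[Install]
WantedBy=timers.target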
The [Timer]
section of a unit file can contain some of the following directives:
- OnActiveSec=: This directive allows the associated unit to be activated relative to the .timer unit’s activation.
- OnBootSec=: This directive is used to specify the amount of time after the system is booted when the associated unit should be activated.
- OnStartupSec=: This directive is similar to the above timer, but in relation to when the systemd process itself was started.
- OnUnitActiveSec=: This sets a timer according to when the associated unit was last activated.
- OnUnitInactiveSec=: This sets the timer in relation to when the associated unit was last marked as inactive.
- OnCalendar=: This allows you to activate the associated unit by specifying an absolute time instead of a time relative to an event.
- AccuracySec=: This is used to set the level of accuracy with which the timer should be adhered to. By default, the associated unit will be activated within one minute of the timer being reached. The value of this directive will determine the upper bounds on the window in which systemd schedules the activation to occur.
- Unit=: This directive is used to specify the unit that should be activated when the timer elapses. If unset, systemd will look for a .service unit with a name that matches this unit.
- Persistent=: If this is set, systemd will trigger the associated unit when the timer becomes active if it would have been triggered during the period in which the timer was inactive.
- WakeSystem=: Setting this directive allows you to wake a system from suspend if the timer is reached when in that state.
The [Slice]
section of a unit file actually does not have any .slice
unit-specific configuration. Instead, it can contain some resource management directives that are actually available to a number of the units listed above.
Some common directives in the [Slice]
section, which may also be used in other units can be found in the systemd.resource-control
man page. These are valid in the following unit-specific sections:
[Slice]
[Scope]
[Service]
[Socket]
[Mount]
[Swap]
We mentioned earlier in this guide the idea of template unit files being used to create multiple instances of units. In this section, we will go over this concept in more detail.
Template unit files are, in most ways, no different than regular unit files. However, these provide flexibility in configuring units by allowing certain parts of the file to utilize dynamic information that will be available at runtime.
Template unit files can be identified because they contain an @
symbol after the base unit name and before the unit type suffix. A template unit file name may look like this:
example@.service
When an instance is created from a template, an instance identifier is placed between the @
symbol and the period signifying the start of the unit type. For example, the above template unit file could be used to create an instance unit that looks like this:
example@instance1.service
An instance file is usually created as a symbolic link to the template file, with the link name including the instance identifier. In this way, multiple links with unique identifiers can point back to a single template file. When managing an instance unit, systemd
will look for a file with the exact instance name you specify on the command line to use. If it cannot find one, it will look for an associated template file.
The power of template unit files is mainly seen through their ability to dynamically substitute appropriate information within the unit definition according to the operating environment. This is done by setting the directives in the template file as normal, but replacing certain values or parts of values with variable specifiers.
The following are some of the more common specifiers that will be replaced with the relevant information when an instance unit is interpreted:

%n
: Anywhere this appears in a template file, the full resulting unit name will be inserted.

%N
: This is the same as the above, but with any escaping, such as that present in file path patterns, reversed.

%p
: This references the unit name prefix. This is the portion of the unit name that comes before the @ symbol.

%P
: This is the same as above, but with any escaping reversed.

%i
: This references the instance name, which is the identifier following the @ in the instance unit. This is one of the most commonly used specifiers because it is guaranteed to be dynamic. Its use encourages configuration-significant identifiers. For example, the port that the service will run on can be used as the instance identifier, and the template can use this specifier to set up the port specification.

%I
: This specifier is the same as the above, but with any escaping reversed.

%f
: This will be replaced with the unescaped instance name or the prefix name, prepended with a /.

%c
: This will indicate the control group of the unit, with the standard parent hierarchy of /sys/fs/cgroup/systemd/ removed.

%u
: The name of the user configured to run the unit.

%U
: The same as above, but as a numeric UID instead of a name.

%H
: The host name of the system that is running the unit.

%%
: This is used to insert a literal percent sign.

By using the above identifiers in a template file, systemd will fill in the correct values when interpreting the template to create an instance unit.
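To make this concrete, here is a hedged sketch of a template unit that uses %i as a port number; the unit name and the use of ncat are illustrative assumptions, not part of systemd itself. Saved as /etc/systemd/system/echo@.service:

[Unit]
Description=Echo service on port %i

[Service]
# The instance identifier supplies the listening port.
ExecStart=/usr/bin/ncat -l -k -p %i --exec /bin/cat

Starting echo@3000.service would launch a listener on port 3000, while echo@3001.service would launch another on port 3001, both from the same template file.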
When working with systemd
, understanding units and unit files can make administration easier. Unlike many other init systems, you do not have to know a scripting language to interpret the init files used to boot services or the system. The unit files use a fairly straightforward declarative syntax that allows you to see at a glance the purpose and effects of a unit upon activation.
Breaking functionality such as activation logic into separate units not only allows the internal systemd
processes to optimize parallel initialization, it also keeps the configuration rather simple and allows you to modify and restart some units without tearing down and rebuilding their associated connections. Leveraging these abilities can give you more flexibility and power during administration.
FTP, the File Transfer Protocol, was a popular, unencrypted method of transferring files between two remote systems. As of 2022, it has been deprecated by most modern software due to its lack of security, and it survives mostly in legacy applications.
SFTP, which stands for Secure File Transfer Protocol, is a separate protocol built into SSH that can implement FTP commands over a secure connection. Typically, it can act as a drop-in replacement in any context where an FTP server is still needed.
In almost all cases, SFTP is preferable to FTP because of its underlying security features and ability to piggy-back on an SSH connection. FTP is an insecure protocol that should only be used in limited cases or on networks you trust.
Although SFTP is integrated into many graphical tools, this guide will demonstrate how to use it through its interactive command line interface.
By default, SFTP uses the SSH protocol to authenticate and establish a secure connection. Because of this, the same authentication methods are available that are present in SSH.
Although you can authenticate with passwords by default, we recommend you create SSH keys and transfer your public key to any system that you need to access. This is much more secure and can save you time in the long run.
Please see this guide to set up SSH keys in order to access your server if you have not done so already.
If you can connect to the machine using SSH, then you have met all of the requirements necessary to use SFTP to manage files. Test SSH access with the following command:
- ssh sammy@your_server_ip_or_remote_hostname
If that works, exit back out by typing:
- exit
Now we can establish an SFTP session by issuing the following command:
- sftp sammy@your_server_ip_or_remote_hostname
You will connect to the remote system and your prompt will change to an SFTP prompt.
If you are working on a custom SSH port (not the default port 22), then you can open an SFTP session as follows:
- sftp -oPort=custom_port sammy@your_server_ip_or_remote_hostname
This will connect you to the remote system by way of your specified port.
The most useful command to learn first is the help command. This gives you access to a summary of the other SFTP commands. You can call it by typing either of these in the prompt:
- help
or
- ?
This will display a list of the available commands:
OutputAvailable commands:
bye Quit sftp
cd path Change remote directory to 'path'
chgrp grp path Change group of file 'path' to 'grp'
chmod mode path Change permissions of file 'path' to 'mode'
chown own path Change owner of file 'path' to 'own'
df [-hi] [path] Display statistics for current directory or
filesystem containing 'path'
exit Quit sftp
get [-Ppr] remote [local] Download file
help Display this help text
lcd path Change local directory to 'path'
. . .
We will explore some of the commands you see in the following sections.
We can navigate through the remote system’s file hierarchy using a number of commands that function similarly to their shell counterparts.
First, let’s orient ourselves by finding out which directory we are in currently on the remote system. Just like in a typical shell session, we can type the following to get the current directory:
- pwd
OutputRemote working directory: /home/demouser
We can view the contents of the current directory of the remote system with another familiar command:
- ls
OutputSummary.txt info.html temp.txt testDirectory
Note that the commands available within the SFTP interface are not a 1:1 match for typical shell syntax and are not as feature-rich. However, they do implement some of the more important optional flags, such as adding -la
to ls
to view more file metadata and permissions:
- ls -la
Outputdrwxr-xr-x 5 demouser demouser 4096 Aug 13 15:11 .
drwxr-xr-x 3 root root 4096 Aug 13 15:02 ..
-rw------- 1 demouser demouser 5 Aug 13 15:04 .bash_history
-rw-r--r-- 1 demouser demouser 220 Aug 13 15:02 .bash_logout
-rw-r--r-- 1 demouser demouser 3486 Aug 13 15:02 .bashrc
drwx------ 2 demouser demouser 4096 Aug 13 15:04 .cache
-rw-r--r-- 1 demouser demouser 675 Aug 13 15:02 .profile
. . .
To get to another directory, we can issue this command:
- cd testDirectory
We can now traverse the remote file system, but what if we need to access our local file system? We can direct commands towards the local file system by preceding them with an l
for local.
All of the commands discussed so far have local equivalents. We can print the local working directory:
- lpwd
OutputLocal working directory: /Users/demouser
We can list the contents of the current directory on the local machine:
- lls
OutputDesktop local.txt test.html
Documents analysis.rtf zebra.html
We can also change the directory we want to interact with on the local system:
- lcd Desktop
If we want to download files from our remote host, we can do so using the get
command:
- get remoteFile
OutputFetching /home/demouser/remoteFile to remoteFile
/home/demouser/remoteFile 100% 37KB 36.8KB/s 00:01
As you can see, by default, the get
command downloads a remote file to a file with the same name on the local file system.
We can copy the remote file to a different name by specifying the name afterwards:
- get remoteFile localFile
The get
command also accepts some option flags. For instance, we can copy a directory and all of its contents by specifying the recursive option:
- get -r someDirectory
We can tell SFTP to maintain the appropriate permissions and access times by using the -P
or -p
flag:
- get -Pr someDirectory
Transferring files to the remote system works the same way, but with a put
command:
- put localFile
OutputUploading localFile to /home/demouser/localFile
localFile 100% 7607 7.4KB/s 00:00
The same flags that work with get
apply to put
. So to copy an entire local directory, you can run put -r
:
- put -r localDirectory
One familiar tool that is useful when downloading and uploading files is the df
command, which works similarly to the command line version. Using this, you can check that you have enough space to complete the transfers you are interested in:
- df -h
Output Size Used Avail (root) %Capacity
19.9GB 1016MB 17.9GB 18.9GB 4%
Please note that there is no local variation of this command, but we can get around that by issuing the !
command.
The !
command drops us into a local shell, where we can run any command available on our local system. We can check disk usage by typing:
- !
and then
- df -h
OutputFilesystem Size Used Avail Capacity Mounted on
/dev/disk0s2 595Gi 52Gi 544Gi 9% /
devfs 181Ki 181Ki 0Bi 100% /dev
map -hosts 0Bi 0Bi 0Bi 100% /net
map auto_home 0Bi 0Bi 0Bi 100% /home
Any other local command will work as expected. To return to your SFTP session, type:
- exit
You should now see the SFTP prompt return.
SFTP allows you to perform some kinds of filesystem housekeeping. For instance, you can change the owner of a file on the remote system with:
- chown userID file
Notice how, unlike the system chown command, the SFTP command does not accept usernames, but instead uses UIDs. Unfortunately, there is no built-in way to look up the appropriate UID from within the SFTP interface.
As a workaround, you can read from the /etc/passwd
file, which associates usernames with UIDs in most Linux environments:
- get /etc/passwd
- !less passwd
Outputroot:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/bin/sh
man:x:6:12:man:/var/cache/man:/bin/sh
. . .
Notice how instead of giving the !
command by itself, we’ve used it as a prefix for a local shell command. This works to run any command available on our local machine and could have been used with the local df
command earlier.
The UID will be in the third column of the file, as delineated by colon characters.
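If you would rather not page through the whole file, one shortcut is to run a local awk command against the downloaded copy; the username sammy here is just a placeholder:
- !awk -F: '$1 == "sammy" {print $3}' passwd
This prints only the UID field for the matching user.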
Similarly, we can change the group owner of a file with:
- chgrp groupID file
Again, there is no built-in way to get a listing of the remote system’s groups. We can work around it with the following command:
- get /etc/group
- !less group
Outputroot:x:0:
daemon:x:1:
bin:x:2:
sys:x:3:
adm:x:4:
tty:x:5:
disk:x:6:
lp:x:7:
. . .
The third column holds the ID of the group associated with the name in the first column. This is what we are looking for.
The chmod
SFTP command works as normal on the remote filesystem:
- chmod 777 publicFile
OutputChanging mode on /home/demouser/publicFile
There is no equivalent command for manipulating local file permissions, but you can set the local umask so that any files copied to the local system will have the corresponding permissions.
That can be done with the lumask
command:
- lumask 022
OutputLocal umask: 022
Now all regular files downloaded (as long as the -p
flag is not used) will have 644 permissions.
SFTP also allows you to create directories on both local and remote systems with lmkdir
and mkdir
respectively.
The rest of the file commands target only the remote filesystem:
- ln
- rm
- rmdir
These commands replicate the core behavior of their shell equivalents. If you need to perform these actions on the local file system, remember that you can drop into a shell by issuing this command:
- !
Or execute a single command on the local system by prefacing the command with !
like so:
- !chmod 644 somefile
When you are finished with your SFTP session, use exit
or bye
to close the connection.
- bye
Although SFTP syntax is much less comprehensive than modern shell tooling, it can be useful for providing compatibility with legacy FTP syntax or for carefully limiting the functionality available to remote users of some environments.
For example, you can use SFTP to enable particular users to transfer files without SSH access. For more information on this process, check out our tutorial on How To Enable SFTP Without Shell Access.
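As a rough sketch of what that setup involves on the server side (the group name sftponly is an assumed example, and the linked tutorial covers the full procedure), sshd_config can confine a group of users to SFTP with a Match block:

Match Group sftponly
    # %h expands to the user's home directory; the chroot target must be root-owned.
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no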
If you are used to using FTP or SCP to accomplish your transfers, SFTP is a good way to leverage the strengths of both. While it is not appropriate for every situation, it is a flexible tool to have in your repertoire.
]]>One essential tool to master as a system administrator is SSH.
SSH, or Secure Shell, is a protocol used to securely log onto remote systems. It is the most common way to access remote Linux servers.
In this guide, we will discuss how to use SSH to connect to a remote system.
To connect to a remote system using SSH, we’ll use the ssh
command.
If you are using Windows, you’ll need to install a version of OpenSSH in order to be able to ssh
from a terminal. If you prefer to work in PowerShell, you can follow Microsoft’s documentation to add OpenSSH to PowerShell. If you would rather have a full Linux environment available, you can set up WSL, the Windows Subsystem for Linux, which will include ssh
by default. Finally, as a lightweight third option, you can install Git for Windows, which provides a native Windows bash terminal environment that includes the ssh
command. Each of these are well-supported and whichever you decide to use will come down to preference.
If you are using a Mac or Linux, you will already have the ssh
command available in your terminal.
The most straightforward form of the command is:
- ssh remote_host
The remote_host
in this example is the IP address or domain name that you are trying to connect to.
This command assumes that your username on the remote system is the same as your username on your local system.
If your username is different on the remote system, you can specify it by using this syntax:
- ssh remote_username@remote_host
Once you have connected to the server, you may be asked to verify your identity by providing a password. Later, we will cover how to generate keys to use instead of passwords.
To exit the ssh session and return back into your local shell session, type:
- exit
SSH works by connecting a client program to an ssh server, called sshd
.
In the previous section, ssh
was the client program. The ssh server was already running on the remote_host
that we specified.
On nearly all Linux environments, the sshd
server should start automatically. If it is not running for any reason, you may need to temporarily access your server through a web-based console, or local serial console.
The process needed to start an ssh server depends on the distribution of Linux that you are using.
On Ubuntu, you can start the ssh server by typing:
- sudo systemctl start ssh
That should start the sshd server and you can then log in remotely.
When you change the configuration of SSH, you are changing the settings of the sshd server.
In Ubuntu, the main sshd configuration file is located at /etc/ssh/sshd_config
.
Back up the current version of this file before editing:
- sudo cp /etc/ssh/sshd_config{,.bak}
Open it using nano
or your favourite text editor:
- sudo nano /etc/ssh/sshd_config
You will want to leave most of the options in this file alone. However, there are a few you may want to take a look at:
Port 22
The port declaration specifies which port the sshd server will listen on for connections. By default, this is 22
. You should probably leave this setting alone, unless you have specific reasons to do otherwise. If you do change your port, we will show you how to connect to the new port later on.
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
The host keys declarations specify where to look for global host keys. We will discuss what a host key is later.
SyslogFacility AUTH
LogLevel INFO
These two items indicate the level of logging that should occur.
If you are having difficulties with SSH, increasing the amount of logging may be a good way to discover what the issue is.
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
These parameters specify some of the login information.
LoginGraceTime
specifies how many seconds to keep the connection alive without successfully logging in.
It may be a good idea to set this time just a little bit higher than the amount of time it takes you to log in normally.
PermitRootLogin
selects whether the root user is allowed to log in.
In most cases, this should be changed to no
when you have created a user account that has access to elevated privileges (through su
or sudo
) and can log in through ssh, in order to minimize the risk of anyone gaining root access to your server.
StrictModes
is a safety guard that will refuse a login attempt if the authentication files are readable by everyone.
This prevents login attempts when the configuration files are not secure.
X11Forwarding yes
X11DisplayOffset 10
These parameters configure an ability called X11 Forwarding. This allows you to view a remote system’s graphical user interface (GUI) on the local system.
This option must be enabled on the server, and the -X option must be passed to the SSH client when connecting.
After making your changes, save and close the file. If you are using nano
, press Ctrl+X
, then when prompted, Y
and then Enter.
If you changed any settings in /etc/ssh/sshd_config
, make sure you reload your sshd server to implement your modifications:
- sudo systemctl reload ssh
You should thoroughly test your changes to ensure that they operate in the way you expect.
It may be a good idea to have a few terminal sessions open while you are making changes. This will allow you to revert the configuration if necessary without locking yourself out.
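You can also ask sshd itself to check the configuration file for syntax errors before reloading; test mode reports problems without affecting the running service:
- sudo sshd -t
If the command prints nothing, the configuration parsed cleanly. On some systems the binary lives at /usr/sbin/sshd, so you may need to use the full path.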
While it is helpful to be able to log in to a remote system using passwords, it is faster and more secure to set up key-based authentication.
Key-based authentication works by creating a pair of keys: a private key and a public key.
The private key is located on the client machine and is secured and kept secret.
The public key can be given to anyone or placed on any server you wish to access.
When you attempt to connect using a key-pair, the server will use the public key to create a message for the client computer that can only be read with the private key.
The client computer then sends the appropriate response back to the server and the server will know that the client is legitimate.
This process is performed automatically after you configure your keys.
SSH keys should be generated on the computer you wish to log in from. This is usually your local machine.
Enter the following into the command line:
- ssh-keygen -t rsa
You may be prompted to set a passphrase on the key files themselves. You can press ENTER through the prompts to accept the defaults, though adding a passphrase gives your keys an extra layer of protection. Your keys will be created at ~/.ssh/id_rsa.pub and ~/.ssh/id_rsa.
Change into the .ssh
directory by typing:
- cd ~/.ssh
Look at the permissions of the files:
- ls -l
Output-rw-r--r-- 1 demo demo 807 Sep 9 22:15 authorized_keys
-rw------- 1 demo demo 1679 Sep 9 23:13 id_rsa
-rw-r--r-- 1 demo demo 396 Sep 9 23:13 id_rsa.pub
As you can see, the id_rsa
file is readable and writable only to the owner. This helps to keep it secret.
The id_rsa.pub
file, however, can be shared and has permissions appropriate for this activity.
If you currently have password-based access to a server, you can copy your public key to it by issuing this command:
- ssh-copy-id remote_host
This will start an SSH session. After you enter your password, it will copy your public key to the server’s authorized keys file, which will allow you to log in without the password next time.
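If ssh-copy-id is not available on your local machine, a common manual alternative is to append the public key over a plain SSH session yourself (this assumes the default RSA key path from earlier):
- cat ~/.ssh/id_rsa.pub | ssh sammy@remote_host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"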
There are a number of optional flags that you can provide when connecting through SSH.
Some of these may be necessary to match the settings in the remote host’s sshd
configuration.
For instance, if you changed the port number in your sshd
configuration, you will need to match that port on the client-side by typing:
- ssh -p port_number remote_host
Note: Changing your ssh port is a reasonable way of providing security through obscurity. If you are allowing ssh connections to a widely known server deployment on port 22 as normal, and you have password authentication enabled, you will likely be attacked by many automated login attempts. Exclusively using key-based authentication and running ssh on a nonstandard port is not the most complex security solution you can employ, but should reduce these to a minimum.
If you only want to execute a single command on a remote system, you can specify it after the host like so:
- ssh remote_host command_to_run
You will connect to the remote machine, authenticate, and the command will be executed.
As we said before, if X11 forwarding is enabled on both computers, you can access that functionality by typing:
- ssh -X remote_host
Provided you have the appropriate tools on your computer, GUI programs that you use on the remote system will now open their windows on your local system.
If you have created SSH keys, you can enhance your server’s security by disabling password-only authentication. Apart from the console, the only way to log into your server will be through the private key that pairs with the public key you have installed on the server.
Warning: Before you proceed with this step, be sure you have installed a public key to your server. Otherwise, you will be locked out!
As root or user with sudo privileges, open the sshd
configuration file:
- sudo nano /etc/ssh/sshd_config
Locate the line that reads PasswordAuthentication
, and uncomment it by removing the leading #
. You can then change its value to no
:
PasswordAuthentication no
Two more settings that should not need to be modified (provided you have not modified this file before) are PubkeyAuthentication
and ChallengeResponseAuthentication
. They are set by default, and should read as follows:
PubkeyAuthentication yes
ChallengeResponseAuthentication no
After making your changes, save and close the file.
You can now reload the SSH daemon:
- sudo systemctl reload ssh
Password authentication should now be disabled, and your server should be accessible only through SSH key authentication.
Learning your way around SSH will greatly benefit any of your future cloud computing endeavours. As you use the various options, you will discover more advanced functionality that can make your life easier. SSH has remained popular because it is secure, light-weight, and useful in diverse situations.
Next, you may want to learn about working with SFTP to perform command line file transfers.
]]>The Secure Shell Protocol (or SSH) is a cryptographic network protocol that allows users to securely access a remote computer over an unsecured network.
Though SSH supports password-based authentication, it is generally recommended that you use SSH keys instead. SSH keys are a more secure method of logging into an SSH server, because they are not vulnerable to common brute-force password hacking attacks.
Generating an SSH key pair creates two long strings of characters: a public and a private key. You can place the public key on any server, and then connect to the server using an SSH client that has access to the private key.
When the public and private keys match up, the SSH server grants access without the need for a password. You can increase the security of your key pair even more by protecting the private key with an optional (but highly encouraged) passphrase.
Note: If you are looking for information about setting up SSH keys in your DigitalOcean account, please refer to our DigitalOcean product documentation on SSH Keys
The first step is to create a key pair on the client machine. This will likely be your local computer. Type the following command into your local command line:
- ssh-keygen -t ed25519
OutputGenerating public/private ed25519 key pair.
You will see a confirmation that the key generation process has begun, and you will be prompted for some information, which we will discuss in the next step.
Note: if you are on an older system that does not support creating ed25519
key pairs, or the server you’re connecting to does not support them, you should create a strong rsa
keypair instead:
- ssh-keygen -t rsa -b 4096
This changes the -t
“type” flag to rsa
, and adds the -b 4096
“bits” flag to create a 4096 bit key.
The first prompt from the ssh-keygen
command will ask you where to save the keys:
OutputEnter file in which to save the key (/home/sammy/.ssh/id_ed25519):
You can press ENTER
here to save the files to the default location in the .ssh
directory of your home directory.
Alternately, you can choose another file name or location by typing it after the prompt and hitting ENTER
.
The second and final prompt from ssh-keygen
will ask you to enter a passphrase:
OutputEnter passphrase (empty for no passphrase):
It’s up to you whether you want to use a passphrase, but it is strongly encouraged: the security of a key pair, no matter the encryption scheme, still depends on the fact that it is not accessible to anyone else.
Should a private key with no passphrase fall into an unauthorized user’s possession, they will be able to log in to any server you’ve configured with the associated public key.
The main downside to having a passphrase — typing it in — can be mitigated by using an ssh-agent
service, which will temporarily store your unlocked key and make it accessible to the SSH client. Many of these agents are integrated with your operating system’s native keychain, making the unlocking process even more seamless.
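For example, on most Linux systems you can start an agent for your current shell and hand it your key with the following commands; you will be asked for the passphrase once, after which the agent supplies the unlocked key automatically:
- eval "$(ssh-agent -s)"
- ssh-add ~/.ssh/id_ed25519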
To recap, the entire key generation process looks like this:
- ssh-keygen -t ed25519
OutputGenerating public/private ed25519 key pair.
Enter file in which to save the key (/home/sammy/.ssh/id_ed25519):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sammy/.ssh/id_ed25519
Your public key has been saved in /home/sammy/.ssh/id_ed25519.pub
The key fingerprint is:
SHA256:EGx5HEXz7EqKigIxHHWKpCZItSj1Dy9Dqc5cYae+1zc sammy@hostname
The key's randomart image is:
+--[ED25519 256]--+
| o+o o.o.++ |
|=oo.+.+.o + |
|*+.oB.o. o |
|*. + B . . |
| o. = o S . . |
|.+ o o . o . |
|. + . ... . |
|. . o. . E |
| .. o. . . |
+----[SHA256]-----+
The public key is now located in /home/sammy/.ssh/id_ed25519.pub
. The private key is now located in /home/sammy/.ssh/id_ed25519
.
Once the key pair is generated, it’s time to place the public key on the server that you want to connect to.
You can copy the public key into the server’s authorized_keys
file with the ssh-copy-id
command. Make sure to replace the example username and address:
- ssh-copy-id sammy@your_server_address
Once the command completes, you will be able to log into the server via SSH without being prompted for a password. However, if you set a passphrase when creating your SSH key, you will be asked to enter the passphrase at that time. This is your local ssh
client asking you to decrypt the private key; it is not the remote server asking for a password.
Once you have copied your SSH keys onto the server, you may want to completely prohibit password logins by configuring the SSH server to disable password-based authentication.
Warning: before you disable password-based authentication, be certain you can successfully log onto the server with your SSH key, and that there are no other users on the server using passwords to log in.
In order to disable password-based SSH authentication, open up the SSH configuration file. It is typically found at the following location:
- sudo nano /etc/ssh/sshd_config
This command will open up the file within the nano
text editor. Find the line in the file that includes PasswordAuthentication
(or create the line if it doesn’t exist), make sure it is not commented out with a #
at the beginning of the line, and change it to no
:
PasswordAuthentication no
Save and close the file when you are finished. In nano
, use CTRL+O
to save, hit ENTER
to confirm the filename, then CTRL+X
to exit.
Reload the sshd
service to put these changes into effect:
- sudo systemctl reload sshd
Before exiting your current SSH session, make a test connection in another terminal to verify you can still connect.
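One way to verify the change took effect is to force a client to skip key authentication; the attempt should now fail outright instead of falling back to a password prompt:
- ssh -o PubkeyAuthentication=no sammy@your_server_address
You should see an error along the lines of "Permission denied (publickey)."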
In this tutorial we created an SSH key pair, copied our public key to a server, and (optionally) disabled password-based authentication completely.
For more information about SSH and the SSH service, including how to set up multifactor authentication, please read our related tutorials:
]]>Understanding networking is a fundamental part of configuring complex environments on the internet. This has implications when trying to communicate between servers efficiently, developing secure network policies, and keeping your nodes organized.
In a previous guide, we went over some basic networking terminology. You should look through that guide to make sure you are familiar with the concepts presented there.
In this article, we will discuss some more specific concepts that are involved with designing or interacting with networked computers. Specifically, we will be covering network classes, subnets, and CIDR notation for grouping IP addresses.
Every location or device on a network must be addressable. This means that it can be reached by referencing its designation under a predefined system of addresses. In the normal TCP/IP model of network layering, this is handled on a few different layers, but usually when we refer to an address on a network we are talking about an IP address.
IP addresses allow network resources to be reached through a network interface. If one computer wants to communicate with another computer, it can address the information to the remote computer’s IP address. Assuming that the two computers are on the same network, or that the different computers and devices in between can translate requests across networks, the computers should be able to reach each other and send information.
Each IP address must be unique on its own network. Networks can be isolated from one another, and they can be bridged and translated to provide access between distinct networks. A system called Network Address Translation, allows the addresses to be rewritten when packets traverse network borders to allow them to continue on to their correct destination. This allows the same IP address to be used on multiple, isolated networks while still allowing these to communicate with each other if configured correctly.
There are two revisions of the IP protocol that are widely implemented on systems today: IPv4 and IPv6. IPv6 is slowly replacing IPv4 due to improvements in the protocol and the limitations of IPv4 address space. Simply put, the world now has too many internet-connected devices for the amount of addresses available through IPv4.
IPv4 addresses are 32-bit addresses. Each byte, or 8-bit segment of the address, is divided by a period and typically expressed as a number 0–255. Even though these numbers are typically expressed in decimal to aid in human comprehension, each segment is usually referred to as an octet to express the fact that it is a representation of 8 bits.
A typical IPv4 address looks something like this:
192.168.0.5
The lowest value in each octet is a 0, and the highest value is 255.
We can also express this in binary to get a better idea of how the four octets will look. We will separate each 4 bits by a space for readability and replace the dots with dashes:
1100 0000 - 1010 1000 - 0000 0000 - 0000 0101
Recognizing that these two formats represent the same number will be important for understanding concepts later on.
Although there are some other differences in the protocol and background functionality of IPv4 and IPv6, the most noticeable difference is the address space. IPv6 expresses addresses as a 128-bit number. To put that into perspective, this means that IPv6 has space for more than 7.9×10^28 times as many addresses as IPv4.
To express this extended address range, IPv6 is generally written out as eight segments of four hexadecimal digits. Hexadecimal numbers represent the numbers 0–15 by using the digits 0–9, as well as the numbers a–f to express the higher values. A typical IPv6 address might look something like this:
1203:8fe0:fe80:b897:8990:8a7c:99bf:323d
You may also see these addresses written in a compact format. The rules of IPv6 allow you to remove any leading zeros from each group, and to replace a single range of zeroed groups with a double colon (::).
For instance, if you have one group in an IPv6 address that looks like this:
...:00bc:...
You could instead just type:
...:bc:...
To demonstrate the second case, if you have a range in an IPv6 address with multiple groups as zeroes, like this:
...:18bc:0000:0000:0000:00ff:...
You could compact this like so (also removing the leading zeros of the group like we did above):
...:18bc::ff:...
You can do this only once per address, or else the full address cannot be unambiguously reconstructed.
While IPv6 is becoming more common every day, in this guide, we will be exploring the remaining concepts using IPv4 addresses because it is easier to discuss with a smaller address space.
IP addresses are typically made of two separate components. The first part of the address is used to identify the network that the address is a part of. The part that comes afterwards is used to specify a specific host within that network.
Where the network specification ends and the host specification begins depends on how the network is configured. We will discuss this more thoroughly momentarily.
IPv4 addresses were traditionally divided into five different “classes”, named A through E, meant to differentiate segments of the available addressable IPv4 space. These are defined by the first four bits of each address. You can identify what class an IP address belongs to by looking at these bits.
Here is a translation table that defines the addresses based on their leading bits:
Class A
0---
: If the first bit of an IPv4 address is "0", the address is part of class A. This means that any address from 0.0.0.0 to 127.255.255.255 is in class A.

Class B
10--
: Class B includes any address from 128.0.0.0 to 191.255.255.255. This represents the addresses that have a "1" for their first bit, but don't have a "1" for their second bit.

Class C
110-
: Class C is defined as the addresses ranging from 192.0.0.0 to 223.255.255.255. This represents all of the addresses with a "1" for their first two bits, but without a "1" for their third bit.

Class D
1110
: This class includes addresses that have "111" as their first three bits, but a "0" for the next bit. This address range includes addresses from 224.0.0.0 to 239.255.255.255.

Class E
1111
: This class defines addresses between 240.0.0.0 and 255.255.255.255. Any address that begins with four "1" bits is included in this class.

Class D addresses are reserved for multi-casting protocols, which allow a packet to be sent to a group of hosts in one movement. Class E addresses are reserved for future and experimental use, and are largely not used.
Traditionally, each of the regular classes (A–C) divided the networking and host portions of the address differently to accommodate different sized networks. Class A addresses used the remainder of the first octet to represent the network and the rest of the address to define hosts. This was good for defining a few networks with a lot of hosts each.
The class B addresses used the first two octets (the remainder of the first, and the entire second) to define the network and the rest to define the hosts on each network. The class C addresses used the first three octets to define the network and the last octet to define hosts within that network.
The division of large portions of IP space into classes is now almost a legacy concept. Originally, this was implemented as a stop-gap for the problem of rapidly depleting IPv4 addresses (multiple computers can reuse the same host portion of an address if they are on separate networks). This was largely replaced by later schemes that we will discuss below.
There are also some portions of the IPv4 space that are reserved for specific uses.
One of the most useful reserved ranges is the loopback range specified by addresses from 127.0.0.0
to 127.255.255.255
. This range is used by each host to test networking to itself. Typically, this is expressed by the first address in this range: 127.0.0.1
.
Each of the normal classes also have a range within them that is used to designate private network addresses. For instance, for class A addresses, the addresses from 10.0.0.0
to 10.255.255.255
are reserved for private network assignment. For class B, this range is 172.16.0.0
to 172.31.255.255
. For class C, the range of 192.168.0.0
to 192.168.255.255
is reserved for private usage.
Any computer that is not hooked up to the internet directly (any computer that goes through a router or other NAT system) can use these addresses at will.
There are additional address ranges reserved for specific use-cases. You can find a summary of reserved addresses here.
The process of dividing a network into smaller network sections is called subnetting. This can be useful for many different purposes and helps isolate groups of hosts from each other to deal with them more easily.
As we discussed above, each address space is divided into a network portion and a host portion. The amount of the address that each of these take up is dependent on the class that the address belongs to. For instance, for class C addresses, the first 3 octets are used to describe the network. For the address 192.168.0.15
, the 192.168.0
portion describes the network and the 15
describes the host.
By default, each network has only one subnet, which contains all of the host addresses defined within. A netmask is basically a specification of the number of address bits that are used for the network portion. A subnet mask is another netmask used within the network to divide it further.
Each bit of the address that is considered significant for describing the network should be represented as a “1” in the netmask.
For instance, the address we discussed above, 192.168.0.15
can be expressed like this, in binary:
1100 0000 - 1010 1000 - 0000 0000 - 0000 1111
As we described above, the network portion for class C addresses is the first 3 octets, or the first 24 bits. Since these are the significant bits that we want to preserve, the netmask would be:
1111 1111 - 1111 1111 - 1111 1111 - 0000 0000
This can be written in the normal IPv4 format as 255.255.255.0
. Any bit that is a “0” in the binary representation of the netmask is considered part of the host portion of the address and can be variable. The bits that are “1” are static, however, for the network or subnetwork that is being discussed.
We determine the network portion of the address by applying a bitwise AND operation between the address and the netmask. A bitwise AND operation preserves the networking portion of the address and discards the host portion. Applying this to our example above, the network portion is:
1100 0000 - 1010 1000 - 0000 0000 - 0000 0000
This can be expressed as 192.168.0.0
. The host specification is then the difference between these original value and the host portion. In our case, the host is 0000 1111
or 15
.
The idea of subnetting is to take a portion of the host space of an address, and use it as an additional networking specification to divide the address space again.
For instance, a netmask of 255.255.255.0
as we saw above leaves us with 254 hosts in the network (you cannot end in 0 or 255 because these are reserved). If we wanted to divide this into two subnetworks, we could use one bit of the conventional host portion of the address as the subnet mask.
So, continuing with our example, the networking portion is:
1100 0000 - 1010 1000 - 0000 0000
The host portion is:
0000 1111
We can use the first bit of our host to designate a subnetwork. We can do this by adjusting the subnet mask from this:
1111 1111 - 1111 1111 - 1111 1111 - 0000 0000
To this:
1111 1111 - 1111 1111 - 1111 1111 - 1000 0000
In traditional IPv4 notation, this netmask would be expressed as 255.255.255.128. What we have done here is to designate the first bit of the last octet as significant in addressing the network. This effectively produces two subnetworks. The first subnetwork spans 192.168.0.0 to 192.168.0.127, and the second spans 192.168.0.128 to 192.168.0.255. In each subnetwork, the first address identifies the subnet itself and the last is traditionally reserved for broadcast, so the usable hosts are 192.168.0.1 to 192.168.0.126 and 192.168.0.129 to 192.168.0.254, respectively.
If we use more bits out of the host space for networking, we can get more and more subnetworks.
A system called Classless Inter-Domain Routing, or CIDR, was developed as an alternative to traditional subnetting. The idea is that you can add a specification in the IP address itself as to the number of significant bits that make up the routing or networking portion.
For example, we could express the idea that the IP address 192.168.0.15
is associated with the netmask 255.255.255.0
by using the CIDR notation of 192.168.0.15/24
. This means that the first 24 bits of the IP address given are considered significant for the network routing.
This allows us some interesting possibilities. We can use these to reference “supernets”. In this case, we mean a more inclusive address range that is not possible with a traditional subnet mask. For instance, in a class C network, like above, we could not combine the addresses from the networks 192.168.0.0
and 192.168.1.0
because the netmask for class C addresses is 255.255.255.0
.
However, using CIDR notation, we can combine these blocks by referencing this chunk as 192.168.0.0/23
. This specifies that there are 23 bits used for the network portion that we are referring to.
So the first network (192.168.0.0
) could be represented like this in binary:
1100 0000 - 1010 1000 - 0000 0000 - 0000 0000
While the second network (192.168.1.0
) would be like this:
1100 0000 - 1010 1000 - 0000 0001 - 0000 0000
The CIDR address we specified indicates that the first 23 bits are used for the network block we are referencing. This is equivalent to a netmask of 255.255.254.0
, or:
1111 1111 - 1111 1111 - 1111 1110 - 0000 0000
As you can see, with this block the 24th bit can be either 0 or 1 and it will still match, because the network block only cares about the first 23 bits.
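You can sketch this check with shell arithmetic as well (a quick illustration, not part of the original networking toolset): ANDing each third octet with 254, the third octet of the /23 netmask, shows that 0 and 1 fall into the same block while 2 does not:
- for octet in 0 1 2; do echo "$octet -> $(( octet & 254 ))"; done
Output0 -> 0
1 -> 0
2 -> 2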
CIDR allows us more control over addressing continuous blocks of IP addresses. This is much more useful than the subnetting we talked about originally.
Hopefully by now, you should have a working understanding of some of the networking implications of the IP protocol. While dealing with this type of networking is not always intuitive, and may be difficult to work with at times, it is important to understand what is going on in order to configure your software and components correctly.
There are various calculators and tools online that will help you understand some of these concepts and get the correct addresses and ranges that you need by typing in certain information. CIDR.xyz provides a translation from decimal-based IP addresses to octets, and lets you visualize different CIDR netmasks.
]]>One problem users run into when first learning how to work with Linux is how to find the files they are looking for.
This guide will cover how to use the aptly named find
command. This will help you search for files on your system using a variety of filters and parameters. It will also briefly cover the locate
command, which can be used to search for files in a different way.
To follow along with this guide, you will need access to a computer running a Linux-based operating system. This can either be a virtual private server which you’ve connected to with SSH or your local machine. Note that this tutorial was validated using a Linux server running Ubuntu 20.04, but the examples given should work on a computer running any version of any Linux distribution.
If you plan to use a remote server to follow this guide, we encourage you to first complete our Initial Server Setup guide. Doing so will set you up with a secure server environment — including a non-root user with sudo
privileges and a firewall configured with UFW — which you can use to build your Linux skills.
Note: To illustrate how the find
and locate
commands work, the example commands in this guide search for files stored under /
, or the root directory. Because of this, if you’re logged into the terminal as a non-root user, some of the example commands may include Permission denied
in their output.
This is to be expected, since you’re searching for files within directories that regular users typically don’t have access to. However, these example commands should still work and be useful for understanding how these programs work.
The most obvious way of searching for files is by their name.
To find a file by name with the find
command, you would use the following syntax:
- find -name "query"
This will be case sensitive, meaning a search for query
is different from a search for Query
.
To find a file by name but ignore the case of the query, use the -iname
option:
- find -iname "query"
If you want to find all files that don’t adhere to a specific pattern, you can invert the search with -not
:
- find -not -name "query_to_avoid"
Alternatively, you can invert the search using an exclamation point (!
), like this:
- find \! -name "query_to_avoid"
Note that if you use !
, you must escape the character with a backslash (\
) so that the shell does not try to interpret it before find
can act.
You can specify the type of files you want to find with the -type
parameter. It works like this:
- find -type type_descriptor query
Here are some of the descriptors you can use to specify the type of file:
f
: regular filed
: directoryl
: symbolic linkc
: character devicesb
: block devicesFor instance, if you wanted to find all of the character devices on your system, you could issue this command:
- find /dev -type c
This command specifically only searches for devices within the /dev
directory, the directory where device files are typically mounted in Linux systems:
Output/dev/vcsa5
/dev/vcsu5
/dev/vcs5
/dev/vcsa4
/dev/vcsu4
/dev/vcs4
/dev/vcsa3
/dev/vcsu3
/dev/vcs3
/dev/vcsa2
/dev/vcsu2
/dev/vcs2
. . .
You can search for all files that end in .conf
with a command like the following. This example searches for matching files within the /usr
directory:
- find /usr -type f -name "*.conf"
Output/usr/src/linux-headers-5.4.0-88-generic/include/config/auto.conf
/usr/src/linux-headers-5.4.0-88-generic/include/config/tristate.conf
/usr/src/linux-headers-5.4.0-90-generic/include/config/auto.conf
/usr/src/linux-headers-5.4.0-90-generic/include/config/tristate.conf
/usr/share/adduser/adduser.conf
/usr/share/ufw/ufw.conf
/usr/share/popularity-contest/default.conf
/usr/share/byobu/keybindings/tmux-screen-keys.conf
/usr/share/libc-bin/nsswitch.conf
/usr/share/rsyslog/50-default.conf
. . .
Note: The previous example combines two find
query expressions; namely, -type f
and -name "*.conf"
. For any file to be returned, it must satisfy both of these expressions.
You can combine expressions like this by separating them with the -and
option, but as this example shows the -and
is implied any time you include two expressions. You can also return results that satisfy either expression by separating them with the -or
option:
- find -name query_1 -or -name query_2
This example will find any files whose names match either query_1
or query_2
.
find
gives you a variety of ways to filter results by size and time.
You can filter files by their size using the -size
parameter. To do this, you must add a special suffix to the end of a numerical size value to indicate whether you’re counting the size in terms of bytes, megabytes, gigabytes, or another size. Here are some commonly used size suffixes:
c
: bytesk
: kilobytesM
: megabytesG
: gigabytesb
: 512-byte blocksTo illustrate, the following command will find every file in the /usr
directory that is exactly 50 bytes:
- find /usr -size 50c
To find files that are less than 50 bytes, you can use this syntax instead:
- find /usr -size -50c
To find files in the /usr
directory that are more than 700 Megabytes, you could use this command:
- find /usr -size +700M
For every file on the system, Linux stores time data about access times, modification times, and change times.
Access Time: The last time a file's contents were read.
Modification Time: The last time the contents of the file were modified.
Change Time: The last time the file’s inode metadata was changed.
You can base your find
searches on these parameters using the -atime
, -mtime
, and -ctime
options, respectively. For any of these options, you must pass a value indicating how many days in the past you'd like to search. Similar to the size options outlined previously, you can prefix these values with a plus or minus symbol to specify "greater than" or "less than".
For example, to find files in the /usr
directory that were modified within the last day, run the following command:
- find /usr -mtime 1
If you want files that were accessed less than a day ago, you could run this command:
- find /usr -atime -1
To find files that last had their meta information changed more than 3 days ago, you might execute the following:
- find /usr -ctime +3
These options also have companion parameters you can use to specify minutes instead of days:
- find /usr -mmin -1
This will give the files that have been modified in the last minute.
find
can also do comparisons against a reference file and return those that are newer:
- find / -newer reference_file
This syntax will return every file on the system that was created or changed more recently than the reference file.
You can also search for files by the user or group that owns the file using the -user
and -group
parameters, respectively. To find every file in the /var
directory that is owned by the syslog user run this command:
- find /var -user syslog
Similarly, you can specify files in the /etc
directory owned by the shadow group by typing:
- find /etc -group shadow
You can also search for files with specific permissions.
If you want to match an exact set of permissions, you can use this syntax, specifying the permissions using octal notation:
- find / -perm 644
This will match files with exactly the permissions specified.
If you want to specify anything with at least those permissions, you can precede the permissions notation with a minus sign:
- find / -perm -644
This will match any file that has at least the specified permissions, possibly along with additional ones. A file with permissions of 744
would be matched in this instance.
In this section, you will create an example directory structure that you’ll then use to explore filtering files by their depth within the structure.
If you’re following along with the examples in this tutorial, it would be prudent to create these files and directories within the /tmp/
directory. /tmp/
is a temporary directory, meaning that any files and directories within it will be deleted the next time the server boots up. This will be useful for the purposes of this guide, since you can create as many directories, files, and links as you’d like without having to worry about them clogging up your system later on.
After running the commands in this section, your /tmp/
directory will contain three levels of directories, with ten directories at the first level. Each directory (including the temporary directory) will contain ten files and ten subdirectories.
Create the example directory structure within the /tmp/
directory with the following command:
- mkdir -p /tmp/test/level1dir{1..10}/level2dir{1..10}/level3dir{1..10}
Following that, populate these directories with some sample files using the touch
command:
- touch /tmp/test/{file{1..10},level1dir{1..10}/{file{1..10},level2dir{1..10}/{file{1..10},level3dir{1..10}/file{1..10}}}}
With these files and directories in place, go ahead and navigate into the test/
directory you just created:
- cd /tmp/test
To get a baseline understanding of how find
will retrieve files from this structure, begin with a regular name search that matches any files named file1
:
- find -name file1
Output./level1dir7/level2dir8/level3dir9/file1
./level1dir7/level2dir8/level3dir3/file1
./level1dir7/level2dir8/level3dir4/file1
./level1dir7/level2dir8/level3dir1/file1
./level1dir7/level2dir8/level3dir8/file1
./level1dir7/level2dir8/level3dir7/file1
./level1dir7/level2dir8/level3dir2/file1
./level1dir7/level2dir8/level3dir6/file1
./level1dir7/level2dir8/level3dir5/file1
./level1dir7/level2dir8/file1
. . .
This will return a lot of results. If you pipe the output into a counter, you’ll find that there are 1111
total results:
- find -name file1 | wc -l
Output1111
This is probably too many results to be useful to you in most circumstances. To narrow it down, you can specify the maximum depth of the search under the top-level search directory:
- find -maxdepth num -name query
To find file1
only in the level1
directories and above, you can specify a max depth of 2 (1 for the top-level directory, and 1 for the level1
directories):
- find -maxdepth 2 -name file1
Output./level1dir7/file1
./level1dir1/file1
./level1dir3/file1
./level1dir8/file1
./level1dir6/file1
./file1
./level1dir2/file1
./level1dir9/file1
./level1dir4/file1
./level1dir5/file1
./level1dir10/file1
That is a much more manageable list.
You can also specify a minimum directory if you know that all of the files exist past a certain point under the current directory:
- find -mindepth num -name query
You can use this to find only the files at the end of the directory branches:
- find -mindepth 4 -name file1
Output./level1dir7/level2dir8/level3dir9/file1
./level1dir7/level2dir8/level3dir3/file1
./level1dir7/level2dir8/level3dir4/file1
./level1dir7/level2dir8/level3dir1/file1
./level1dir7/level2dir8/level3dir8/file1
./level1dir7/level2dir8/level3dir7/file1
./level1dir7/level2dir8/level3dir2/file1
. . .
Again, because of the branching directory structure, this will return a large number of results (1000).
You can combine the min and max depth parameters to focus in on a narrow range:
- find -mindepth 2 -maxdepth 3 -name file1
Output./level1dir7/level2dir8/file1
./level1dir7/level2dir5/file1
./level1dir7/level2dir7/file1
./level1dir7/level2dir2/file1
./level1dir7/level2dir10/file1
./level1dir7/level2dir6/file1
./level1dir7/level2dir3/file1
./level1dir7/level2dir4/file1
./level1dir7/file1
. . .
Combining these options like this narrows down the results significantly, with only 110 lines returned instead of the previous 1000.
find
ResultsYou can execute an arbitrary helper command on everything that find
matches by using the -exec
parameter with the following syntax:
- find find_parameters -exec command_and_options {} \;
The {}
is used as a placeholder for the files that find
matches. The \;
lets find
know where the command ends.
For instance, assuming you’re still in the test/
directory you created within the /tmp/
directory in the previous step, you could find the files in the previous section that had 644
permissions and modify them to have 664
permissions:
- find . -type f -perm 644 -exec chmod 664 {} \;
You could also change the directory permissions in a similar way:
- find . -type d -perm 755 -exec chmod 700 {} \;
This example finds every directory with permissions set to 755
and then modifies the permissions to 700
.
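If you want to verify that the change took effect, one quick check (assuming you are still in /tmp/test) is to count the directories that now carry the new permissions:
- find . -type d -perm 700 | wc -l
Every directory modified by the previous command should be included in this count.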
locate
An alternative to using find
is the locate
command. This command is often quicker and can search the entire file system with ease.
You can install the command on Debian or Ubuntu with apt
by updating your package lists and then installing the mlocate
package:
- sudo apt update
- sudo apt install mlocate
On Rocky Linux, CentOS, and other RedHat derived distributions, you can instead use the dnf
command to install mlocate
:
- sudo dnf install mlocate
The reason locate
is faster than find
is because it relies on a database that lists all the files on the filesystem. This database is usually updated once a day with a cron script, but you can update it manually with the updatedb
command. Run this command now with sudo
privileges:
- sudo updatedb
Remember, the locate
database must always be up-to-date if you want to find new files. If you add new files before the cron script is executed or before you run the updatedb
command, they will not appear in your query results.
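To see this behavior for yourself, you could create a file with a hypothetical name in your home directory and search for it before and after refreshing the database:
- touch ~/locate-demo.txt
- locate -b locate-demo.txt
- sudo updatedb
- locate -b locate-demo.txt
The first locate call should return nothing, while the second should print the file’s full path.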
locate
allows you to filter results in a number of ways. The most fundamental way you can use it to find files is to use this syntax:
- locate query
This will match any files and directories that contain the string query
anywhere in their file path. To return only files whose names contain the query itself, instead of every file that has the query in the directories leading up to it, you can include the -b
flag to search only for files whose “basename” matches the query:
- locate -b query
To have locate
only return results that still exist in the filesystem (meaning files that were not removed between the last updatedb
call and the current locate
call), use the -e
flag:
- locate -e query
You can retrieve statistics about the information that locate
has cataloged using the -S
option:
- locate -S
OutputDatabase /var/lib/mlocate/mlocate.db:
21015 directories
136787 files
7727763 bytes in file names
3264413 bytes used to store database
This can be useful for getting a high-level understanding of how many files and directories exist on your system.
Both the find
and locate
commands are useful tools for finding files on your system. Both are powerful commands that can be strengthened by combining them with other utilities through pipelines, but it’s up to you to decide which tool is appropriate for your given situation.
From here, we encourage you to continue experimenting with find
and locate
. You can read their respective man
pages to learn about other options not covered in this guide, and you can analyze and manipulate search results by piping them into other commands like wc
, sort
, and grep
.
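As one illustrative combination, assuming the /tmp/test structure from earlier is still in place, you could sort the matches for file1 and display only the first few:
- find /tmp/test -name file1 | sort | head -5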
While working in a server environment, you’ll spend a lot of your time on the command line. Most likely, you’ll be using the bash shell, which is the default on most distributions.
During a terminal session, you’ll likely be repeating some commands often, and typing variations on those commands even more frequently. While typing each command repeatedly can be good practice in the beginning, at some point, it crosses the line into being disruptive and an annoyance.
Luckily, the bash shell has some fairly well-developed history functions. Learning how to effectively use and manipulate your bash history will allow you to spend less time typing and more time getting actual work done. Many developers are familiar with the DRY philosophy of Don’t Repeat Yourself. Effective use of bash’s history allows you to operate closer to this principle and will speed up your workflow.
To follow along with this guide, you will need access to a computer running a Linux-based operating system. This can either be a virtual private server which you’ve connected to with SSH or your local machine. Note that this tutorial was validated using a Linux server running Ubuntu 20.04, but the examples given should work on a computer running any version of any Linux distribution.
If you plan to use a remote server to follow this guide, we encourage you to first complete our Initial Server Setup guide. Doing so will set you up with a secure server environment — including a non-root user with sudo
privileges and a firewall configured with UFW — which you can use to build your Linux skills.
Before you begin actually using your command history, it can be helpful for you to adjust some bash settings to make it more useful. These steps are not necessary, but they can make it easier to find and execute commands that you’ve run previously.
Bash allows you to adjust the number of commands that it stores in history. It actually has two separate options for this: the HISTFILESIZE
parameter configures how many commands are kept in the history file, while the HISTSIZE
controls the number stored in memory for the current session.
This means you can set a reasonable cap for the size of history in memory for the current session, and have an even larger history saved to disk that you can examine at a later time. By default, bash sets very conservative values for these options, but you can expand these to take advantage of a larger history. Some distributions already increase the default bash history settings with slightly more generous values.
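Before editing anything, you can check the values currently in effect for your session:
- echo $HISTSIZE $HISTFILESIZE
On Ubuntu, for example, these often default to 1000 and 2000, though your values may differ.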
Open your ~/.bashrc
file with your preferred text editor to change these settings. Here, we’ll use nano
:
- nano ~/.bashrc
Search for both the HISTSIZE
and HISTFILESIZE
parameters. If they are set, feel free to modify the values. If these parameters aren’t in your file, add them now. For the purposes of this guide, saving 10000 lines to disk and loading the last 5000 lines into memory will work fine. This is a conservative estimate for most systems, but you can adjust these numbers down if you notice a performance impact:
. . .
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=5000
HISTFILESIZE=10000
. . .
By default, bash writes its history at the end of each session, overwriting the existing file with an updated version. This means that if you are logged in with multiple bash sessions, only the last one to exit will have its history saved.
You can work around this by setting the histappend
setting, which will append instead of overwriting the history. This may be set already, but if it is not, you can enable this by adding this line:
. . .
shopt -s histappend
. . .
If you want to have bash immediately add commands to your history instead of waiting for the end of each session (to enable commands in one terminal to be instantly available in another), you can also set or append the history -a
command to the PROMPT_COMMAND
parameter, which contains commands that are executed before each new command prompt.
To configure this correctly, you’ll need to customize bash’s PROMPT_COMMAND
to change the way commands are recorded in the history file and in the current shell session’s memory:
Append the current session’s history to the history file with the history -a command, clear the session’s in-memory history with history -c, and then read the updated history file back into the session with history -r. Putting all these commands together in order in the PROMPT_COMMAND shell variable will result in the following, which you can paste into your .bashrc file:
. . .
export PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"
. . .
When you are finished, save the file and exit. If you edited your .bashrc
file with nano
, do so by pressing CTRL + X
, Y
, and then ENTER
.
To implement your changes, either log out and back in again, or source
the file by running:
- source ~/.bashrc
With that, you’ve adjusted how your shell handles your command history. You can now get some practice finding your previous commands with the history
command.
The way to review your bash history is to use the history
command. This will print out your recent commands, one per line. This should output, at most, the number of lines you selected for the HISTSIZE
variable. It will probably be fewer at this point:
- history
Output . . .
43 man bash
44 man fc
45 man bash
46 fc -l -10
47 history
48 ls -a
49 vim .bash_history
50 history
51 man history
52 history 10
53 history
Each command history
returns is associated with a number for easy reference. This guide will go over how this can be useful later on.
You can truncate the output by specifying a number after the command. For instance, if you want to only return the last 5 commands that were executed, you can type:
- history 5
Output 50 history
51 man history
52 history 10
53 history
54 history 5
To find all of the history
commands that contain a specific string, you can pipe the results into a grep
command that searches each line for a given string. For example, you can search for the lines that have cd
by typing:
- history | grep cd
Output 33 cd Pictures/
37 cd ..
39 cd Desktop/
61 cd /usr/bin/
68 cd
83 cd /etc/
86 cd resolvconf/
90 cd resolv.conf.d/
There are many situations where being able to retrieve a list of commands you’ve previously run can be helpful. If you want to run one of those commands again, your instinct may be to copy one of the commands from your output and paste it into your prompt. This works, but bash comes with a number of shortcuts that allow you to retrieve and then automatically execute commands from your history.
Printing your command history can be useful, but it doesn’t really help you access those commands other than as reference.
You can recall and immediately execute any of the commands returned by a history
operation by referencing its number, preceded by an exclamation point (!
). Assuming your history
results aligned with those from the previous section, you could check out the man page for the history
command quickly by typing:
- !51
This will immediately recall and execute the command associated with the history number 51.
You can also execute commands relative to your current position by using the !-n
syntax, where “n” is replaced by the number of previous commands you want to recall.
As an example, say you ran the following two commands:
- ls /usr/share/common-licenses
- echo hello
If you wanted to recall and execute the command you ran before your most recent one (the ls
command), you could type !-2
:
- !-2
To re-execute the last command you ran, you could run !-1
. However, bash provides a shortcut consisting of two exclamation points which will substitute the most recent command and execute it:
- !!
Many people use this when they type a command but forget that they need sudo
privileges for it to execute. Typing sudo !!
will re-execute the command with sudo
in front of it:
- touch /etc/hello
Outputtouch: cannot touch `/etc/hello': Permission denied
- sudo !!
Outputsudo touch /etc/hello
[sudo] password for sammy:
This demonstrates another property of this syntax: these shortcuts are pure substitutions, and can be incorporated within other commands at will.
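For example, because !! is replaced inline before the command runs, you could prefix it with time to measure how long your previous command takes when re-run:
- time !!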
There are a few ways that you can scroll through your bash history, putting each successive command on the command line to edit.
The most common way of doing this is to press the up arrow key at the command prompt. Each additional press of the up arrow key will take you further back in your command line history.
If you need to go the other direction, the down arrow key traverses the history in the opposite direction, finally bringing you back to your current empty prompt.
If moving your hand all the way over to the arrow keys seems like a big hassle, you can move backwards in your command history using the CTRL + P
combination and use the CTRL + N
combination to move forward through your history again.
If you want to jump back to the current command prompt, you can do so by pressing META + >
. In most cases, the “meta” key is the ALT
key, so META + >
will mean pressing ALT + SHIFT + .
. This is useful if you find yourself far back in your history and want to get back to your empty prompt.
You can also go to the first line of your command history by doing the opposite maneuver and typing META + <
. This typically means pressing ALT + SHIFT + ,
.
To summarize, these are some keys and key combinations you can use to scroll through the history and jump to either end:
UP arrow key: Scroll backwards in history
CTRL + P: Scroll backwards in history
DOWN arrow key: Scroll forwards in history
CTRL + N: Scroll forwards in history
ALT + SHIFT + .: Jump to the end of the history (most recent)
ALT + SHIFT + ,: Jump to the beginning of the history (most distant)
Although piping the history command through grep can be a useful way to narrow down the results, it isn’t ideal in many situations.
Bash includes search functionality for its history. The typical way of using this is through searching backwards in history (most recent results returned first) using the CTRL + R
key combination.
For instance, you can type CTRL + R
, and begin typing part of the previous command. You only have to type out part of the command. If it matches an unwanted command instead, you can press CTRL + R
again for the next result.
If you accidentally pass the command you wanted, you can move in the opposite direction by typing CTRL + S
. This also can be useful if you’ve moved to a different point in your history using the keys in the last section and wish to search forward.
Be aware that, in many terminals, the CTRL + S
combination is mapped to suspend the terminal session. This will intercept any attempts to pass CTRL + S
to bash, and will “freeze” your terminal. To unfreeze, type CTRL + Q
to unsuspend the session.
This suspend and resume feature is not needed in most modern terminals, and you can turn it off without any problem by running the following command:
- stty -ixon
stty
is a utility that allows you to change your terminal’s settings from the command line. You could add this stty -ixon
command to the end of your ~/.bashrc
file to make this change permanent as well.
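One way to do that without opening an editor is to append the line directly:
- echo 'stty -ixon' >> ~/.bashrc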
If you try searching with CTRL + S
again now, it should work as expected to allow you to search forwards.
A common scenario to find yourself in is to type in part of a command, only to then realize that you have executed it previously and can search the history for it.
The correct way of searching using what is already on your command line is to move your cursor to the beginning of the line with CTRL + A
, call the reverse history with CTRL + R
, paste the current line into the search with CTRL + Y
, and then use the CTRL + R
combination again to search in reverse.
For instance, suppose you want to update your package cache on an Ubuntu system. You’ve already typed this out recently, but you didn’t think about that until after you’ve typed the sudo
in the prompt again:
- sudo
At this point, you realize that this is an operation you’ve definitely done in the past day or so. You can press CTRL + A
to move your cursor to the beginning of the line. Then, press CTRL + R
to call your reverse incremental history search. This has a side effect of copying all of the content on the command line that was after your cursor position and putting it into your clipboard.
Next, press CTRL + Y
to paste the command segments that you just copied from the command line into the search. Lastly, press CTRL + R
to move backwards in your history, searching for commands containing the content you’ve just pasted.
Using shortcuts like this may seem tedious at first, but it can be quite useful when you get used to it. It is extremely helpful when you find yourself in a position where you’ve typed out half of a complex command and know you’re going to need the history to finish the rest.
Rather than thinking of these as separate key combinations, it may help you to think of them as a single compound operation. You can hold the CTRL
key down and press A
, R
, Y
, and R
in succession.
This guide has already touched on some of the most fundamental history expansion techniques that bash provides. Some of the ones we’ve covered so far are:
!!: Expand to the last command
!n: Expand to the command with history number “n”
!-n: Expand to the command that was “n” commands before the current command in history
The above three examples are instances of event designators. These generally are ways of recalling previous history commands using certain criteria. They are the selection portion of your available operations.
For example, you can execute the last ssh
command that you ran by typing something like:
- !ssh
This searches for lines in your command history that begin with ssh
. If you want to search for a string that isn’t at the beginning of the command, you can surround it with ?
characters. For instance, to repeat a previous apt-cache search
command, you could likely run the following command to find and execute it:
- !?search?
Another event designator you can try involves substituting a string within your last command for another. To do this, enter a caret symbol (^
) followed by the string you want to replace, then immediately follow that with another caret, the replacement string, and a final caret at the end. Don’t include any spaces unless they’re part of the string you want to replace or part of the string you want to use as the replacement:
- ^original^replacement^
This will recall the previous command (just like !!
), search for an instance of original
within the command string, and replace it with replacement
. It will then execute the command using the replacement string.
This is useful for dealing with things like misspellings. For instance, say you mistakenly run this command when trying to read the contents of the /etc/hosts
file:
- cat /etc/hosst
Outputcat: /etc/hosst: No such file or directory
Rather then rewriting the entire command, you could run the following instead:
- ^hosst^hosts^
This will fix the error in the previous command and execute it successfully.
After event designators, you can add a colon (:
) followed by a word designator to select a portion of the matched command.
It does this by dividing the command into “words”, which are defined as any chunk separated by whitespace. This allows you some interesting opportunities to interact with your command parameters.
Word numbering starts with the initial command as “0” and the first argument as “1”, then continues on from there.
For instance, you could list the contents of a directory and then decide you want to navigate into that same directory. You could do so by running the following operations back to back:
- ls /usr/share/common-licenses
- cd !!:1
In cases like this where you are operating on the last command, you can shorten this by dropping the second !
while keeping the colon:
- cd !:1
This will operate in the same way.
You can refer to the first argument with a caret (^
) and the final argument with a dollar sign ($
) if that makes sense for your purposes. These are more helpful when using ranges instead of specific numbers. For instance, you have three ways you can get all of the arguments from a previous command into a new command:
- !!:1*
- !!:1-$
- !!:*
The lone *
expands to all portions of the command being recalled other than the initial command. Similarly, you can use a number followed by *
to mean that everything after the specified word should be included.
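As a quick illustration of these designators, you could create a few files with arbitrary names and then recall only some of the arguments:
- touch alpha beta gamma
- ls -l !!:2*
Here, !!:2* expands to beta gamma, so ls lists only the last two files.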
Another thing you can do to augment the behavior of the history line you’re recalling is to modify the behavior of the recall to manipulate the text itself. To do this, you can add modifiers after a colon (:
) character at the end of an expansion.
For instance, you can chop off the path leading up to a file by using the h
modifier (it stands for “head”), which removes the path up until the final slash (/
) character. Be aware that this won’t work the way you want it to if you are using this to truncate a directory path and the path ends with a trailing slash.
A common use case for this is if you are modifying a file and realize you’d like to change to the file’s directory to do operations on related files.
For instance, say you run this command to print the contents of an open-source software license to your output:
- cat /usr/share/common-licenses/Apache-2.0
After being satisfied that the license suits your needs, you may want to change into the directory where it’s held. You can do this by calling the cd
command on the argument chain and chopping off the filename at the end:
- cd !!:$:h
If you run pwd
, which prints your current working directory, you’ll find that you’ve navigated to the directory included in the previous command:
- pwd
Output/usr/share/common-licenses
Once you’re there, you may want to open that license file again to double check, this time in a pager like less
.
To do this, you could perform the reverse of the previous manipulation by chopping off the path and using only the filename with the t
modifier, which stands for “tail”. You can search for your last cat
operation and use the t
flag to pass only the file name:
- less !cat:$:t
You could just as easily keep the full absolute path name and this command would work correctly in this instance. However, there may be other times when this isn’t true. For instance, you could be looking at a file nested within a few subdirectories below your current working directory using a relative path and then change to the subdirectory using the “h” modifier. In this case, you wouldn’t be able to rely on the relative path name to reach the file any more.
Another extremely helpful modifier is the r
modifier which strips the trailing extension. This could be useful if you are using tar
to extract a file and want to change into the resulting directory afterwards. Assuming the directory produced is the same name as the file, you could do something like:
- tar xzvf long-project-name.tgz
- cd !!:$:r
If your tarball uses the tar.gz
extension instead of tgz
, you can just pass the modifier twice:
- tar xzvf long-project-name.tgz
- cd !!:$:r:r
A similar modifier, e
, removes everything besides the trailing extension.
If you do not want to execute the command that you are recalling and only want to find it, you can use the p
modifier to have bash echo the command instead of executing it.
This is useful if you are unsure whether you’re selecting the correct piece. This not only prints it, but also puts it into your history for further editing if you’d like to modify it.
For instance, imagine you ran a find
command on your home directory and then realized that you wanted to run it from the root (/
) directory. You could check that you’re making the correct substitutions like this (assuming the original command is associated with the number 119 in your history):
- find ~ -name "file1"
- !119:0:p / !119:2*:p
Outputfind / -name "file1"
If the returned command is correct, you can recall it with the CTRL + P
key combination (the printed command is added to your history) and then press ENTER to execute it.
You can also make substitutions in your command by using the s/original/new/
syntax.
For instance, you could have accomplished that by typing:
- !119:s/~/\//
This will substitute the first instance of the search pattern (~
).
You can substitute every match by also passing the g
flag with the s
. For instance, if you want to create files named file1
, file2
, and file3
, and then want to create directories called dir1
, dir2
, dir3
, you could do this:
- touch file1 file2 file3
- mkdir !!:*:gs/file/dir/
Of course, it may be more intuitive to just run mkdir dir1 dir2 dir3
in this case. However, as you become comfortable using modifiers and the other bash history expansion tools, you can greatly expand your capabilities and productivity on the command line.
By reading this guide, you should now have a good idea of how you can leverage the history operations available to you. Some of these will probably be more useful than others, but it is good to know that bash has these capabilities in case you find yourself in a position where it would be helpful to dig them up.
If nothing else, the history
command alone, the reverse search, and the basic history expansions can do a lot to help you speed up your workflow.
Symbolic links allow you to link files and directories to other files and directories. They go by many names including symlinks, shell links, soft links, shortcuts, and aliases. From the user’s perspective, symbolic links are very similar to normal files and directories. However, when you interact with them, they will actually interact with the target at the other end. Think of them like wormholes for your file system.
This guide provides an overview of what symbolic links are and how to create them from the Linux command line using the ln
command.
To follow along with this guide, you will need access to a computer running a Linux-based operating system. This can either be a virtual private server which you’ve connected to with SSH or your local machine. Note that this tutorial was validated using a Linux server running Ubuntu 20.04, but the examples given should work on a computer running any version of any Linux distribution.
If you plan to use a remote server to follow this guide, we encourage you to first complete our Initial Server Setup guide. Doing so will set you up with a secure server environment — including a non-root user with sudo
privileges and a firewall configured with UFW — which you can use to build your Linux skills.
The system call necessary to create symbolic links tends to be readily available on Unix-like and POSIX-compliant operating systems. The command we’ll be using to create the links is the ln
command.
You’re welcome to use existing files on your system to practice making symbolic links, but this section provides a few commands that will set up a practice environment you can use to follow along with this guide’s examples.
Begin by creating a couple directories within the /tmp/
directory. /tmp/
is a temporary directory, meaning that any files and directories within it will be deleted the next time the server boots up. This will be useful for the purposes of this guide, since you can create as many directories, files, and links as you’d like without having to worry about them clogging up your system later on.
The following mkdir
command creates three directories at once. It creates a directory named symlinks/
within the /tmp/
directory, and two directories (one named one/
and another named two/
) within symlinks/
:
- mkdir -p /tmp/symlinks/{one,two}
Navigate into the new symlinks/
directory:
- cd /tmp/symlinks
From there, create a couple of sample files, one for each of the subdirectories within symlinks/
. The following command creates a file named one.txt
within the one/
subdirectory whose only contents are a single line reading one
:
- echo "one" > ./one/one.txt
Similarly, this command creates a file named two.txt
within the two/
subdirectory whose only contents are a single line reading two
:
- echo "two" > ./two/two.txt
If you were to run tree
at this point to display the contents of the entire /tmp/symlinks
directory and any nested subdirectories, its output would look like this:
- tree
Output.
├── one
│ └── one.txt
└── two
└── two.txt
2 directories, 2 files
Note: If tree
isn’t installed on your machine by default, you can install it using your system’s package manager. On Ubuntu, for example, you can install it with apt
:
- sudo apt install tree
With these sample documents in place, you’re ready to practice making symbolic links.
By default, the ln
command will make hard links instead of symbolic, or soft, links.
Say you have a text file. If you make a symbolic link to that file, the link is only a pointer to the original file. If you delete the original file, the link will be broken as it no longer has anything to point to.
A hard link is instead an additional name for the same underlying file data: rather than pointing to the original file’s path, it points directly to the same contents on disk. Like symbolic links, if you edit the contents of the original file those changes will be reflected in the hard link. If you delete the original file, though, the hard link will still work, and you can view and edit it as you would the original file.
Hard links serve their purpose in the world, but they should be avoided entirely in some cases. For instance, you should avoid using hard links when linking inside of a git
repository as they can cause confusion.
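If you’d like to see the difference for yourself, a minimal experiment is to create a throwaway file and a hard link to it, then compare inode numbers with ls -i. Identical inode numbers mean both names point at the same data:
- echo "hello" > /tmp/original.txt
- ln /tmp/original.txt /tmp/hardcopy.txt
- ls -li /tmp/original.txt /tmp/hardcopy.txt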
To ensure that you’re creating symbolic links, you can pass the -s
or --symbolic
option to the ln
command.
Note: Because symbolic links are typically used more frequently than hard links, some may find it beneficial to alias ln
to ln -s
:
- alias ln="ln -s"
This may save only a few keystrokes, but if you find yourself making a lot of symbolic links this could add up significantly.
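If you decide to keep the alias, appending it to your ~/.bashrc file will make it persist across sessions:
- echo 'alias ln="ln -s"' >> ~/.bashrc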
As mentioned previously, symbolic linking is essentially like creating a file that contains the target’s filename and path. Because a symbolic link is just a reference to the original file, any changes that are made to the original will be immediately available in the symbolic link.
One potential use for symbolic links is to create local directories in a user’s home directory pointing to files being synchronized to an external application, like Dropbox. Another might be to create a symbolic link that points to the latest build of a project that resides in a dynamically-named directory.
Using the example files and directories from the first section, go ahead and try creating a symbolic link named three
that points to the one
directory you created previously:
- ln -s one three
Now you should have three directories, one of which points back to another. To get a more detailed overview of the current directory structure, you can use the ls
command to print the contents of the current working directory:
- ls
Outputone three two
There are now three entries within the symlinks/
directory. Depending on your system, the output may signify that three
is in fact a symbolic link. This is sometimes done by rendering the name of the link in a different color, or by appending an @
symbol to it.
For even greater detail, you can pass the -l
argument to ls
to determine where the symbolic link is actually pointing:
- ls -l
Outputtotal 8
drwxrwxr-x 2 sammy sammy 4096 Oct 30 19:51 one
lrwxrwxrwx 1 sammy sammy 3 Oct 30 19:55 three -> one
drwxrwxr-x 2 sammy sammy 4096 Oct 30 19:51 two
Notice that the three
link is pointing to the one
directory as expected. Also, it begins with an l
, which indicates it’s a link. The other two begin with d
, meaning that they are regular directories.
Symbolic links can also point to other symbolic links. As an example, link the one.txt
file from three
to the two
directory:
- ln -s three/one.txt two/one.txt
You should now have a file named one.txt
inside of the two
directory. You can check with the following ls
command:
- ls -l two/
Outputtotal 4
lrwxrwxrwx 1 sammy sammy 13 Oct 30 19:58 one.txt -> three/one.txt
-rw-rw-r-- 1 sammy sammy 4 Oct 30 19:51 two.txt
Depending on your terminal configuration, the link (highlighted in this example output) may be rendered in red text, indicating a broken link. Although the link was created, the way this example specified the path was relative. The link is broken because the two
directory doesn’t contain a three
directory with the one.txt
file in it.
Fortunately, you can remedy this situation by telling ln
to create the symbolic link relative to the link location using the -r
or --relative
argument.
Even with the -r
flag, though, you won’t be able to fix the broken symbolic link. The reason for this is the symbolic link already exists, and you won’t be able to overwrite it without including the -f
or --force
argument as well:
- ln -srf three/one.txt two/one.txt
With that, you now have two/one.txt
which was linked to three/one.txt
which is a link to one/one.txt
.
Nesting symbolic links like this can quickly get confusing, but many applications are equipped to make such linking structures more understandable. For instance, if you were to run the tree
command, the link target being shown is actually that of the original file location and not the link itself:
- tree
Output.
├── one
│ └── one.txt
├── three -> one
└── two
├── one.txt -> ../one/one.txt
└── two.txt
3 directories, 3 files
Now that things are linked up nicely, you can begin exploring how symbolic links work with files by altering the contents of these sample files.
To get a sense of what your files contain, run the following cat
command to print the contents of the one.txt
file in each of the three directories you’ve created in this guide:
- cat {one,two,three}/one.txt
Outputone
one
one
Next, update the contents of the original one.txt
file from the one/
directory:
- echo "1. One" > one/one.txt
Then check the contents of each file again:
- cat {one,two,three}/one.txt
Output1. One
1. One
1. One
As this output indicates, any changes you make to the original file will be reflected in any of its symbolic links.
Now try out the reverse. Run the following command to change the contents of one of the symbolic links. This example changes the contents of the one.txt
file within the three/
directory:
- echo "One and done" > three/one.txt
Then check the contents of each file once again:
- cat {one,two,three}/one.txt
OutputOne and done
One and done
One and done
Because the symbolic link you changed is just a pointer to another file, any change you make to the link will be immediately reflected in the original file as well as any of its other symbolic links.
Symbolic links can be incredibly useful, but they do have certain limitations. Keep in mind that if you move or delete the original file or directory, all of your existing symbolic links pointing to it will break. There’s no automatic updating in that scenario. As long as you’re careful, though, you can find many uses for symbolic links as you continue working with the command line.
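If you ever suspect that links have broken this way, GNU find can report dangling symbolic links with its -xtype l test:
- find /tmp/symlinks -xtype l
Any paths this prints are links whose targets no longer exist.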
SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a Debian server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
In this guide, we’ll focus on setting up SSH keys for a vanilla Debian 11 installation. SSH keys provide an easy, secure way of logging into your server and are recommended for all users.
The first step is to create a key pair on the client machine (usually your computer):
- ssh-keygen
By default, ssh-keygen
will create a 3072-bit RSA key pair, which is secure enough for most use cases (you may optionally pass in the -b 4096
flag to create a larger 4096-bit key).
After entering the command, you should see the following output:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):
Press ENTER to save the key pair into the .ssh/
subdirectory in your home directory, or specify an alternate path.
If you had previously generated an SSH key pair, you may see the following prompt:
Output/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?
Warning: If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes, as this is a destructive process that cannot be reversed.
You should then see the following prompt:
OutputEnter passphrase (empty for no passphrase):
Here you optionally may enter a secure passphrase, which is highly recommended. A passphrase adds an additional layer of security to prevent unauthorized users from logging in. To learn more about security, consult our tutorial on How To Configure SSH Key-Based Authentication on a Linux Server.
You should then see the following output:
OutputYour identification has been saved in /your_home/.ssh/id_rsa.
Your public key has been saved in /your_home/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:5E2BtTN9FHPBNoRXAB/EdjtHNYOHzTBzG5qUv7S3hyM root@debian-suricata
The key's randomart image is:
+---[RSA 3072]----+
| oo .O^XB|
| . +.BO%B|
| . = .+B+o|
| o o o . =.|
| S . . =|
| o.|
| .o|
| E o..|
| . ..|
+----[SHA256]-----+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH-key-based authentication to log in.
The quickest way to copy your public key to the Debian host is to use a utility called ssh-copy-id
. Due to its simplicity, this method is highly recommended if available. If you do not have ssh-copy-id
available to you on your client machine, you may use one of the two alternate methods provided in this section (copying via password-based SSH, or manually copying the key).
ssh-copy-id
The ssh-copy-id
tool is included by default in many operating systems, so you may have it available on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you simply need to specify the remote host that you would like to connect to and the user account that you have password SSH access to. This is the account to which your public SSH key will be copied.
The syntax is:
- ssh-copy-id username@remote_host
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER
to continue.
Next, the utility will scan your local account for the id_rsa.pub
key that we created earlier. When it finds the key, it will prompt you for the password of the remote user’s account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Type in the password (your typing will not be displayed for security purposes) and press ENTER
. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub
key into a file in the remote account’s home ~/.ssh
directory called authorized_keys
.
You should see the following output:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub
key has been uploaded to the remote account. You can continue on to Step 3.
If you do not have ssh-copy-id
available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.
We can do this by using the cat
command to read the contents of the public SSH key on our local computer and piping that through an SSH connection to the remote server.
On the other side, we can make sure that the ~/.ssh
directory exists and has the correct permissions under the account we’re using.
We can then output the content we piped over into a file called authorized_keys
within this directory. We’ll use the >>
redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying previously added keys.
The full command looks like this:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER
to continue.
Afterwards, you should be prompted to enter the remote user account password:
Outputusername@203.0.113.1's password:
After entering your password, the content of your id_rsa.pub
key will be copied to the end of the authorized_keys
file of the remote user’s account. Continue on to Step 3 if this was successful.
If you do not have password-based SSH access to your server available, you will have to complete the above process manually.
We will manually append the content of your id_rsa.pub
file to the ~/.ssh/authorized_keys
file on your remote machine.
To display the content of your id_rsa.pub
key, type this into your local computer:
- cat ~/.ssh/id_rsa.pub
You will see the key’s content, which should look something like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgkLJ8d2gGEJCN7xdyVaDqk8qgeZqQ0MlfoPK3TzWI5dkG0WiZ16jrkiW/h6lhO9K1w89VDMnmNN9ULOWHrZMNs//Qyv/oN+FLIgK2CkKXRxTmbh/ZGnqOm3Zo2eU+QAmjb8hSsstQ3DiuGu8tbiWmsa3k3jKbWNWpXqY3Q88t+bM1DZrHwYzaIZ1BSA1ghqHCvIZqeP9IUL2l2DUfSCT9LXJEgMQhgjakJnzEGPgd5VHMR32rVrbIbbDzlyyoZ7SpCe5y0vYvbV2JKWI/8SEOmwehEHJ9RBZmciwc+1sdEcAJVMDujb9p5rX4hyvFpG0KGhZesB+/s7PdOa8zlIg4TZhXUHl4t1jpPC83Y9KEwS/Ni4dhaxlnr3T6l5hUX2cD+eWl1vVpogBqKNGBMrVR4dWs3Z4BVUf9exqTRRYOfgo0UckULqW5pmLW07JUuGo1kpFAxpDBPFWoPsg08CGRdEUS7ScRnMK1KdcH54kUZr0O88SZOsv9Zily/A5GyNM= demo@test
Access your remote host using whichever method you have available.
Once you have access to your account on the remote server, you should make sure the ~/.ssh
directory exists. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys
file within this directory. You can add the contents of your id_rsa.pub
file to the end of the authorized_keys
file, creating it if necessary, using this command:
- echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute the public_key_string
with the output from the cat ~/.ssh/id_rsa.pub
command that you executed on your local system. It should start with ssh-rsa AAAA...
.
Finally, we’ll ensure that the ~/.ssh
directory and authorized_keys
file have the appropriate permissions set:
- chmod -R go= ~/.ssh
This recursively removes all “group” and “other” permissions for the ~/.ssh/
directory.
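You can verify the result with a long listing; the output should begin with drwx------, indicating that only the owner has access:
- ls -ld ~/.ssh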
If you’re using the root
account to set up keys for a user account, it’s also important that the ~/.ssh
directory belongs to the user and not to root
:
- chown -R sammy:sammy ~/.ssh
In this tutorial, our user is named sammy, but you should substitute the appropriate username into the above command.
You can now attempt passwordless authentication with your Debian server.
If you have successfully completed one of the procedures above, you should be able to log into the remote host without the remote account’s password.
The general process is the same:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Type “yes” and then press ENTER
to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created the key, you will be prompted to enter it now (note that your keystrokes will not display in the terminal session for security). After authenticating, a new shell session should open for you with the configured account on the Debian server.
If key-based authentication was successful, continue on to learn how to further secure your system by disabling password authentication.
If you were able to log into your account using SSH without a password, you have successfully configured SSH-key-based authentication to your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH-key-based authentication configured for the root account on this server, or preferably, that you have SSH-key-based authentication configured for a non-root account on this server with sudo
privileges. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is crucial.
Once you’ve confirmed that your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo
privileges. Then, open up the SSH daemon’s configuration file:
- sudo nano /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication
. This may be commented out. Uncomment the line and set the value to “no”. This will disable your ability to log in via SSH using account passwords:
...
PasswordAuthentication no
...
Save and close the file when you are finished by pressing CTRL
+ X
, then Y
to confirm saving the file, and finally ENTER
to exit nano. To actually implement these changes, we need to restart the sshd
service:
- sudo systemctl restart ssh
As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing this session:
- ssh username@remote_host
Once you have verified your SSH service, you can safely close all current server sessions.
The SSH daemon on your Debian server now only responds to SSH keys. Password-based authentication has successfully been disabled.
You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.
If you’d like to learn more about working with SSH, take a look at our SSH Essentials Guide.
I’m trying to increase my system’s storage. I use cPanel, where I have many websites, and I need more storage. I know it is not possible to merge vda and sda, but it is possible to increase the available space with LVM.
I use CentOS 8.
Error screenshot: https://prnt.sc/1vue9tq
After asking for support on GitHub, they advised me: the best method is by using NGINX as per the instructions here: https://proxy-bay.net/setup.html. Instead of using TPB as the domain, you can use any other website you wish to unblock. You can modify the subs_filter setting in the configuration to search and replace the content served by the site.
As for a hosting plan, you would need a VPS/Dedicated/Cloud Linux server that provides SSH access. I just bought a server on DigitalOcean and successfully installed Nginx.
https://proxy-bay.dev/setup.html#Nginx
What changes do I make in the config file? Can you please provide me with a server configuration?
I placed the nginx config file as-is; currently I am using the IP address instead of the domain name.
FTP, which is short for File Transfer Protocol, is a network protocol that was once widely used for moving files between a client and server. FTP is still used to support legacy applications and workflows with very specific needs. If you have a choice of protocol, consider modern options that are more efficient, secure, and convenient for delivering files. For example, Internet users can download directly from their web browser with https
, and command line users can use secure protocols such as scp
or SFTP.
vsftpd, the very secure FTP daemon, is an FTP server for many Unix-like systems, including Linux, and is often the default FTP server for many Linux distributions. vsftpd is beneficial for optimizing security, performance, and stability. It also provides strong protection against security problems found in other FTP servers. vsftpd can handle virtual IP configurations, encryption support with SSL integration, and more.
In this tutorial, you’ll configure vsftpd to allow a user to upload files to their home directory using FTP with login credentials secured by SSL/TLS. You’ll also connect your server using FileZilla, an open-source FTP client, to test the TLS encryption.
To follow along with this tutorial you will need:
The first thing you need is an Ubuntu 20.04 server, a non-root user with sudo privileges, and an enabled firewall. You can learn more about how to do this in our Initial Server Setup with Ubuntu 20.04 guide.
The second thing you need is FileZilla, an open-source FTP client, installed and configured on your local machine. This will allow you to test whether the client can connect to your server over TLS. You can find instructions for installing FileZilla on Debian and Ubuntu systems from this tutorial, along with links to instructions for installing it on other systems.
Start by updating your package list:
- sudo apt update
Next, install the vsftpd
daemon:
- sudo apt install vsftpd
When the installation is complete, copy the configuration file so you can start with a blank configuration, while also saving the original as a backup:
- sudo cp /etc/vsftpd.conf /etc/vsftpd.conf.orig
With a backup of the configuration in place, you’re ready to configure the firewall.
First, check the firewall status to see if it’s enabled. If it is, then you’ll make adjustments to ensure that FTP traffic is permitted so firewall rules don’t block the tests.
Check the firewall status:
- sudo ufw status
This output reveals that the firewall is active and only SSH is allowed through:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
You may have other rules in place or no firewall rules at all. Since only SSH traffic is permitted, you’ll need to add rules for FTP traffic.
Start by opening ports 20
, 21
, and 990
so they’re ready when you enable TLS:
- sudo ufw allow 20,21,990/tcp
Next, open ports 40000-50000
for the range of passive ports you will be setting in the configuration file:
- sudo ufw allow 40000:50000/tcp
Check the status of your firewall:
- sudo ufw status
The output of your firewall rules should now appear as the following:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
20,21,990/tcp ALLOW Anywhere
40000:50000/tcp ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
20,21,990/tcp (v6) ALLOW Anywhere (v6)
40000:50000/tcp (v6) ALLOW Anywhere (v6)
With vsftpd
installed and the necessary ports open, now it’s time to create a dedicated FTP user.
In this step, you will create a dedicated FTP user. However, you may already have a user in need of FTP access. This guide outlines how to preserve an existing user’s access to their data, but, even so, we recommend that you start with a new dedicated FTP user until you’ve configured and tested your setup before reconfiguring any existing users.
Start by adding a test user:
- sudo adduser sammy
Assign a password when prompted. Feel free to press ENTER
to skip through the following prompts, as those details aren’t important for the purposes of this step.
FTP is generally more secure when users are restricted to a specific directory. vsftpd
accomplishes this with chroot
jails. When chroot
is enabled for local users, they are restricted to their home directory by default. Since vsftpd
secures the directory in a specific way, it must not be writable by the user. This is fine for a new user who should only connect via FTP, but an existing user may need to write to their home folder if they also have shell access.
In this example, rather than removing write privileges from the home directory, create an ftp
directory to serve as the chroot
and a writable files
directory to hold the actual files.
Create the ftp
folder:
- sudo mkdir /home/sammy/ftp
Set its ownership:
- sudo chown nobody:nogroup /home/sammy/ftp
Remove write permissions:
- sudo chmod a-w /home/sammy/ftp
Verify the permissions:
- sudo ls -la /home/sammy/ftp
Outputtotal 8
dr-xr-xr-x 2 nobody nogroup 4096 Sep 14 20:28 .
drwxr-xr-x 3 sammy sammy 4096 Sep 14 20:28 ..
Next, create the directory for file uploads:
- sudo mkdir /home/sammy/ftp/files
Then assign ownership to the user:
- sudo chown sammy:sammy /home/sammy/ftp/files
A permissions check on the ftp
directory should return the following output:
- sudo ls -la /home/sammy/ftp
Outputtotal 12
dr-xr-xr-x 3 nobody nogroup 4096 Sep 14 20:30 .
drwxr-xr-x 3 sammy sammy 4096 Sep 14 20:28 ..
drwxr-xr-x 2 sammy sammy 4096 Sep 14 20:30 files
Finally, add a test.txt
file to use for testing:
- echo "vsftpd test file" | sudo tee /home/sammy/ftp/files/test.txt
Outputvsftpd test file
Now that you’ve secured the ftp
directory and allowed the user access to the files
directory, next you will modify our configuration.
In this step, you will allow a single user with a local shell account to connect with FTP. The two key settings for this are already set in vsftpd.conf
. Open this file using your preferred text editor. Here, we’ll use nano
:
- sudo nano /etc/vsftpd.conf
Once you’ve opened the file, confirm that the anonymous_enable
directive is set to NO
and the local_enable
directive is set to YES
:
. . .
# Allow anonymous FTP? (Disabled by default).
anonymous_enable=NO
#
# Uncomment this to allow local users to log in.
local_enable=YES
. . .
These settings prevent anonymous logins and permit local logins, respectively. Keep in mind that enabling local logins means that any normal user listed in the /etc/passwd
file can be used to log in.
Some FTP commands allow users to add, change, or remove files and directories on the filesystem. Enable these commands by uncommenting the write_enable
setting. You can do this by removing the pound sign (#
) preceding this directive:
. . .
write_enable=YES
. . .
Uncomment the chroot_local_user
setting to prevent the FTP-connected user from accessing any files or commands outside the directory tree:
. . .
chroot_local_user=YES
. . .
Next, add a user_sub_token
directive whose value is the $USER
environment variable. Then add a local_root
directive and set it to the path shown, which also includes the $USER
environment variable. This setup ensures that the configuration will allow for this user and future users to be routed to the appropriate user’s home directory when logging in. Add these settings anywhere in the file:
. . .
user_sub_token=$USER
local_root=/home/$USER/ftp
Limit the range of ports that can be used for passive FTP to ensure enough connections are available:
. . .
pasv_min_port=40000
pasv_max_port=50000
Note: In Step 2, you opened the ports that are set here for the passive port range. If you change these values, be sure to update your firewall settings.
To allow FTP access on a case-by-case basis, set the configuration so that users have access only when they are explicitly added to a list, rather than by default:
. . .
userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO
userlist_deny
toggles the logic: when it is set to YES
, users on the list are denied FTP access; when it is set to NO
, only users on the list are allowed access.
When you’re done making the changes, save the file and exit the editor. If you used nano
to edit the file, you can do so by pressing CTRL + X
, Y
, then ENTER
.
Finally, add your user to /etc/vsftpd.userlist
. Use the -a
flag to append to the file:
- echo "sammy" | sudo tee -a /etc/vsftpd.userlist
Check that it was added as you expected:
- cat /etc/vsftpd.userlist
Outputsammy
Restart the daemon to load the configuration changes:
- sudo systemctl restart vsftpd
With the configuration in place, now you can test FTP access.
We’ve configured the server to allow only the user sammy to connect via FTP. Now we will make sure that this works as expected.
Since you’ve disabled anonymous access, you can test it by trying to connect anonymously. If the configuration is set up properly, anonymous users should be denied permission. Open another terminal window and run the following command. Be sure to replace 203.0.113.0
with your server’s public IP address:
- ftp -p 203.0.113.0
When prompted for a username, try logging in as a nonexistent user such as anonymous and you will receive the following output:
OutputConnected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): anonymous
530 Permission denied.
ftp: Login failed.
ftp>
Close the connection:
- bye
Users other than sammy should also fail to connect. Try connecting as your sudo user. They should also be denied access, and it should happen before they’re allowed to enter their password:
- ftp -p 203.0.113.0
OutputConnected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): sudo_user
530 Permission denied.
ftp: Login failed.
ftp>
Close the connection:
- bye
The user sammy, on the other hand, should be able to connect, read, and write files. Make sure that your designated FTP user can connect:
- ftp -p 203.0.113.0
OutputConnected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): sammy
331 Please specify the password.
Password: your_user's_password
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>
Now change into the files
directory:
- cd files
Output250 Directory successfully changed.
Next, run get
to transfer the test file you created earlier to your local machine:
- get test.txt
Output227 Entering Passive Mode (203,0,113,0,169,12).
150 Opening BINARY mode data connection for test.txt (17 bytes).
226 Transfer complete.
17 bytes received in 0.00 secs (4.5496 kB/s)
ftp>
Next, upload the file with a new name to test write permissions:
- put test.txt upload.txt
Output227 Entering Passive Mode (203,0,113,0,164,71).
150 Ok to send data.
226 Transfer complete.
17 bytes sent in 0.00 secs (5.3227 kB/s)
Close the connection:
- bye
Now that you’ve tested your configuration, next you’ll take steps to further secure your server.
Since FTP does not encrypt any data in transit, including user credentials, you can enable TLS/SSL to provide that encryption. The first step is to create the SSL certificates for use with vsftpd
.
Use openssl
to create a new certificate and use the -days
flag to make it valid for one year. In the same command, add a private 2048-bit RSA key. By setting both the -keyout
and -out
flags to the same value, the private key and the certificate will be located in the same file:
- sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem
You’ll be prompted to provide address information for your certificate. Substitute your own information for the highlighted values:
OutputGenerating a 2048 bit RSA private key
............................................................................+++
...........+++
writing new private key to '/etc/ssl/private/vsftpd.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:NY
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:DigitalOcean
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []: your_server_ip
Email Address []:
For more detailed information about the certificate flags, read OpenSSL Essentials: Working with SSL Certificates, Private Keys and CSRs.
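If you’d like to verify the certificate you just generated, you can print its subject and validity period with openssl:
- sudo openssl x509 -in /etc/ssl/private/vsftpd.pem -noout -subject -dates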
Once you’ve created the certificates, open the vsftpd
configuration file again:
- sudo nano /etc/vsftpd.conf
Toward the bottom of the file, there will be two lines that begin with rsa_
. Comment them out by preceding each line with a pound sign (#
):
. . .
# rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
# rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
. . .
After those lines, add the following lines that point to the certificate and private key you created:
. . .
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
. . .
Now you will force the use of SSL, which will prevent clients that can’t handle TLS from connecting. This is necessary to ensure that all traffic is encrypted, but it may force your FTP user to change clients. Change ssl_enable
to YES
:
. . .
ssl_enable=YES
. . .
Next, add the following lines to explicitly deny anonymous connections over SSL and require SSL for both data transfer and logins:
. . .
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
. . .
Then configure the server to use TLS, the preferred successor to SSL, by adding the following lines:
. . .
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
. . .
Lastly, add two final options. The first disables the requirement that data connections reuse the control connection’s SSL session, since that requirement can break many FTP clients. The second requires “high” encryption cipher suites, which currently means key lengths equal to or greater than 128 bits:
. . .
require_ssl_reuse=NO
ssl_ciphers=HIGH
. . .
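If you’re curious which cipher suites the HIGH keyword allows on your server, you can list them with openssl:
- openssl ciphers -v 'HIGH'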
Here is how this section of the file should appear after all of these changes have been made:
# This option specifies the location of the RSA certificate to use for SSL
# encrypted connections.
#rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
require_ssl_reuse=NO
ssl_ciphers=HIGH
When you’re done, save and close the file. If you used nano
, you can exit by pressing CTRL + X
, Y
, then ENTER
.
Restart the server for the changes to take effect:
- sudo systemctl restart vsftpd
At this point, you’ll no longer be able to connect with an insecure command line client. If you tried, you’d get the following message:
Outputftp -p 203.0.113.0
Connected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): sammy
530 Non-anonymous sessions must use encryption.
ftp: Login failed.
421 Service not available, remote server has closed connection
ftp>
Next, verify that you can connect using a client that supports TLS, such as FileZilla.
Most modern FTP clients can be configured to use TLS encryption. For our purposes, we will demonstrate how to connect with FileZilla because of its cross-platform support. Consult the documentation for other clients.
When you first open FileZilla, find the Site Manager icon located above the word Host, the leftmost icon on the top row, and click it.
A new window will open. Click the New Site button in the bottom right corner.
Under My Sites a new icon with the words New Site will appear. You can name it now or return later and use the Rename button.
Fill out the Host field with the name or IP address. Under the Encryption drop-down menu, select Require explicit FTP over TLS.
For Logon Type, select Ask for password, and fill in your FTP user in the User field.
Click the Connect button at the bottom of the interface. You will be asked for the user’s password. Select OK to connect.
Next, you will be presented with the server’s certificate. Once you accept it, you will be connected to your server with TLS/SSL encryption.
Double-click the files
folder and drag upload.txt
to the left to confirm that you’re able to download files.
When you’ve done that, right-click on the local copy, rename it to upload-tls.txt
, and drag it back to the server to confirm that you can upload files.
You’ve now confirmed that you can securely and successfully transfer files with SSL/TLS enabled.
If you’re unable to use TLS because of client requirements, you can still gain some security by disabling the FTP user’s ability to log in to the server any other way. One relatively straightforward way to do this is by creating a custom shell. Although this does not provide any encryption, it limits a compromised account to the files accessible by FTP.
First, open a file called ftponly
in the bin
directory:
- sudo nano /bin/ftponly
Add a message telling the user why they are unable to log in:
#!/bin/sh
echo "This account is limited to FTP access only."
Save the file and exit your editor. If you used nano
, you can exit by pressing CTRL + X
, Y
, then ENTER
.
Then, change the permissions to make the file executable:
- sudo chmod a+x /bin/ftponly
Open the list of valid shells:
- sudo nano /etc/shells
At the bottom add:
. . .
/bin/ftponly
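Alternatively, you can append the entry without opening an editor, using the same tee technique as earlier:
- echo "/bin/ftponly" | sudo tee -a /etc/shells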
Update the user’s shell with the following command:
- sudo usermod -s /bin/ftponly sammy
Now, try logging into your server as sammy:
- ssh sammy@your_server_ip
You will receive the following message:
OutputThis account is limited to FTP access only.
Connection to 203.0.113.0 closed.
This confirms that the user can no longer ssh
to the server and is limited to FTP access only. Note that if you received an error message when logging in, your server may not accept password authentication. Using password-based authentication can leave your server vulnerable to attack, which is why you may want to consider disabling it. If you’ve already configured SSH-key-based authentication, you can learn how to disable password authentication in Step 4 of this tutorial.
In this tutorial, we explained how to set up FTP for users with a local account. If you need to use an external authentication source, you might want to explore vsftpd
’s support of virtual users. This offers a rich set of options through the use of PAM, the Pluggable Authentication Modules, and is a good choice if you manage users in another system such as LDAP or Kerberos. You can also read about vsftpd features, latest releases, and updates to learn more.
Privilege separation is one of the fundamental security paradigms implemented in Linux and Unix-like operating systems. Regular users operate with limited privileges in order to reduce the scope of their influence to their own environment, and not the wider operating system.
A special user, called root, has super-user privileges. This is an administrative account without the restrictions that are present on normal users. Users can execute commands with super-user or root privileges in a number of different ways.
In this article, we will discuss how to correctly and securely obtain root privileges, with a special focus on editing the /etc/sudoers
file.
We will be completing these steps on an Ubuntu 20.04 server, but most modern Linux distributions such as Debian and CentOS should operate in a similar manner.
This guide assumes that you have already completed the initial server setup discussed here. Log into your server as a regular, non-root user and continue below.
Note: This tutorial goes into depth about privilege escalation and the sudoers
file. If you just want to add sudo
privileges to a user, check out our How To Create a New Sudo-enabled User quickstart tutorials for Ubuntu and CentOS.
There are three basic ways to obtain root privileges, which vary in their level of sophistication.
The simplest and most straightforward method of obtaining root privileges is to directly log into your server as the root user.
If you are logging into a local machine (or using an out-of-band console feature on a virtual server), enter root
as your username at the login prompt and enter the root password when asked.
If you are logging in through SSH, specify the root user prior to the IP address or domain name in your SSH connection string:
- ssh root@server_domain_or_ip
If you have not set up SSH keys for the root user, enter the root password when prompted.
su to Become Root
Logging in directly as root is usually not recommended, because it is easy to begin using the system for non-administrative tasks, which is dangerous.
The next way to gain super-user privileges allows you to become the root user at any time, as you need it.
We can do this by invoking the su
command, which stands for “substitute user”. To gain root privileges, type:
- su
You will be prompted for the root user’s password, after which, you will be dropped into a root shell session.
When you have finished the tasks which require root privileges, return to your normal shell by typing:
- exit
sudo to Execute Commands as Root
The final way of obtaining root privileges that we will discuss is with the sudo
command.
The sudo
command allows you to execute one-off commands with root privileges, without the need to spawn a new shell. It is executed like this:
- sudo command_to_execute
Unlike su
, the sudo
command will request the password of the current user, not the root password.
Because of its security implications, sudo
access is not granted to users by default, and must be set up before it functions correctly. Check out our How To Create a New Sudo-enabled User quickstart tutorials for Ubuntu and CentOS to learn how to set up a sudo
-enabled user.
In the following section, we will discuss how to modify the sudo
configuration in greater detail.
The sudo
command is configured through a file located at /etc/sudoers
.
Warning: Never edit this file with a normal text editor! Always use the visudo
command instead!
Because improper syntax in the /etc/sudoers
file can leave you with a broken system where it is impossible to obtain elevated privileges, it is important to use the visudo
command to edit the file.
The visudo
command opens a text editor like normal, but it validates the syntax of the file upon saving. This prevents configuration errors from blocking sudo
operations, which may be your only way of obtaining root privileges.
Traditionally, visudo
opens the /etc/sudoers
file with the vi
text editor. Ubuntu, however, has configured visudo
to use the nano
text editor instead.
If you would like to change it back to vi
, issue the following command:
- sudo update-alternatives --config editor
OutputThere are 4 choices for the alternative editor (providing /usr/bin/editor).
Selection Path Priority Status
------------------------------------------------------------
* 0 /bin/nano 40 auto mode
1 /bin/ed -100 manual mode
2 /bin/nano 40 manual mode
3 /usr/bin/vim.basic 30 manual mode
4 /usr/bin/vim.tiny 10 manual mode
Press <enter> to keep the current choice[*], or type selection number:
Select the number that corresponds with the choice you would like to make.
On CentOS, you can change this value by adding the following line to your ~/.bashrc
:
- export EDITOR=`which name_of_editor`
Source the file to implement the changes:
- . ~/.bashrc
After you have configured visudo
, execute the command to access the /etc/sudoers
file:
- sudo visudo
You will be presented with the /etc/sudoers
file in your selected text editor.
Below is the file from Ubuntu 20.04, with comments removed. The CentOS /etc/sudoers
file has many more lines, some of which we will not discuss in this guide.
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
root ALL=(ALL:ALL) ALL
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL
#includedir /etc/sudoers.d
Let’s take a look at what these lines do.
The first line, Defaults env_reset
, resets the terminal environment to remove any user variables. This is a safety measure used to clear potentially harmful environmental variables from the sudo
session.
The second line, Defaults mail_badpass
, tells the system to mail notices of bad sudo
password attempts to the configured mailto
user. By default, this is the root account.
The third line, which begins with Defaults secure_path=...
, specifies the PATH
(the places in the filesystem the operating system will look for applications) that will be used for sudo
operations. This prevents using user paths which may be harmful.
The fourth line, which dictates the root user’s sudo
privileges, is different from the preceding lines. Let’s take a look at what the different fields mean:
root ALL=(ALL:ALL) ALL
The first field indicates the username that the rule applies to (root). The first “ALL” indicates that the rule applies to all hosts. The second “ALL” indicates that the root user can run commands as all users, and the third “ALL” indicates that the root user can run commands as all groups. The last “ALL” indicates that these rules apply to all commands.
This means that our root user can run any command using sudo
, as long as they provide their password.
The next two lines are similar to the user privilege lines, but they specify sudo
rules for groups.
Names beginning with a %
indicate group names.
Here, we see the admin group can execute any command as any user on any host. Similarly, the sudo group has the same privileges, but can execute as any group as well.
The last line might look like a comment at first glance:
. . .
#includedir /etc/sudoers.d
It does begin with a #
, which usually indicates a comment. However, this line actually indicates that files within the /etc/sudoers.d
directory will be sourced and applied as well.
Files within that directory follow the same rules as the /etc/sudoers
file itself. Any file that does not end in ~
and that does not have a .
in it will be read and appended to the sudo
configuration.
This is mainly meant for applications to alter sudo
privileges upon installation. Putting all of the associated rules within a single file in the /etc/sudoers.d
directory can make it easy to see which privileges are associated with which accounts, and to revoke those privileges without having to edit the /etc/sudoers
file directly.
As with the /etc/sudoers
file itself, you should always edit files within the /etc/sudoers.d
directory with visudo
. The syntax for editing these files would be:
- sudo visudo -f /etc/sudoers.d/file_to_edit
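For example, a minimal drop-in file granting full sudo access could be created like this. Both the filename and the user named deploy are hypothetical placeholders:
- sudo visudo -f /etc/sudoers.d/deploy
deploy ALL=(ALL:ALL) ALL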
The most common operation that users want to accomplish when managing sudo
permissions is to grant a new user general sudo
access. This is useful if you want to give an account full administrative access to the system.
The easiest way of doing this on a system set up with a general purpose administration group, like the Ubuntu system in this guide, is actually to add the user in question to that group.
For example, on Ubuntu 20.04, the sudo
group has full admin privileges. We can grant a user these same privileges by adding them to the group like this:
- sudo usermod -aG sudo username
The gpasswd
command can also be used:
- sudo gpasswd -a username sudo
These will both accomplish the same thing.
On CentOS, this is usually the wheel
group instead of the sudo
group:
- sudo usermod -aG wheel username
Or, using gpasswd
:
- sudo gpasswd -a username wheel
On CentOS, if adding the user to the group does not work immediately, you may have to edit the /etc/sudoers
file to uncomment the group name:
- sudo visudo
. . .
%wheel ALL=(ALL) ALL
. . .
Now that we have gotten familiar with the general syntax of the file, let’s create some new rules.
The sudoers
file can be organized more easily by grouping things with various kinds of “aliases”.
For instance, we can create three different groups of users, with overlapping membership:
. . .
User_Alias GROUPONE = abby, brent, carl
User_Alias GROUPTWO = brent, doris, eric
User_Alias GROUPTHREE = doris, felicia, grant
. . .
Alias names must start with a capital letter. We can then allow members of GROUPTWO
to update the apt
database by creating a rule like this:
. . .
GROUPTWO ALL = /usr/bin/apt-get update
. . .
If we do not specify a user/group to run as, as above, sudo
defaults to the root user.
We can allow members of GROUPTHREE
to shutdown and reboot the machine by creating a “command alias” and using that in a rule for GROUPTHREE
:
. . .
Cmnd_Alias POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot, /sbin/restart
GROUPTHREE ALL = POWER
. . .
We create a command alias called POWER
that contains commands to power off and reboot the machine. We then allow the members of GROUPTHREE
to execute these commands.
We can also create “Run as” aliases, which can replace the portion of the rule that specifies the user to execute the command as:
. . .
Runas_Alias WEB = www-data, apache
GROUPONE ALL = (WEB) ALL
. . .
This will allow anyone who is a member of GROUPONE
to execute commands as the www-data
user or the apache
user.
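For instance, assuming the rule above is in place, a member of GROUPONE could verify which user a command runs as like this:
- sudo -u www-data whoami
This should print www-data rather than the member’s own username.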
Just keep in mind that later rules will override earlier rules when there is a conflict between the two.
There are a number of ways that you can achieve more control over how sudo
reacts to a call.
The updatedb
command associated with the mlocate
package is relatively harmless on a single-user system. If we want to allow users to execute it with root privileges without having to type a password, we can make a rule like this:
. . .
GROUPONE ALL = NOPASSWD: /usr/bin/updatedb
. . .
NOPASSWD
is a “tag” that means no password will be requested. It has a companion tag called PASSWD
, which is the default behavior. A tag is relevant for the rest of the rule unless overruled by its “twin” tag later down the line.
For instance, we can have a line like this:
. . .
GROUPTWO ALL = NOPASSWD: /usr/bin/updatedb, PASSWD: /bin/kill
. . .
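Assuming this rule applies to your user, updatedb would then run without a password prompt while kill would still require one:
- sudo updatedb
- sudo kill process_id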
Another helpful tag is NOEXEC
, which can be used to prevent some dangerous behavior in certain programs.
For example, some programs, like less
, can spawn other commands by typing this from within their interface:
!command_to_run
This basically executes any command the user gives it with the same permissions that less
is running under, which can be quite dangerous.
To restrict this, we could use a line like this:
. . .
username ALL = NOEXEC: /usr/bin/less
. . .
There are a few more pieces of information that may be useful when dealing with sudo
.
If you specified a user or group to “run as” in the configuration file, you can execute commands as those users by using the -u
and -g
flags, respectively:
- sudo -u run_as_user command
- sudo -g run_as_group command
For convenience, by default, sudo
will save your authentication details for a certain amount of time in one terminal. This means you won’t have to type your password in again until that timer runs out.
For security purposes, if you wish to clear this timer when you are done running administrative commands, you can run:
- sudo -k
If, on the other hand, you want to “prime” the sudo
command so that you won’t be prompted later, or to renew your sudo
lease, you can always type:
- sudo -v
You will be prompted for your password, which will be cached for later sudo
uses until the sudo
time frame expires.
If you are simply wondering what kind of privileges are defined for your username, you can type:
- sudo -l
This will list all of the rules in the /etc/sudoers
file that apply to your user. This gives you a good idea of what you will or will not be allowed to do with sudo
as any user.
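If you have sufficient privileges, you can also inspect the rules that apply to a different account by adding the -U flag:
- sudo -l -U username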
There are many times when you will execute a command and it will fail because you forgot to preface it with sudo
. To avoid having to re-type the command, you can take advantage of a bash functionality that means “repeat last command”:
- sudo !!
The double exclamation point will repeat the last command. We preceded it with sudo
to quickly change the unprivileged command to a privileged command.
For some fun, you can add the following line to your /etc/sudoers
file with visudo
:
- sudo visudo
. . .
Defaults insults
. . .
This will cause sudo
to return a silly insult when a user types in an incorrect password for sudo
. We can use sudo -k
to clear the previous sudo
cached password to try it out:
- sudo -k
- sudo ls
Output[sudo] password for demo: # enter an incorrect password here to see the results
Your mind just hasn't been the same since the electro-shock, has it?
[sudo] password for demo:
My mind is going. I can feel it.
You should now have a basic understanding of how to read and modify the sudoers
file, and a grasp on the various methods that you can use to obtain root privileges.
Remember, super-user privileges are not given to regular users for a reason. It is essential that you understand what each command does that you execute with root privileges. Do not take the responsibility lightly. Learn the best way to use these tools for your use-case, and lock down any functionality that is not needed.
The grep
command is one of the most useful commands in a Linux terminal environment. The name grep
stands for “global regular expression print”. This means that you can use grep
to check whether the input it receives matches a specified pattern. This seemingly trivial program is extremely powerful; its ability to sort input based on complex rules makes it a popular link in many command chains.
In this tutorial, you will explore the grep
command’s options, and then you’ll dive into using regular expressions to do more advanced searching.
To follow along with this guide, you will need access to a computer running a Linux-based operating system. This can either be a virtual private server which you’ve connected to with SSH or your local machine. Note that this tutorial was validated using a Linux server running Ubuntu 20.04, but the examples given should work on a computer running any version of any Linux distribution.
If you plan to use a remote server to follow this guide, we encourage you to first complete our Initial Server Setup guide. Doing so will set you up with a secure server environment — including a non-root user with sudo
privileges and a firewall configured with UFW — which you can use to build your Linux skills.
In this tutorial, you’ll use grep
to search the GNU General Public License version 3 for various words and phrases.
If you’re on an Ubuntu system, you can find the file in the /usr/share/common-licenses
folder. Copy it to your home directory:
- cp /usr/share/common-licenses/GPL-3 .
If you’re on another system, use the curl
command to download a copy:
- curl -o GPL-3 https://www.gnu.org/licenses/gpl-3.0.txt
You’ll also use the BSD license file in this tutorial. On Linux, you can copy that to your home directory with the following command:
- cp /usr/share/common-licenses/BSD .
If you’re on another system, create the file with the following command:
- cat << 'EOF' > BSD
- Copyright (c) The Regents of the University of California.
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions
- are met:
- 1. Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- 2. Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in the
- documentation and/or other materials provided with the distribution.
- 3. Neither the name of the University nor the names of its contributors
- may be used to endorse or promote products derived from this software
- without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
- ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
- FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- SUCH DAMAGE.
- EOF
Now that you have the files, you can start working with grep
.
In the most basic form, you use grep
to match literal patterns within a text file. This means that if you pass grep
a word to search for, it will print out every line in the file containing that word.
Execute the following command to use grep
to search for every line that contains the word GNU
:
- grep "GNU" GPL-3
The first argument, GNU
, is the pattern you’re searching for, while the second argument, GPL-3
, is the input file you wish to search.
The resulting output will be every line containing the pattern text:
Output GNU GENERAL PUBLIC LICENSE
The GNU General Public License is a free, copyleft license for
the GNU General Public License is intended to guarantee your freedom to
GNU General Public License for most of our software; it applies also to
Developers that use the GNU GPL protect your rights with two steps:
"This License" refers to version 3 of the GNU General Public License.
13. Use with the GNU Affero General Public License.
under version 3 of the GNU Affero General Public License into a single
...
...
On some systems, the pattern you searched for will be highlighted in the output.
By default, grep
will search for the exact specified pattern within the input file and return the lines it finds. You can make this behavior more useful though by adding some optional flags to grep
.
If you want grep
to ignore the “case” of your search parameter and search for both upper- and lower-case variations, you can specify the -i
or --ignore-case
option.
Search for each instance of the word license
(with upper, lower, or mixed cases) in the same file as before with the following command:
- grep -i "license" GPL-3
The results contain: LICENSE
, license
, and License
:
Output GNU GENERAL PUBLIC LICENSE
of this license document, but changing it is not allowed.
The GNU General Public License is a free, copyleft license for
The licenses for most software and other practical works are designed
the GNU General Public License is intended to guarantee your freedom to
GNU General Public License for most of our software; it applies also to
price. Our General Public Licenses are designed to make sure that you
(1) assert copyright on the software, and (2) offer you this License
"This License" refers to version 3 of the GNU General Public License.
"The Program" refers to any copyrightable work licensed under this
...
...
If there was an instance with LiCeNsE
, that would have been returned as well.
If you want to find all lines that do not contain a specified pattern, you can use the -v
or --invert-match
option.
Search for every line that does not contain the word the
in the BSD license with the following command:
- grep -v "the" BSD
You’ll receive this output:
OutputAll rights reserved.
Redistribution and use in source and binary forms, with or without
are met:
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
...
...
Since you did not specify the “ignore case” option, the last two items were returned as not having the word the
.
It is often useful to know the line number that the matches occur on. You can do this by using the -n
or --line-number
option. Re-run the previous example with this flag added:
- grep -vn "the" BSD
This will return the following text:
Output2:All rights reserved.
3:
4:Redistribution and use in source and binary forms, with or without
6:are met:
13: may be used to endorse or promote products derived from this software
14: without specific prior written permission.
15:
16:THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
17:ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
...
...
Now you can reference the line number if you want to make changes to every line that does not contain the
. This is especially handy when working with source code.
In the introduction, you learned that grep
stands for “global regular expression print”. A “regular expression” is a text string that describes a particular search pattern.
Different applications and programming languages implement regular expressions slightly differently. In this tutorial you will only be exploring a small subset of the way that grep
describes its patterns.
In the previous examples in this tutorial, when you searched for the words GNU
and the
, you were actually searching for basic regular expressions which matched the exact string of characters GNU
and the
. Patterns that exactly specify the characters to be matched are called “literals” because they match the pattern literally, character-for-character.
It is helpful to think of these as matching a string of characters rather than matching a word. This will become a more important distinction as you learn more complex patterns.
All alphabetical and numerical characters (as well as certain other characters) are matched literally unless modified by other expression mechanisms.
Anchors are special characters that specify where in the line a match must occur to be valid.
For instance, using anchors, you can specify that you only want to know about the lines that match GNU
at the very beginning of the line. To do this, you could use the ^
anchor before the literal string.
Run the following command to search the GPL-3
file and find lines where GNU
occurs at the very beginning of a line:
- grep "^GNU" GPL-3
This command will return the following two lines:
OutputGNU General Public License for most of our software; it applies also to
GNU General Public License, you may choose any version ever published
Similarly, you use the $
anchor at the end of a pattern to indicate that the match will only be valid if it occurs at the very end of a line.
This command will match every line ending with the word and
in the GPL-3
file:
- grep "and$" GPL-3
You’ll receive this output:
Outputthat there is no warranty for this free software. For both users' and
The precise terms and conditions for copying, distribution and
License. Each licensee is addressed as "you". "Licensees" and
receive it, in any medium, provided that you conspicuously and
alternative is allowed only occasionally and noncommercially, and
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
provisionally, unless and until the copyright holder explicitly and
receives a license from the original licensors, to run, modify and
make, use, sell, offer for sale, import and otherwise run, modify and
The period character (.) is used in regular expressions to mean that any single character can exist at the specified location.
For example, to match anything in the GPL-3
file that has two characters and then the string cept
, you would use the following pattern:
- grep "..cept" GPL-3
This command returns the following output:
Outputuse, which is precisely where it is most unacceptable. Therefore, we
infringement under applicable copyright law, except executing it on a
tells the user that there is no warranty for the work (except to the
License by making exceptions from one or more of its conditions.
form of a separately written license, or stated as exceptions;
You may not propagate or modify a covered work except as expressly
9. Acceptance Not Required for Having Copies.
...
...
This output has instances of both accept
and except
and variations of the two words. The pattern would also have matched z2cept
if that was found as well.
By placing a group of characters within brackets ([
and ]
), you can specify that the character at that position can be any one character found within the bracket group.
For example, to find the lines that contain too
or two
, you would specify those variations succinctly by using the following pattern:
- grep "t[wo]o" GPL-3
The output shows that both variations exist in the file:
Outputyour programs, too.
freedoms that you received. You must make sure that they, too, receive
Developers that use the GNU GPL protect your rights with two steps:
a computer network, with no transfer of a copy, is not conveying.
System Libraries, or general-purpose tools or generally available free
Corresponding Source from a network server at no charge.
...
...
Bracket notation gives you some interesting options. You can have the pattern match anything except the characters within a bracket by beginning the list of characters within the brackets with a ^
character.
This example is like the pattern .ode
, but will not match the pattern code
:
- grep "[^c]ode" GPL-3
Here’s the output you’ll receive:
Output 1. Source Code.
model, to give anyone who possesses the object code either (1) a
the only significant mode of use of the product.
notice like this when it starts in an interactive mode:
Notice that in the second line returned, there is, in fact, the word code
. This is not a failure of the regular expression or grep. Rather, this line was returned because earlier in the line, the pattern mode
, found within the word model
, was found. The line was returned because there was an instance that matched the pattern.
Another helpful feature of brackets is that you can specify a range of characters instead of individually typing every available character.
This means that if you want to find every line that begins with a capital letter, you can use the following pattern:
- grep "^[A-Z]" GPL-3
Here’s the output this expression returns:
OutputGNU General Public License for most of our software; it applies also to
States should not allow patents to restrict development and use of
License. Each licensee is addressed as "you". "Licensees" and
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
System Libraries, or general-purpose tools or generally available free
Source.
User Product is transferred to the recipient in perpetuity or for a
...
...
Due to some legacy sorting issues, it is often more accurate to use POSIX character classes instead of character ranges like you just used.
To discuss every POSIX character class would be beyond the scope of this guide, but an example that would accomplish the same procedure as the previous example uses the [:upper:]
character class within a bracket selector:
- grep "^[[:upper:]]" GPL-3
The output will be the same as before.
Finally, one of the most commonly used meta-characters is the asterisk, or *
, which means “repeat the previous character or expression zero or more times”.
To find each line in the GPL-3
file that contains an opening and closing parenthesis, with only letters and single spaces in between, use the following expression:
- grep "([A-Za-z ]*)" GPL-3
You’ll get the following output:
Output Copyright (C) 2007 Free Software Foundation, Inc.
distribution (with or without modification), making available to the
than the work as a whole, that (a) is included in the normal form of
Component, and (b) serves only to enable use of the work with that
(if any) on which the executable work runs, or a compiler used to
(including a physical distribution medium), accompanied by the
(including a physical distribution medium), accompanied by a
place (gratis or for a charge), and offer equivalent access to the
...
...
So far you’ve used periods, asterisks, and other characters in your expressions, but sometimes you need to search for those characters specifically.
There are times where you’ll need to search for a literal period or a literal opening bracket, especially when working with source code or configuration files. Because these characters have special meaning in regular expressions, you need to “escape” these characters to tell grep
that you do not wish to use their special meaning in this case.
You escape characters by using the backslash character (\
) in front of the character that would normally have a special meaning.
For instance, to find any line that begins with a capital letter and ends with a period, use the following expression which escapes the ending period so that it represents a literal period instead of the usual “any character” meaning:
- grep "^[A-Z].*\.$" GPL-3
This is the output you’ll see:
OutputSource.
License by making exceptions from one or more of its conditions.
License would be to refrain entirely from conveying the Program.
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
SUCH DAMAGES.
Also add information on how to contact you by electronic and paper mail.
Now let’s look at other regular expression options.
The grep
command supports a more extensive regular expression language by using the -E
flag or by calling the egrep
command instead of grep
.
These options open up the capabilities of “extended regular expressions”. Extended regular expressions include all of the basic meta-characters, along with additional meta-characters to express more complex matches.
One of the most useful abilities that extended regular expressions open up is the ability to group expressions together to manipulate or reference as one unit.
To group expressions together, wrap them in parentheses. If you would like to use parentheses without using extended regular expressions, you can escape them with the backslash to enable this functionality. This means that the following three expressions are functionally equivalent:
- grep "\(grouping\)" file.txt
- grep -E "(grouping)" file.txt
- egrep "(grouping)" file.txt
Similar to how bracket expressions can specify different possible choices for single character matches, alternation allows you to specify alternative matches for strings or expression sets.
To indicate alternation, use the pipe character |
. These are often used within parenthetical grouping to specify that one of two or more possibilities should be considered a match.
The following will find either GPL
or General Public License
in the text:
- grep -E "(GPL|General Public License)" GPL-3
The output looks like this:
Output The GNU General Public License is a free, copyleft license for
the GNU General Public License is intended to guarantee your freedom to
GNU General Public License for most of our software; it applies also to
price. Our General Public Licenses are designed to make sure that you
Developers that use the GNU GPL protect your rights with two steps:
For the developers' and authors' protection, the GPL clearly explains
authors' sake, the GPL requires that modified versions be marked as
have designed this version of the GPL to prohibit the practice for those
...
...
Alternation can select between more than two choices by adding additional choices within the selection group separated by additional pipe (|
) characters.
Like the *
meta-character that matched the previous character or character set zero or more times, there are other meta-characters available in extended regular expressions that specify the number of occurrences.
To match a character zero or one times, you can use the ?
character. This makes the preceding character or character set optional, in essence.
The following matches copyright
and right
by putting copy
in an optional group:
- grep -E "(copy)?right" GPL-3
You’ll receive this output:
Output Copyright (C) 2007 Free Software Foundation, Inc.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
"Copyright" also means copyright-like laws that apply to other kinds of
...
The +
character matches an expression one or more times. This is almost like the *
meta-character, but with the +
character, the expression must match at least once.
The following expression matches the string free
plus one or more characters that are not white space characters:
- grep -E "free[^[:space:]]+" GPL-3
You’ll see this output:
Output The GNU General Public License is a free, copyleft license for
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
When we speak of free software, we are referring to freedom, not
have the freedom to distribute copies of free software (and charge for
you modify it: responsibilities to respect the freedom of others.
freedoms that you received. You must make sure that they, too, receive
protecting users' freedom to change the software. The systematic
of the GPL, as needed to protect the freedom of users.
patents cannot be used to render the program non-free.
To specify the number of times that a match is repeated, use the brace characters ({
and }
). These characters let you specify an exact number, a range, or an upper or lower bound on the number of times an expression can match.
Use the following expression to find all of the lines in the GPL-3
file that contain triple-vowels:
- grep -E "[AEIOUaeiou]{3}" GPL-3
Each line returned has a word with three vowels:
Outputchanged, so that their problems will not be attributed erroneously to
authors of previous versions.
receive it, in any medium, provided that you conspicuously and
give under the previous paragraph, plus a right to possession of the
covered work so as to satisfy simultaneously your obligations under this
To match any words that have between 16 and 20 characters, use the following expression:
- grep -E "[[:alpha:]]{16,20}" GPL-3
Here’s this command’s output:
Output certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
c) Prohibiting misrepresentation of the origin of that material, or
Only lines containing words within that length range are displayed.
grep
is useful in finding patterns within files or within the file system hierarchy, so it’s worth spending time getting comfortable with its options and syntax.
Regular expressions are even more versatile, and can be used with many popular programs. For instance, many text editors implement regular expressions for searching and replacing text.
Furthermore, most modern programming languages use regular expressions to perform procedures on specific pieces of data. Once you understand regular expressions, you’ll be able to transfer that knowledge to many common computer-related tasks, from performing advanced searches in your text editor to validating user input.
In a previous tutorial, we discussed how the ps
, kill
, and nice
commands can be used to control processes on your system. This guide highlights how bash
, the Linux system, and your terminal come together to offer process and job control.
This article will focus on managing foreground and background processes and will demonstrate how to leverage your shell’s job control functions to gain more flexibility in how you run commands.
To follow along with this guide, you will need access to a computer running the bash
shell interface. bash
is the default shell on many Linux-based operating systems, and it is available on many Unix-like operating systems, including macOS. Note that this tutorial was validated using a Linux virtual private server running Ubuntu 20.04.
If you plan to use a remote server to follow this guide, we encourage you to first complete our Initial Server Setup guide. Doing so will set you up with a secure server environment — including a non-root user with sudo
privileges and a firewall configured with UFW — which you can use to build your Linux skills.
Most processes that you start on a Linux machine will run in the foreground. The command will begin execution, blocking use of the shell for the duration of the process. The process may allow user interaction or may just run through a procedure and then exit. Any output will be displayed in the terminal window by default. We’ll discuss the basic way to manage foreground processes in the following subsections.
By default, processes are started in the foreground. This means that until the program exits or changes state, you will not be able to interact with the shell.
Some foreground commands exit very quickly and return you to a shell prompt almost immediately. For instance, the following command will print Hello World
to the terminal and then return you to your command prompt:
- echo "Hello World"
OutputHello World
Other foreground commands take longer to execute, blocking shell access for their duration. This might be because the command is performing a more extensive operation or because it is configured to run until it is explicitly stopped or until it receives other user input.
A command that runs indefinitely is the top
utility. After starting, it will continue to run and update its display until the user terminates the process:
- top
You can quit top
by pressing q
, but some other processes don’t have a dedicated quit function. To stop those, you’ll have to use another method.
Suppose you start a simple bash
loop on the command line. As an example, the following command will start a loop that prints Hello World
every ten seconds. This loop will continue forever, until explicitly terminated:
- while true; do echo "Hello World"; sleep 10; done
Unlike top
, loops like this have no “quit” key. You will have to stop the process by sending it a signal. In Linux, the kernel can send signals to running processes as a request that they exit or change states. Linux terminals are usually configured to send the “SIGINT” signal (short for “signal interrupt”) to the current foreground process when the user presses the CTRL + C
key combination. The SIGINT signal tells the program that the user has requested termination using the keyboard.
To stop the loop you’ve started, hold the CTRL
key and press the C
key:
CTRL + C
The loop will exit, returning control to the shell.
The SIGINT signal sent by the CTRL + C
combination is one of many signals that can be sent to programs. Most signals do not have keyboard combinations associated with them and must instead be sent using the kill
command, which will be covered later on in this guide.
As mentioned previously, foreground processes block access to the shell for the duration of their execution. What if you start a process in the foreground, but then realize that you need access to the terminal?
Another signal that you can send is the “SIGTSTP” signal. SIGTSTP is short for “signal terminal stop”, and is usually represented as signal number 20. When you press CTRL + Z
, your terminal registers a “suspend” command, which then sends the SIGTSTP signal to the foreground process. Essentially, this will pause the execution of the command and return control to the terminal.
To illustrate, use ping
to connect to google.com
every 5 seconds. The following command precedes the ping
command with command
, which will allow you to bypass any shell aliases that artificially set a maximum count on the command:
- command ping -i 5 google.com
Instead of terminating the command with CTRL + C
, press CTRL + Z
instead. Doing so will return output like this:
Output[1]+ Stopped ping -i 5 google.com
The ping
command has been temporarily stopped, giving you access to a shell prompt again. You can use the ps
process tool to show this:
- ps T
Output PID TTY STAT TIME COMMAND
26904 pts/3 Ss 0:00 /bin/bash
29633 pts/3 T 0:00 ping -i 5 google.com
29643 pts/3 R+ 0:00 ps T
This output indicates that the ping
process is still listed, but that the “STAT” column has a “T” in it. Per the ps
man page, this means the job has been “stopped by [a] job control signal”.
This guide will outline how to change process states in greater depth, but for now you can resume execution of the command in the foreground again by typing:
- fg
Once the process has resumed, terminate it with CTRL + C
.
The main alternative to running a process in the foreground is to allow it to execute in the background. A background process is associated with the specific terminal that started it, but does not block access to the shell. Instead, it executes in the background, leaving the user able to interact with the system while the command runs.
Because of the way that a foreground process interacts with its terminal, there can be only a single foreground process for every terminal window. Because background processes return control to the shell immediately without waiting for the process to complete, many background processes can run at the same time.
You can start a background process by appending an ampersand character (&
) to the end of your commands. This tells the shell not to wait for the process to complete, but instead to begin execution and to immediately return the user to a prompt. The output of the command will still display in the terminal (unless redirected), but you can type additional commands as the background process continues.
For instance, you can start the same ping
process from the previous section in the background by typing:
- command ping -i 5 google.com &
The bash
job control system will return output like this:
Output[1] 4287
You’ll then receive the normal output from the ping
command:
OutputPING google.com (74.125.226.71) 56(84) bytes of data.
64 bytes from lga15s44-in-f7.1e100.net (74.125.226.71): icmp_seq=1 ttl=55 time=12.3 ms
64 bytes from lga15s44-in-f7.1e100.net (74.125.226.71): icmp_seq=2 ttl=55 time=11.1 ms
64 bytes from lga15s44-in-f7.1e100.net (74.125.226.71): icmp_seq=3 ttl=55 time=9.98 ms
However, you can also type commands at the same time. The background process’s output will be mixed among the input and output of your foreground processes, but it will not interfere with the execution of the foreground processes.
To list all stopped or backgrounded processes, you can use the jobs
command:
- jobs
If you still have the previous ping
command running in the background, the jobs
command’s output will be similar to this:
Output[1]+ Running command ping -i 5 google.com &
This indicates that you currently have a single background process running. The [1]
represents the command’s job spec or job number. You can reference this with other job and process control commands, like kill
, fg
, and bg
by preceding the job number with a percentage sign. In this case, you’d reference this job as %1
.
You can stop the current background process in a few ways. The most straightforward way is to use the kill
command with the associated job number. For instance, you can kill your running background process by typing:
- kill %1
Depending on how your terminal is configured, either immediately or the next time you hit ENTER
, the job termination status will appear in your output:
Output[1]+ Terminated command ping -i 5 google.com
If you check the jobs
command again, there won’t be any current jobs.
Now that you know how to start and stop processes in the background, you can learn about changing their state.
This guide already outlined one way to change a process’s state: stopping or suspending a process with CTRL + Z
. When processes are in this stopped state, you can move a foreground process to the background or vice versa.
If you forget to end a command with &
when you start it, you can still move the process to the background.
The first step is to stop the process with CTRL + Z
again. Once the process is stopped, you can use the bg
command to start it again in the background:
- bg
You will receive the job status line again, this time with the ampersand appended:
Output[1]+ ping -i 5 google.com &
By default, the bg
command operates on the most recently-stopped process. If you’ve stopped multiple processes in a row without starting them again, you can reference a specific process by its job number to move the correct process to the background.
Note that not all commands can be backgrounded. Some processes will automatically terminate if they detect that they have been started with their standard input and output directly connected to an active terminal.
You can also move background processes to the foreground by typing fg
:
- fg
This operates on your most recently backgrounded process (indicated by the +
in the jobs
command’s output). It immediately suspends the process and puts it into the foreground. To specify a different job, use its job number:
- fg %2
Once a job is in the foreground, you can kill it with CTRL + C
, let it complete, or suspend and move it to the background again.
Whether a process is in the background or in the foreground, it is rather tightly tied with the terminal instance that started it. When a terminal closes, it typically sends a SIGHUP signal to all of the processes (foreground, background, or stopped) that are tied to the terminal. This signals for the processes to terminate because their controlling terminal will shortly be unavailable.
There may be times, though, when you want to close a terminal but keep the background processes running. There are a number of ways of accomplishing this. One of the more flexible ways is to use a terminal multiplexer like screen
or tmux
. Another solution is to use a utility that provides the detach functionality of screen
and tmux
, like dtach
.
However, this isn’t always an option. Sometimes these programs aren’t available or you’ve already started the process you need to continue running. Sometimes these could even be overkill for what you need to accomplish.
Using nohup
If you know when starting the process that you will want to close the terminal before the process completes, you can start it using the nohup
command. This makes the started process immune to the SIGHUP signal. It will continue running when the terminal closes and will be reassigned as a child of the init system:
- nohup ping -i 5 google.com &
This will return a line like the following, indicating that the output of the command will be written to a file called nohup.out
:
Outputnohup: ignoring input and appending output to ‘nohup.out’
This file will be placed in your current working directory if writeable, but otherwise it will be placed in your home directory. This is to ensure that output is not lost if the terminal window is closed.
If you close the terminal window and open another one, the process will still be running. You will not find it in the output of the jobs
command because each terminal instance maintains its own independent job queue. Closing the terminal will cause the ping
job to be destroyed even though the ping
process is still running.
To kill the ping
process, you’ll have to find its process ID (or “PID”). You can do that with the pgrep
command (there is also a pkill
command, but this two-part method ensures that you are only killing the intended process). Use pgrep
and the -a
flag to search for the executable:
- pgrep -a ping
Output7360 ping -i 5 google.com
You can then kill the process by referencing the returned PID, which is the number in the first column:
- kill 7360
You may wish to remove the nohup.out
file if you don’t need it anymore.
disown
The nohup
command is helpful, but only if you know you will need it at the time you start the process. The bash
job control system provides other methods of achieving similar results with the built-in disown
command.
The disown
command, in its default configuration, removes a job from the jobs queue of a terminal. This means that it can no longer be managed using the job control mechanisms discussed previously in this guide, like fg
, bg
, CTRL + Z
, CTRL + C
. Instead, the job will immediately be removed from the list in the jobs
output and no longer associated with the terminal.
The command is called by specifying a job number. For instance, to immediately disown job 2, you could type:
- disown %2
This leaves the process in a state not unlike that of a nohup
process after the controlling terminal has been closed. The exception is that any output will be lost when the controlling terminal closes if it is not being redirected to a file.
Usually, you don’t want to remove the process completely from job control if you aren’t immediately closing your terminal window. You can pass the -h
flag to the disown
process instead in order to mark the process to ignore SIGHUP signals, but to otherwise continue on as a regular job:
- disown -h %1
In this state, you could use normal job control mechanisms to continue controlling the process until closing the terminal. Upon closing the terminal, you will, once again, be stuck with a process with nowhere to output if you didn’t redirect to a file when starting it.
To work around that, you can try to redirect the output of your process after it is already running. This is outside the scope of this guide, but this post provides an explanation of how you could do that.
huponexit Shell Option
bash
has another way of avoiding the SIGHUP problem for child processes. The huponexit
shell option controls whether bash
will send its child processes the SIGHUP signal when it exits.
Note: The huponexit
option only affects the SIGHUP behavior when a shell session’s termination is initiated from within the shell itself, such as by running the exit
command or pressing CTRL + D
within the session.
When a shell session is ended through the terminal program itself (by closing the window, etc.), the huponexit
option will have no effect. Instead of bash
deciding on whether to send the SIGHUP signal, the terminal itself will send the SIGHUP signal to bash
, which will then correctly propagate the signal to its child processes.
Despite the aforementioned caveats, the huponexit
option is perhaps one of the easiest to manage. You can determine whether this feature is on or off by typing:
- shopt huponexit
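On most systems this option is off by default, so you will likely see output like the following:
Outputhuponexit      off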
Because bash only sends SIGHUP on exit when this option is set, you want it turned off so that your processes survive. To turn it off, type:
- shopt -u huponexit
Now, if you exit your session by typing exit
, your processes will all continue to run:
- exit
This has the same caveats about program output as the last option, so make sure you have redirected your processes’ output prior to closing your terminal if this is important.
Learning job control and how to manage foreground and background processes will give you greater flexibility when running programs on the command line. Instead of having to open up many terminal windows or SSH sessions, you can often get by with stopping processes early or moving them to the background as needed.
SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a Rocky Linux server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
In this guide, we’ll focus on setting up SSH keys for a Rocky Linux 8 server. SSH keys provide a straightforward, secure method of logging into your server and are recommended for all users.
The first step is to create a key pair on the client machine (usually your local computer):
- ssh-keygen
By default, ssh-keygen
will create a 2048-bit RSA key pair, which is secure enough for most use cases (you may optionally pass in the -b 4096
flag to create a larger 4096-bit key).
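If you do want the larger key, the full command would simply be:
- ssh-keygen -b 4096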
After entering the command, you should see the following prompt:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):
Press ENTER
to save the key pair into the .ssh/
subdirectory in your home directory, or specify an alternate path.
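If you prefer to choose the file location up front rather than at the prompt, ssh-keygen also accepts the -f flag (the path below is just an illustration):
- ssh-keygen -f ~/.ssh/my_custom_key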
If you had previously generated an SSH key pair, you may see the following prompt:
Output/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes, as this is a destructive process that cannot be reversed.
You should then see the following prompt:
OutputEnter passphrase (empty for no passphrase):
Here you may optionally enter a secure passphrase, which is highly recommended. A passphrase adds an additional layer of security to your key, to prevent unauthorized users from logging in.
You should then see the following output:
OutputYour identification has been saved in /your_home/.ssh/id_rsa.
Your public key has been saved in /your_home/.ssh/id_rsa.pub.
The key fingerprint is:
a9:49:2e:2a:5e:33:3e:a9:de:4e:77:11:58:b6:90:26 username@remote_host
The key's randomart image is:
+--[ RSA 2048]----+
| ..o |
| E o= . |
| o. o |
| .. |
| ..S |
| o o. |
| =o.+. |
|. =++.. |
|o=++. |
+-----------------+
You now have a public and private key that you can use to authenticate. The next step is to get the public key onto your server so that you can use SSH-key-based authentication to log in.
The quickest way to copy your public key to the Rocky Linux host is to use a utility called ssh-copy-id
. This method is highly recommended if available. If you do not have ssh-copy-id
available to you on your client machine, you may use one of the two alternate methods that follow (copying via password-based SSH, or manually copying the key).
ssh-copy-id
The ssh-copy-id
tool is included by default in many operating systems, so you may have it available on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you need only specify the remote host that you would like to connect to and the user account that you have password SSH access to. This is the account to which your public SSH key will be copied:
- ssh-copy-id username@remote_host
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type yes
and press ENTER
to continue.
Next, the utility will scan your local account for the id_rsa.pub
key that we created earlier. When it finds the key, it will prompt you for the password of the remote user’s account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Type in the password (your typing will not be displayed for security purposes) and press ENTER
. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub
key into the remote account’s ~/.ssh/authorized_keys
file.
You should see the following output:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub
key has been uploaded to the remote account. You can continue on to Step 3.
If you do not have ssh-copy-id
available, but you have password-based SSH access to an account on your server, you can upload your keys using a more conventional SSH method.
We can do this by using the cat
command to read the contents of the public SSH key on our local computer and piping that through an SSH connection to the remote server.
On the other side, we can make sure that the ~/.ssh
directory exists and has the correct permissions under the account we’re using.
We can then output the content we piped over into a file called authorized_keys
within this directory. We’ll use the >>
redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying any previously added keys.
The full command looks like this:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type yes
and press ENTER
to continue.
Afterwards, you should be prompted to enter the remote user account password:
Outputusername@203.0.113.1's password:
After entering your password, the content of your id_rsa.pub
key will be copied to the end of the authorized_keys
file of the remote user’s account. Continue on to Step 3 if this was successful.
If you do not have password-based SSH access to your server available, you will have to complete the above process manually.
We will manually append the content of your id_rsa.pub
file to the ~/.ssh/authorized_keys
file on your remote machine.
To display the content of your id_rsa.pub
key, type this into your local computer:
- cat ~/.ssh/id_rsa.pub
You will see the key’s content, which should look something like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== sammy@host
Log in to your remote host using whichever method you have available.
Once you have access to your account on the remote server, you should make sure the ~/.ssh
directory exists. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys
file within this directory. You can add the contents of your id_rsa.pub
file to the end of the authorized_keys
file, creating it if necessary, using this command:
- echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute the public_key_string
with the output from the cat ~/.ssh/id_rsa.pub
command that you executed on your local system. It should start with ssh-rsa AAAA...
.
Finally, we’ll ensure that the ~/.ssh
directory and authorized_keys
file have the appropriate permissions set:
- chmod -R go= ~/.ssh
This recursively removes all “group” and “other” permissions for the ~/.ssh/
directory.
If you’re using the root
account to set up keys for a user account, it’s also important that the ~/.ssh
directory belongs to the user and not to root
:
- chown -R sammy:sammy ~/.ssh
In this tutorial, our user is named sammy, but you should substitute the appropriate username into the above command.
We can now attempt key-based authentication with our Rocky Linux server.
If you have successfully completed one of the procedures above, you should now be able to log into the remote host without the remote account’s password.
The initial process is the same as with password-based authentication:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Type yes
and then press ENTER
to continue.
If you did not supply a passphrase when creating your key pair in step 1, you will be logged in immediately. If you supplied a passphrase you will be prompted to enter it now. After authenticating, a new shell session should open for you with the configured account on the Rocky Linux server.
If key-based authentication was successful, continue on to learn how to further secure your system by disabling your SSH server’s password-based authentication.
If you were able to log in to your account using SSH without a password, you have successfully configured SSH-key-based authentication to your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH-key-based authentication configured for the root account on this server, or preferably, that you have SSH-key-based authentication configured for a non-root account on this server with sudo
privileges. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is crucial.
Once you’ve confirmed that your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo
privileges. Then, open up the SSH daemon’s configuration file:
- sudo vi /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication
. This may be commented out with a #
hash. Press i
to put vi
into insertion mode, and then uncomment the line and set the value to no
. This will disable your ability to log in via SSH using account passwords:
...
PasswordAuthentication no
...
When you are finished making changes, press ESC
and then :wq
to write the changes to the file and quit. To actually implement these changes, we need to restart the sshd
service:
- sudo systemctl restart sshd
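If you want to double-check the directive you just changed, a quick grep of the configuration file will show its current value:
- sudo grep PasswordAuthentication /etc/ssh/sshd_config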
As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing your current session:
- ssh username@remote_host
Once you have verified your SSH service is still working properly, you can safely close all current server sessions.
The SSH daemon on your Rocky Linux server now only responds to SSH keys. Password-based authentication has successfully been disabled.
You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.
If you’d like to learn more about working with SSH, take a look at our SSH Essentials Guide.
When you first create a new Rocky Linux 8 server, there are a few configuration steps that you should take early on as part of the basic setup. This will increase the security and usability of your server and will give you a solid foundation for subsequent actions.
To log into your server, you will need to know your server’s public IP address. You will also need the password or, if you installed an SSH key for authentication, the private key for the root user’s account. If you have not already logged into your server, you may want to follow our documentation on how to connect to your Droplet with SSH, which covers this process in detail.
If you are not already connected to your server, log in as the root user now using the following command (substitute the highlighted portion of the command with your server’s public IP address):
- ssh root@your_server_ip
Accept the warning about host authenticity if it appears. If you are using password authentication, provide your root password to log in. If you are using an SSH key that is passphrase protected, you may be prompted to enter the passphrase the first time you use the key each session. If this is your first time logging into the server with a password, you may also be prompted to change the root password.
The root user is the administrative user in a Linux environment, and it has very broad privileges. Because of the heightened privileges of the root account, you are discouraged from using it on a regular basis. This is because part of the power inherent with the root account is the ability to make very destructive changes, even by accident.
As such, the next step is to set up an alternative user account with a reduced scope of influence for day-to-day work. This account will still be able to gain increased privileges when necessary.
Once you are logged in as root, you can create the new user account that we will use to log in from now on.
This example creates a new user called sammy, but you should replace it with any username that you prefer:
- adduser sammy
Next, set a strong password for the sammy
user:
- passwd sammy
You will be prompted to enter the password twice. After doing so, your user will be ready to use, but first we’ll give this user additional privileges to use the sudo
command. This will allow us to run commands as root when necessary.
Now, we have a new user account with regular account privileges. However, we may sometimes need to perform administrative tasks.
To avoid having to log out of our regular user and log back in as the root account, we can set up what is known as “superuser” or root privileges for our regular account. This will allow our regular user to run commands with administrative privileges by putting the word sudo
before each command.
To add these privileges to our new user, we need to add the new user to the wheel group. By default, on Rocky Linux 8, users who belong to the wheel group are allowed to use the sudo
command.
As root, run this command to add your new user to the wheel group (substitute the highlighted word with your new username):
- usermod -aG wheel sammy
Now, when logged in as your regular user, you can type sudo
before commands to perform actions with superuser privileges.
Firewalls provide a basic level of security for your server. These applications are responsible for denying traffic to every port on your server, except for those ports/services you have explicitly approved. Rocky Linux has a service called firewalld
to perform this function. A tool called firewall-cmd
is used to configure firewalld
firewall policies.
Note: If your servers are running on DigitalOcean, you can optionally use DigitalOcean Cloud Firewalls instead of firewalld
. We recommend using only one firewall at a time to avoid conflicting rules that may be difficult to debug.
First install firewalld
:
- dnf install firewalld -y
The default firewalld
configuration allows ssh
connections, so we can turn the firewall on immediately:
- systemctl start firewalld
Check the status of the service to make sure it started:
- systemctl status firewalld
Output● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2020-02-06 16:39:40 UTC; 3s ago
Docs: man:firewalld(1)
Main PID: 13180 (firewalld)
Tasks: 2 (limit: 5059)
Memory: 22.4M
CGroup: /system.slice/firewalld.service
└─13180 /usr/libexec/platform-python -s /usr/sbin/firewalld --nofork --nopid
Note that it is both active
and enabled
, meaning it will start by default if the server is rebooted.
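If the output ever shows the service as inactive or disabled, you can start it and enable it at boot yourself:
- systemctl start firewalld
- systemctl enable firewalld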
Now that the service is up and running, we can use the firewall-cmd
utility to get and set policy information for the firewall.
First let’s list which services are already allowed:
- firewall-cmd --permanent --list-all
Outputpublic (active)
target: default
icmp-block-inversion: no
interfaces: eth0 eth1
sources:
services: cockpit dhcpv6-client ssh
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
To see the additional services that you can enable by name, type:
- firewall-cmd --get-services
To add a service that should be allowed, use the --add-service
flag:
- firewall-cmd --permanent --add-service=http
This would add the http
service and allow incoming TCP traffic to port 80
. The configuration will update after you reload the firewall:
- firewall-cmd --reload
Remember that you will have to explicitly open the firewall (with services or ports) for any additional services that you may configure later.
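For example, if a service has no predefined name, you can open a specific port instead (port 8080 here is purely illustrative) and then reload:
- firewall-cmd --permanent --add-port=8080/tcp
- firewall-cmd --reload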
Now that we have a regular non-root user for daily use, we need to make sure we can use it to SSH into our server.
Note: Until verifying that you can log in and use sudo
with your new user, we recommend staying logged in as root. This way, if you have problems, you can troubleshoot and make any necessary changes as root. If you are using a DigitalOcean Droplet and experience problems with your root SSH connection, you can log into the Droplet using the DigitalOcean Console.
The process for configuring SSH access for your new user depends on whether your server’s root account uses a password or SSH keys for authentication.
If you logged in to your root account using a password, then password authentication is enabled for SSH. You can SSH to your new user account by opening up a new terminal session and using SSH with your new username:
- ssh sammy@your_server_ip
After entering your regular user’s password, you will be logged in. Remember, if you need to run a command with administrative privileges, type sudo
before it like this:
- sudo command_to_run
You will be prompted for your regular user password when using sudo
for the first time each session (and periodically afterwards).
To enhance your server’s security, we strongly recommend setting up SSH keys instead of using password authentication. Follow our guide on setting up SSH keys on Rocky Linux 8 to learn how to configure key-based authentication.
If you logged in to your root account using SSH keys, then password authentication is disabled for SSH. You will need to add a copy of your public key to the new user’s ~/.ssh/authorized_keys
file to log in successfully.
Since your public key is already in the root account’s ~/.ssh/authorized_keys
file on the server, we can copy that file and directory structure to our new user account.
The simplest way to copy the files with the correct ownership and permissions is with the rsync
command. This will copy the root user’s .ssh
directory, preserve the permissions, and modify the file owners, all in a single command. Make sure to change the highlighted portions of the command below to match your regular user’s name:
Note: The rsync
command treats sources and destinations that end with a trailing slash differently than those without a trailing slash. When using rsync
below, be sure that the source directory (~/.ssh
) does not include a trailing slash (check to make sure you are not using ~/.ssh/
).
If you accidentally add a trailing slash to the command, rsync
will copy the contents of the root account’s ~/.ssh
directory to the sudo
user’s home directory instead of copying the entire ~/.ssh
directory structure. The files will be in the wrong location and SSH will not be able to find and use them.
- rsync --archive --chown=sammy:sammy ~/.ssh /home/sammy
Now, back in a new terminal on your local machine, open up a new SSH session with your non-root user:
- ssh sammy@your_server_ip
You should be logged in to the new user account without using a password. Remember, if you need to run a command with administrative privileges, type sudo
before it like this:
- sudo command_to_run
You will be prompted for your regular user password when using sudo
for the first time each session (and periodically afterwards).
At this point, you have a solid foundation for your server. You can install any of the software you need on your server now.
I wanted to create this question/answer about a really popular topic: about:blank. What is it, is it harmful, and should (and could) I get rid of it?
I am totally lost. I have a WordPress cluster and I am asked to edit some files: wp-config.php
; .htaccess
and/or php.ini
.
I understand that I need to connect via FTP using FileZilla and then modify the files with a text editor, but I am not able to connect via FTP. I am not sure whether the address is the one from the WordPress database or from my Droplet, or which passwords to use.
Any help would be super appreciated!
Best regards,
Miguel
Accurate timekeeping is integral to modern software deployments. Without it, you may encounter data corruption, errors, and other issues that are difficult to debug. Time synchronization can help ensure your logs are being recorded in the correct order, and that database updates are appropriately applied.
Fortunately, Ubuntu 20.04 has time synchronization built-in and activated by default using systemd
’s timesyncd
service. In this article, you will practice some general time-related commands, verify that timesyncd
is active, and install an alternate network time service.
Before starting this tutorial, you will need an Ubuntu 20.04 server with a non-root, sudo-enabled user and a firewall, as described in this Ubuntu 20.04 server setup tutorial.
To view the time on your server, you will use the command date
. Any user can run this command to print out the date and time:
- date
Typically, your server will generate an output with the default UTC time zone.
OutputThu Aug 5 15:55:20 UTC 2021
UTC is Coordinated Universal Time, the time at zero degrees longitude. While this may not reflect your current time zone, using Universal Time prevents confusion when your infrastructure spans multiple time zones.
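Regardless of your configured time zone, you can always print UTC explicitly by passing the -u flag to date:
- date -u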
If you want to change your time zone, however, you can use the timedatectl
command.
First, run this command to generate a list of available time zones:
- timedatectl list-timezones
A list of time zones will print to your screen. You can press SPACE
to page down, and b
to page up. Once you find the correct time zone, make note of it then type q
to exit the list.
Next, you can set the time zone with timedatectl set-timezone
by replacing the highlighted portion with the time zone you found in the list. You’ll need to use sudo
with timedatectl
to make this change:
- sudo timedatectl set-timezone America/New_York
You can verify your changes by running date
again:
- date
OutputThu Aug 5 11:56:01 EDT 2021
The time zone abbreviation will reflect the newly chosen value.
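If you later want to return to Universal Time, the same command works in reverse (UTC is itself a valid zone name in the list):
- sudo timedatectl set-timezone UTC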
Now that you’ve practiced checking the clock and setting time zones, you can confirm that your time is being synchronized properly in the next section.
timesyncd
with timedatectl
Previously, most network time synchronization was handled by the Network Time Protocol daemon or ntpd
. This service connects to a pool of other NTP servers that provide it with constant and accurate time updates.
But now with Ubuntu’s default install, you can use timesyncd
instead of ntpd
. timesyncd
works similarly by connecting to the same time servers, but is lighter weight and more closely integrated with systemd
on Ubuntu.
You can query the status of timesyncd
by running timedatectl
with no arguments. You don’t need to use sudo
in this case:
- timedatectl
Output
Local time: Thu 2021-08-05 11:56:40 EDT
Universal time: Thu 2021-08-05 15:56:40 UTC
RTC time: Thu 2021-08-05 15:56:41
Time zone: America/New_York (EDT, -0400)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
This command prints out the local time, universal time (which may be the same as local time, if you didn’t switch from the UTC time zone), and some network time status information. System clock synchronized: yes
reflects that the time is successfully synced, and NTP service: active
means that timesyncd
is up and running.
If your output shows that NTP service isn’t active, turn it on with timedatectl
:
- sudo timedatectl set-ntp on
After this, run timedatectl
again to confirm the network time status. It may take a minute for the sync to happen, but eventually System clock synchronized:
will read yes
and NTP service:
will show as active
.
ntpd
timesyncd
will work in most circumstances. There are instances, however, when an application may be sensitive to any disturbance with time. In this case, ntpd
is an alternative network time service you can use. ntpd
uses sophisticated techniques to constantly and gradually keep the system time on track.
Before installing ntpd
, you need to turn off timesyncd
in order to prevent the two services from conflicting with one another. You can do this by disabling network time synchronization with the following command:
- sudo timedatectl set-ntp no
Verify that time synchronization is disabled:
- timedatectl
Check that your output reads NTP service: inactive
. This means timesyncd
has stopped. Now you’re ready to install the ntp
package with apt
.
First, run apt update
to refresh your local package index:
- sudo apt update
Then, run apt install ntp
to install this package:
- sudo apt install ntp
ntpd
will begin automatically after your installation completes. You can verify that everything is working correctly by querying ntpd
for status information:
- ntpq -p
Output remote refid st t when poll reach delay offset jitter
==============================================================================
0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 0.000 0.000
+t1.time.bf1.yah 129.6.15.28 2 u 16 64 1 61.766 -20.068 1.964
+puppet.kenyonra 80.72.67.48 3 u 16 64 1 2.622 -18.407 2.407
*ntp3.your.org .GPS. 1 u 15 64 1 50.303 -17.499 2.708
+time.cloudflare 10.4.1.175 3 u 15 64 1 1.488 -18.295 2.670
+mis.wci.com 216.218.254.202 2 u 15 64 1 21.527 -18.377 2.414
+ipv4.ntp1.rbaum 69.89.207.99 2 u 12 64 1 49.741 -17.897 3.417
+time.cloudflare 10.4.1.175 3 u 15 64 1 1.039 -16.692 3.378
+108.61.73.243 129.6.15.29 2 u 14 64 1 70.060 -16.993 3.363
+ny-time.gofile. 129.6.15.28 2 u 21 64 1 75.349 -18.333 2.763
golem.canonical 17.253.34.123 2 u 28 64 1 134.482 -21.655 0.000
ntp3.junkemailf 216.218.254.202 2 u 19 64 1 2.632 -16.330 4.387
clock.xmission. .XMIS. 1 u 18 64 1 24.927 -16.712 3.415
alphyn.canonica 142.3.100.2 2 u 26 64 1 73.612 -19.371 0.000
strongbad.voice 192.5.41.209 2 u 17 64 1 70.766 -18.159 3.481
chilipepper.can 17.253.34.123 2 u 25 64 1 134.982 -19.848 0.000
pugot.canonical 145.238.203.14 2 u 28 64 1 135.694 -21.075 0.000
ntpq
is a query tool for ntpd
. The -p
flag requests information about the NTP servers (or peers) ntpd
is connected to. Your output will be slightly different but will list the default Ubuntu pool servers plus a few others. Remember, it can take a few minutes for ntpd
to establish connections.
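You can also check the service itself through systemd; the unit installed by the ntp package is named ntp:
- sudo systemctl status ntp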
In this article, you’ve successfully viewed the system time, changed time zones, worked with Ubuntu’s default timesyncd
service, and installed ntpd
. If you have advanced timekeeping needs, you can reference the official NTP documentation, and also take a look at the NTP Pool Project, a global group of volunteers providing much of the world’s NTP infrastructure.
SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with an Ubuntu server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
In this guide, we’ll focus on setting up SSH keys for an Ubuntu 16.04 installation. SSH keys provide a secure way of logging into your server and are recommended for all users.
The first step is to create a key pair on the client machine (usually your computer):
- ssh-keygen
By default ssh-keygen
will create a 2048-bit RSA key pair, which is secure enough for most use cases (you may optionally pass in the -b 4096
flag to create a larger 4096-bit key).
After entering the command, you should see the following output:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):
Press ENTER
to save the key pair into the .ssh/
subdirectory in your home directory, or specify an alternate path.
If you had previously generated an SSH key pair, you may see the following prompt:
Output/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes, as this is a destructive process that cannot be reversed.
You should then see the following prompt:
OutputEnter passphrase (empty for no passphrase):
Here you may optionally enter a secure passphrase, which is highly recommended. A passphrase adds an additional layer of security to prevent unauthorized users from logging in. To learn more about security, consult our tutorial on How To Configure SSH Key-Based Authentication on a Linux Server.
You should then see the following output:
OutputYour identification has been saved in /your_home/.ssh/id_rsa.
Your public key has been saved in /your_home/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:xlHONMLGuFsFrpCaZyxmv50WQ4YpCi63M2HYPOiPW+M username@remote_host
The key's randomart image is:
+---[RSA 2048]----+
| +o.+ |
| ...+*.. |
| oo oo.o |
|. .+o.+o.. |
|o**.+o.oS |
|=+B= +. |
|oo *. o |
| .B .o.. |
| ooE..o |
+----[SHA256]-----+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH-key-based authentication to log in.
The quickest way to copy your public key to the Ubuntu host is to use a utility called ssh-copy-id
. Due to its simplicity, this method is highly recommended if available. If you do not have ssh-copy-id
available to you on your client machine, you may use one of the two alternate methods provided in this section (copying via password-based SSH, or manually copying the key).
ssh-copy-id
The ssh-copy-id
tool is included by default in many operating systems, so you may have it available on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you need to specify the remote host that you would like to connect to and the user account that you have password SSH access to. This is the account to which your public SSH key will be copied.
The syntax is:
- ssh-copy-id username@remote_host
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER
to continue.
Next, the utility will scan your local account for the id_rsa.pub
key that we created earlier. When it finds the key, it will prompt you for the password of the remote user’s account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Type in the password (your typing will not be displayed for security purposes) and press ENTER
. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub
key into a file in the remote account’s home ~/.ssh
directory called authorized_keys
.
You should see the following output:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub
key has been uploaded to the remote account. You can continue on to Step 3.
If you do not have ssh-copy-id
available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.
We can do this by using the cat
command to read the contents of the public SSH key on our local computer and piping that through an SSH connection to the remote server.
On the other side, we can make sure that the ~/.ssh
directory exists and has the correct permissions under the account we’re using.
We can then output the content we piped over into a file called authorized_keys
within this directory. We’ll use the >>
redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying previously added keys.
The full command looks like this:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER
to continue.
Afterwards, you should be prompted to enter the remote user account password:
Outputusername@203.0.113.1's password:
After entering your password, the content of your id_rsa.pub
key will be copied to the end of the authorized_keys
file of the remote user’s account. Continue on to Step 3 if this was successful.
If you do not have password-based SSH access to your server available, you will have to complete the above process manually.
We will manually append the content of your id_rsa.pub
file to the ~/.ssh/authorized_keys
file on your remote machine.
To display the content of your id_rsa.pub
key, type this into your local computer:
- cat ~/.ssh/id_rsa.pub
You will see the key’s content, which should look something like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test
Access your remote host using whichever method you have available.
Once you have access to your account on the remote server, you should make sure the ~/.ssh
directory exists. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys
file within this directory. You can add the contents of your id_rsa.pub
file to the end of the authorized_keys
file, creating it if necessary, using this command:
- echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute the public_key_string
with the output from the cat ~/.ssh/id_rsa.pub
command that you executed on your local system. It should start with ssh-rsa AAAA...
.
Finally, we’ll ensure that the ~/.ssh
directory and authorized_keys
file have the appropriate permissions set:
- chmod -R go= ~/.ssh
This recursively removes all “group” and “other” permissions for the ~/.ssh/
directory.
If you’re using the root account to set up keys for a user account, it’s also important that the ~/.ssh
directory belongs to the user and not to root:
- chown -R sammy:sammy ~/.ssh
In this tutorial, our user is named sammy, but you should substitute the appropriate username into the above command.
We can now attempt passwordless authentication with our Ubuntu server.
If you have successfully completed one of the procedures above, you should be able to log into the remote host without the remote account’s password.
The basic process is the same:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Type “yes” and then press ENTER
to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created the key, you will be prompted to enter it now (note that your keystrokes will not display in the terminal session for security). After authenticating, a new shell session should open for you with the configured account on the Ubuntu server.
If key-based authentication was successful, continue on to learn how to further secure your system by disabling password authentication.
If you were able to log into your account using SSH without a password, you have successfully configured SSH-key-based authentication to your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH-key-based authentication configured for the root account on this server, or preferably, that you have SSH-key-based authentication configured for a non-root account on this server with sudo
privileges. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is crucial.
Once you’ve confirmed that your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo
privileges. Then, open up the SSH daemon’s configuration file:
- sudo nano /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication
. This may be commented out. Uncomment the line and set the value to “no”. This will disable your ability to log in via SSH using account passwords:
...
PasswordAuthentication no
...
Save and close the file when you are finished by pressing CTRL+X
, then Y
to confirm saving the file, and finally ENTER
to exit nano. To actually implement these changes, we need to restart the sshd
service:
- sudo systemctl restart ssh
As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing this session:
- ssh username@remote_host
Once you have verified your SSH service, you can safely close all current server sessions.
The SSH daemon on your Ubuntu server now only responds to SSH keys. Password-based authentication has successfully been disabled.
You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.
If you’d like to learn more about working with SSH, take a look at our SSH Essentials Guide.
I tried to upgrade my server using do-release-upgrade
It kept giving me this message: Please install all available updates for your release before upgrading.
. I have run all the update commands mentioned in the Community:
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get dist-upgrade
and rebooted also.
Thanks.
Wget is a networking command-line tool that lets you download files and interact with REST APIs. It supports the HTTP
, HTTPS
, FTP
, and FTPS
internet protocols. Wget can deal with unstable and slow network connections. In the event of a download failure, Wget keeps trying until the entire file has been retrieved. Wget also lets you resume a file download that was interrupted without starting from scratch.
You can also use Wget to interact with REST APIs without having to install any additional external programs. You can make GET
, POST
, PUT
, and DELETE
HTTP
requests with single and multiple headers right in the terminal.
In this tutorial, you will use Wget to download files, interact with REST API endpoints, and create and manage a Droplet in your DigitalOcean account.
You can use your local system or a remote server to open a terminal and run the commands there.
To complete this tutorial, you will need:
Wget installed. Most Linux distributions have Wget installed by default. To check, type wget
in your terminal and press ENTER
. If it is not installed, it will display: command not found
. You can install it by running the following command: sudo apt-get install wget
.
A DigitalOcean account. If you do not have one, sign up for a new account.
A DigitalOcean Personal Access Token, which you can create via the DigitalOcean control panel. Instructions to do that can be found here: How to Generate a Personal Access Token.
In this section, you will use Wget to customize your download experience. For example, you will learn to download a single file and multiple files, handle file downloads in unstable network conditions, and, in the case of a download interruption, resume a download.
First, create a directory to save the files that you will download throughout this tutorial:
- mkdir -p DigitalOcean-Wget-Tutorial/Downloads
With the command above, you have created a directory named DigitalOcean-Wget-Tutorial
, and inside of it, you created a subdirectory named Downloads
. This directory and its subdirectory will be where you will store the files you download.
Navigate to the DigitalOcean-Wget-Tutorial
directory:
- cd DigitalOcean-Wget-Tutorial
You have successfully created the directory where you will store the files you download.
Downloading a file
In order to download a file using Wget, type wget
followed by the URL of the file that you wish to download. Wget will download the file at the given URL and save it in the current directory.
Let’s download a minified version of jQuery using the following command:
- wget https://code.jquery.com/jquery-3.6.0.min.js
Don’t worry if you don’t know what jQuery is – you could have downloaded any file available on the internet. All you need to know is that you successfully used Wget to download a file from the internet.
The output will look similar to this:
Output--2021-07-21 16:25:11-- https://code.jquery.com/jquery-3.6.0.min.js
Resolving code.jquery.com (code.jquery.com)... 69.16.175.10, 69.16.175.42, 2001:4de0:ac18::1:a:1a, ...
Connecting to code.jquery.com (code.jquery.com)|69.16.175.10|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 89501 (87K) [application/javascript]
Saving to: ‘jquery-3.6.0.min.js’
jquery-3.6.0.min.js 100%[===================>] 87.40K 114KB/s in 0.8s
2021-07-21 16:25:13 (114 KB/s) - ‘jquery-3.6.0.min.js’ saved [89501/89501]
According to the output above, you have successfully downloaded and saved a file named jquery-3.6.0.min.js
to your current directory.
You can check the contents of the current directory using the following command:
- ls
The output will look similar to this:
OutputDownloads jquery-3.6.0.min.js
Specifying the filename for the downloaded file
When downloading a file, Wget defaults to storing it using the name that the file has on the server. You can change that by using the -O
option to specify a new name.
Download the jQuery file you downloaded previously, but this time save it under a different name:
- wget -O jquery.min.js https://code.jquery.com/jquery-3.6.0.min.js
With the command above, you set the jQuery file to be saved as jquery.min.js
instead of jquery-3.6.0.min.js
The output will look similar to this:
Output--2021-07-21 16:27:01-- https://code.jquery.com/jquery-3.6.0.min.js
Resolving code.jquery.com (code.jquery.com)... 69.16.175.10, 69.16.175.42, 2001:4de0:ac18::1:a:2b, ...
Connecting to code.jquery.com (code.jquery.com)|69.16.175.10|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 89501 (87K) [application/javascript]
Saving to: ‘jquery.min.js’
jquery.min.js 100%[==================================>] 87.40K 194KB/s in 0.4s
2021-07-21 16:27:03 (194 KB/s) - ‘jquery.min.js’ saved [89501/89501]
According to the output above, you have successfully downloaded the jQuery file and saved it as jquery.min.js
.
You can use the ls
command to list the contents of your current directory, and you will see the jquery.min.js
file there:
- ls
The output will look similar to this:
OutputDownloads jquery-3.6.0.min.js jquery.min.js
So far, you have used wget
to download files to the current directory. Next, you will download to a specific directory.
Downloading a file to a specific directory
When downloading a file, Wget stores it in the current directory by default. You can change that by using the -P
option to specify the name of the directory where you want to save the file.
Download the jQuery file you downloaded previously, but this time save it in the Downloads
subdirectory.
- wget -P Downloads/ https://code.jquery.com/jquery-3.6.0.min.js
The output will look similar to this:
Output--2021-07-21 16:28:50-- https://code.jquery.com/jquery-3.6.0.min.js
Resolving code.jquery.com (code.jquery.com)... 69.16.175.42, 69.16.175.10, 2001:4de0:ac18::1:a:2b, ...
Connecting to code.jquery.com (code.jquery.com)|69.16.175.42|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 89501 (87K) [application/javascript]
Saving to: ‘Downloads/jquery-3.6.0.min.js’
jquery-3.6.0.min.js 100%[==================================>] 87.40K 43.6KB/s in 2.0s
2021-07-21 16:28:53 (43.6 KB/s) - ‘Downloads/jquery-3.6.0.min.js’ saved [89501/89501]
Notice the last line where it says that the jquery-3.6.0.min.js
file was saved in the Downloads
directory.
If you use the ls Downloads
command to list the contents of the Downloads
directory, you will see the jQuery file there:
Run the ls
command:
- ls Downloads
The output will look similar to this:
Outputjquery-3.6.0.min.js
Turning Wget’s output off
By default, Wget outputs a lot of information to the terminal when you download a file. You can use the -q
option to turn off all output.
Download the jQuery file, but this time without showing any output:
- wget -q https://code.jquery.com/jquery-3.6.0.min.js
You won’t see any output, but if you use the ls
command to list the contents of the current directory you will find a file named jquery-3.6.0.min.js.1
:
- ls
The output will look similar to this:
OutputDownloads jquery-3.6.0.min.js jquery-3.6.0.min.js.1 jquery.min.js
Before saving a file, Wget checks whether the file exists in the desired directory. If it does, Wget adds a number to the end of the file. If you ran the command above one more time, Wget would create a file named jquery-3.6.0.min.js.2
. This number increases every time you download a file to a directory that already has a file with the same name.
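If you would rather have Wget skip the download entirely when your local copy is already up to date, instead of creating numbered copies, you can use the -N (timestamping) option:
- wget -N https://code.jquery.com/jquery-3.6.0.min.js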
You have successfully turned off Wget’s output, but now you can’t monitor the download progress. Let’s look at how to show the download progress bar.
Showing the download progress bar
Wget lets you show the download progress bar but hide any other output by using the -q
option alongside the --show-progress
option.
Download the jQuery file, but this time only show the download progress bar:
- wget -q --show-progress https://code.jquery.com/jquery-3.6.0.min.js
The output will look similar to this:
Outputjquery-3.6.0.min.js.2 100%[================================================>] 87.40K 207KB/s in 0.4s
Use the ls
command to check the contents of the current directory and you will find the file you have just downloaded with the name jquery-3.6.0.min.js.2
From this point forward you will be using the -q
and --show-progress
options in most of the subsequent Wget commands.
So far you have only downloaded a single file. Next, you will download multiple files.
Downloading multiple files
In order to download multiple files using Wget, you need to create a .txt
file and insert the URLs of the files you wish to download. After inserting the URLs inside the file, use the wget
command with the -i
option followed by the name of the .txt
file containing the URLs.
Create a file named images.txt
:
- nano images.txt
In images.txt
, add the following URLs:
https://cdn.pixabay.com/photo/2016/12/13/05/15/puppy-1903313__340.jpg
https://cdn.pixabay.com/photo/2016/01/05/17/51/maltese-1123016__340.jpg
https://cdn.pixabay.com/photo/2020/06/30/22/34/dog-5357794__340.jpg
The URLs link to three random images of dogs found on Pixabay. After you have added the URLs, save and close the file.
Now you will use the -i
option alongside the -P
,-q
and --show-progress
options that you learned earlier to download all three images to the Downloads
directory:
- wget -i images.txt -P Downloads/ -q --show-progress
The output will look similar to this:
Outputpuppy-1903313__340.jp 100%[=========================>] 26.44K 93.0KB/s in 0.3s
maltese-1123016__340. 100%[=========================>] 50.81K --.-KB/s in 0.06s
dog-5357794__340.jpg 100%[=========================>] 30.59K --.-KB/s in 0.07s
If you use the ls Downloads
command to list the contents of the Downloads
directory, you will find the names of the three images you have just downloaded:
- ls Downloads
The output will look similar to this:
Outputdog-5357794__340.jpg jquery-3.6.0.min.js maltese-1123016__340.jpg puppy-1903313__340.jpg
Limiting download speed
So far, you have downloaded files at the maximum available download speed. However, you might want to limit the download speed to preserve resources for other tasks. You can limit the download speed by using the --limit-rate
option followed by the maximum speed allowed in kilobytes per second and the letter k
.
Download the first image in the images.txt
file at a maximum speed of 15 kB/s
to the Downloads
directory:
- wget --limit-rate 15k -P Downloads/ -q --show-progress https://cdn.pixabay.com/photo/2016/12/13/05/15/puppy-1903313__340.jpg
The output will look similar to this:
Outputpuppy-1903313__340.jpg.1 100%[====================================================>] 26.44K 16.1KB/s in 1.6s
If you use the ls Downloads
command to check the contents of the Downloads
directory, you will see the file you have just downloaded with the name puppy-1903313__340.jpg.1
.
When downloading a file that already exists, Wget creates a new file instead of overwriting the existing file. Next, you will overwrite a downloaded file.
Overwriting a downloaded file
You can overwrite a file you have downloaded by using the -O
option alongside the name of the file. In the code below, you will first download the second image listed in the images.txt
file to the current directory and then you will overwrite it.
First, download the second image to the current directory and set the name to image2.jpg
:
- wget -O image2.jpg -q --show-progress https://cdn.pixabay.com/photo/2016/12/13/05/15/puppy-1903313__340.jpg
The output will look similar to this:
Outputimage2.jpg 100%[====================================================>] 26.44K --.-KB/s in 0.04s
If you use the ls
command to check the contents of the current directory, you will see the file you have just downloaded with the name image2.jpg
.
If you wish to overwrite this image2.jpg
file, you can run the same command you ran earlier:
- wget -O image2.jpg -q --show-progress https://cdn.pixabay.com/photo/2016/12/13/05/15/puppy-1903313__340.jpg
You can run the command above as many times as you like and Wget will download the file and overwrite the existing one. If you run the command above without the -O
option, Wget will create a new file each time you run it.
Resuming a download
Thus far, you have successfully downloaded multiple files without interruption. However, if the download was interrupted, you can resume it by using the -c
option.
Run the following command to download a random image of a dog found on Pixabay. Note that in the command, you have set the maximum speed to 1 kB/s
. Before the image finishes downloading, press Ctrl+C
to cancel the download:
- wget --limit-rate 1k -q --show-progress https://cdn.pixabay.com/photo/2018/03/07/19/51/grass-3206938__340.jpg
To resume the download, pass the -c
option. Note that this will only work if you run this command in the same directory as the incomplete file:
- wget -c --limit-rate 1k -q --show-progress https://cdn.pixabay.com/photo/2018/03/07/19/51/grass-3206938__340.jpg
Up until now, you have only downloaded files in the foreground. Next, you will download files in the background.
Downloading in the background
You can download files in the background by using the -b
option.
Run the command below to download a random image of a dog from Pixabay in the background:
- wget -b https://cdn.pixabay.com/photo/2018/03/07/19/51/grass-3206938__340.jpg
When you download files in the background, Wget creates a file named wget-log
in the current directory and redirects all output to this file. If you wish to watch the status of the download, you can use the following command:
- tail -f wget-log
The output will look similar to this:
OutputResolving cdn.pixabay.com (cdn.pixabay.com)... 104.18.20.183, 104.18.21.183, 2606:4700::6812:14b7, ...
Connecting to cdn.pixabay.com (cdn.pixabay.com)|104.18.20.183|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 33520 (33K) [image/jpeg]
Saving to: ‘grass-3206938__340.jpg’
0K .......... .......... .......... .. 100% 338K=0.1s
2021-07-20 23:49:52 (338 KB/s) - ‘grass-3206938__340.jpg’ saved [33520/33520]
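If the default wget-log
name is inconvenient, you can direct the log to a file of your choosing with the -o
option (the filename download.log
here is only an example):
- wget -b -o download.log https://cdn.pixabay.com/photo/2018/03/07/19/51/grass-3206938__340.jpg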
Setting a timeout
Until this point, we have assumed that the server you are trying to download files from is working properly. If it is not, Wget lets you first limit the amount of time that you wait for the server to respond, and then limit the number of times that Wget tries to reach the server.
If you wish to download a file but you are unsure if the server is working properly, you can set a timeout by using the -T
option followed by the time in seconds.
In the following command, you are setting the timeout to 5
seconds:
- wget -T 5 -q --show-progress https://cdn.pixabay.com/photo/2016/12/13/05/15/puppy-1903313__340.jpg
Setting the maximum number of tries
You can also set how many times Wget attempts to download a file before giving up by passing the --tries
option followed by the number of tries.
By running the command below, you are limiting the number of tries to 3
:
- wget --tries=3 -q --show-progress https://cdn.pixabay.com/photo/2018/03/07/19/51/grass-3206938__340.jpg
If you would like Wget to retry indefinitely, you can pass inf
alongside the --tries
option:
- wget --tries=inf -q --show-progress https://cdn.pixabay.com/photo/2018/03/07/19/51/grass-3206938__340.jpg
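These options work well together. On an unreliable connection, one reasonable combination is to resume any partial download, give the server five seconds to respond, and stop after three attempts:
- wget -c -T 5 --tries=3 -q --show-progress https://cdn.pixabay.com/photo/2018/03/07/19/51/grass-3206938__340.jpg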
In this section, you used Wget to download a single file and multiple files, resume downloads, and handle network issues. In the next section, you will learn to interact with REST API endpoints.
In this section, you will use Wget to interact with REST APIs without having to install an external program. You will learn the syntax to send the most commonly used HTTP
methods: GET
, POST
, PUT
, and DELETE
.
We are going to use JSONPlaceholder as the mock REST API. JSONPlaceholder is a free online REST API that you can use for fake data. (The requests you send to it won’t affect any databases and the data won’t be saved.)
Sending GET requests
Wget lets you send GET
requests by running a command that looks like the following:
- wget -O- [ URL ]
In the command above, the -
after the -O
option means standard output, so Wget will send the output of the URL to the terminal instead of sending it to a file as you did in the previous section. GET
is the default HTTP
method that Wget uses.
Run the following command in the terminal window:
- wget -O- https://jsonplaceholder.typicode.com/posts?_limit=2
In the command above, you used wget
to send a GET
request to JSON Placeholder in order to retrieve two posts from the REST API
.
The output will look similar to this:
Output--2021-07-21 16:52:51-- https://jsonplaceholder.typicode.com/posts?_limit=2
Resolving jsonplaceholder.typicode.com (jsonplaceholder.typicode.com)... 104.21.10.8, 172.67.189.217, 2606:4700:3032::6815:a08, ...
Connecting to jsonplaceholder.typicode.com (jsonplaceholder.typicode.com)|104.21.10.8|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 600 [application/json]
Saving to: ‘STDOUT’
- 0%[ ] 0 --.-KB/s [
{
"userId": 1,
"id": 1,
"title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
"body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto"
},
{
"userId": 1,
"id": 2,
"title": "qui est esse",
"body": "est rerum tempore vitae\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\nqui aperiam non debitis possimus qui neque nisi nulla"
}
- 100%[==================================>] 600 --.-KB/s in 0s
2021-07-21 16:52:53 (4.12 MB/s) - written to stdout [600/600]
Notice the line where it says HTTP request sent, awaiting response... 200 OK
, which means that you have successfully sent a GET
request to JSONPlaceholder.
If that is too much output, you can use the -q
option that you learned in the previous section to restrict the output to the results of the GET
request:
- wget -O- -q https://jsonplaceholder.typicode.com/posts?_limit=2
The output will look similar to this:
Output[
{
"userId": 1,
"id": 1,
"title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
"body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto"
},
{
"userId": 1,
"id": 2,
"title": "qui est esse",
"body": "est rerum tempore vitae\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\nqui aperiam non debitis possimus qui neque nisi nulla"
}
]
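Wget does not format JSON itself, so if you would like the response pretty-printed or re-indented, you can pipe it into another tool. As one option, assuming python3
is installed on your system, its built-in json.tool
module will format whatever JSON it reads on standard input:
- wget -O- -q https://jsonplaceholder.typicode.com/posts?_limit=2 | python3 -m json.tool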
Sending POST requests
Wget lets you send POST
requests by running a command that looks like the following:
- wget --method=post -O- --body-data=[ body in JSON format ] --header=[ string ] [ URL ]
Run the following command:
- wget --method=post -O- -q --body-data='{"title": "Wget POST","body": "Wget POST example body","userId":1}' --header=Content-Type:application/json https://jsonplaceholder.typicode.com/posts
In the command above, you used wget
to send a POST
request to JSON Placeholder to create a new post. You set the method
to post
, the header
to Content-Type:application/json
, and sent the following request body
to it: {"title": "Wget POST","body": "Wget POST example body","userId":1}
.
The output will look similar to this:
Output{
"title": "Wget POST",
"body": "Wget POST example body",
"userId": 1,
"id": 101
}
Sending PUT requests
Wget lets you send PUT
requests by running a command that looks like the following:
- wget --method=put -O- --body-data=[ body in JSON format ] --header=[ string ] [ URL ]
Run the following command:
- wget --method=put -O- -q --body-data='{"title": "Wget PUT", "body": "Wget PUT example body", "userId": 1, "id":1}' --header=Content-Type:application/json https://jsonplaceholder.typicode.com/posts/1
In the command above, you used wget
to send a PUT
request to JSON Placeholder to edit the first post in this REST API. You set the method
to put
, the header
to Content-Type:application/json
, and sent the following request body
to it: {"title": "Wget PUT", "body": "Wget PUT example body", "userId": 1, "id":1}
.
The output will look similar to this:
Output{
"body": "Wget PUT example body",
"title": "Wget PUT",
"userId": 1,
"id": 1
}
Sending DELETE requests
Wget lets you send DELETE
requests by running a command that looks like the following:
- wget --method=delete -O- [ URL ]
Run the following command:
- wget --method=delete -O- -q --header=Content-Type:application/json https://jsonplaceholder.typicode.com/posts/1
In the command above, you used wget
to send a DELETE
request to JSON Placeholder to delete the first post in this REST API. You set the method
to delete
, and set the post you want to delete to 1
in the URL.
The output will look similar to this:
Output{}
In this section, you learned how to use Wget to send GET
, POST
, PUT
and DELETE
requests with only one header field. In the next section, you will learn how to send multiple header fields in order to create and manage a Droplet in your DigitalOcean account.
In this section, you will apply what you learned in the previous section and use Wget to create and manage a Droplet in your DigitalOcean account. But before you do that, you will learn how to send multiple header
fields in a single HTTP request.
The syntax for a command to send multiple headers looks like this:
- wget --header=[ first header ] --header=[ second header] --header=[ N header] [ URL ]
You can include as many header
fields as you like by repeating the --header
option as many times as you need.
To create a Droplet or interact with any other resource in the DigitalOcean API, you will need to send two request headers:
Content-Type: application/json
Authorization: Bearer your_personal_access_token
You already saw the first header in the previous section. The second header is what lets you authenticate your account. It consists of the string Bearer
followed by your DigitalOcean account’s Personal Access Token.
Run the following command, replacing your_personal_access_token
with your DigitalOcean Personal Access Token:
- wget --method=post -O- -q --header="Content-Type: application/json" --header="Authorization: Bearer your_personal_access_token" --body-data='{"name":"Wget-example","region":"nyc1","size":"s-1vcpu-1gb","image":"ubuntu-20-04-x64","tags": ["Wget-tutorial"]}' https://api.digitalocean.com/v2/droplets
With the command above, you have created an ubuntu-20-04-x64
Droplet in the nyc1
region named Wget-example
with 1vcpu
and 1gb
of memory, and you have set the tag to Wget-tutorial
. For more information about the attributes in the body-data
field, see the DigitalOcean API documentation.
The output will look similar to this:
Output{"droplet":{"id":237171073,"name":"Wget-example","memory":1024,"vcpus":1,"disk":25,"locked":false,"status":"new","kernel":null,"created_at":"2021-03-16T12:38:59Z","features":[],"backup_ids":[],"next_backup_window":null,"snapshot_ids":[],"image":{"id":72067660,"name":"20.04 (LTS) x64","distribution":"Ubuntu","slug":"ubuntu-20-04-x64","public":true,"regions":["nyc3","nyc1","sfo1","nyc2","ams2","sgp1","lon1","ams3","fra1","tor1","sfo2","blr1","sfo3"],"created_at":"2020-10-20T16:34:30Z","min_disk_size":15,"type":"base","size_gigabytes":0.52,"description":"Ubuntu 20.04 x86","tags":[],"status":"available"},"volume_ids":[],"size":{"slug":"s-1vcpu-1gb","memory":1024,"vcpus":1,"disk":25,"transfer":1.0,"price_monthly":5.0,"price_hourly":0.00744,"regions":["ams2","ams3","blr1","fra1","lon1","nyc1","nyc2","nyc3","sfo1","sfo3","sgp1","tor1"],"available":true,"description":"Basic"},"size_slug":"s-1vcpu-1gb","networks":{"v4":[],"v6":[]},"region":{"name":"New York 1","slug":"nyc1","features":["backups","ipv6","metadata","install_agent","storage","image_transfer"],"available":true,"sizes":["s-1vcpu-1gb","s-1vcpu-1gb-intel","s-1vcpu-2gb","s-1vcpu-2gb-intel","s-2vcpu-2gb","s-2vcpu-2gb-intel","s-2vcpu-4gb","s-2vcpu-4gb-intel","s-4vcpu-8gb","c-2","c2-2vcpu-4gb","s-4vcpu-8gb-intel","g-2vcpu-8gb","gd-2vcpu-8gb","s-8vcpu-16gb","m-2vcpu-16gb","c-4","c2-4vcpu-8gb","s-8vcpu-16gb-intel","m3-2vcpu-16gb","g-4vcpu-16gb","so-2vcpu-16gb","m6-2vcpu-16gb","gd-4vcpu-16gb","so1_5-2vcpu-16gb","m-4vcpu-32gb","c-8","c2-8vcpu-16gb","m3-4vcpu-32gb","g-8vcpu-32gb","so-4vcpu-32gb","m6-4vcpu-32gb","gd-8vcpu-32gb","so1_5-4vcpu-32gb","m-8vcpu-64gb","c-16","c2-16vcpu-32gb","m3-8vcpu-64gb","g-16vcpu-64gb","so-8vcpu-64gb","m6-8vcpu-64gb","gd-16vcpu-64gb","so1_5-8vcpu-64gb","m-16vcpu-128gb","c-32","c2-32vcpu-64gb","m3-16vcpu-128gb","m-24vcpu-192gb","g-32vcpu-128gb","so-16vcpu-128gb","m6-16vcpu-128gb","gd-32vcpu-128gb","m3-24vcpu-192gb","g-40vcpu-160gb","so1_5-16vcpu-128gb","m-32vcpu-256gb","gd-40vcpu-160gb","so-24vcpu-192gb","m6-24vcpu-192gb","m3-32vcpu-256gb","so1_5-24vcpu-192gb"]},"tags":["Wget-tutorial"]},"links":{"actions":[{"id":1164336542,"rel":"create","href":"https://api.digitalocean.com/v2/actions/1164336542"}]}}
If you see output similar to the above, you have successfully created a Droplet.
Now let’s get a list of all the Droplets in your account that have the tag Wget-tutorial
. Run the following command, replacing your_personal_access_token
with your DigitalOcean Personal Access Token:
- wget -O- -q --header="Content-Type: application/json" --header="Authorization: Bearer your_personal_access_token" https://api.digitalocean.com/v2/droplets?tag_name=Wget-tutorial
You should see the name of the Droplet you have just created in the output:
Output{"droplets":[{"id":237171073,"name":"Wget-example","memory":1024,"vcpus":1,"disk":25,"locked":false,"status":"active","kernel":null,"created_at":"2021-03-16T12:38:59Z","features":["private_networking"],"backup_ids":[],"next_backup_window":null,"snapshot_ids":[],"image":{"id":72067660,"name":"20.04 (LTS) x64","distribution":"Ubuntu","slug":"ubuntu-20-04-x64","public":true,"regions":["nyc3","nyc1","sfo1","nyc2","ams2","sgp1","lon1","ams3","fra1","tor1","sfo2","blr1","sfo3"],"created_at":"2020-10-20T16:34:30Z","min_disk_size":15,"type":"base","size_gigabytes":0.52,"description":"Ubuntu 20.04 x86","tags":[],"status":"available"},"volume_ids":[],"size":{"slug":"s-1vcpu-1gb","memory":1024,"vcpus":1,"disk":25,"transfer":1.0,"price_monthly":5.0,"price_hourly":0.00744,"regions":["ams2","ams3","blr1","fra1","lon1","nyc1","nyc2","nyc3","sfo1","sfo3","sgp1","tor1"],"available":true,"description":"Basic"},"size_slug":"s-1vcpu-1gb","networks":{"v4":[{"ip_address":"10.116.0.2","netmask":"255.255.240.0","gateway":"","type":"private"},{"ip_address":"204.48.20.197","netmask":"255.255.240.0","gateway":"204.48.16.1","type":"public"}],"v6":[]},"region":{"name":"New York 1","slug":"nyc1","features":["backups","ipv6","metadata","install_agent","storage","image_transfer"],"available":true,"sizes":["s-1vcpu-1gb","s-1vcpu-1gb-intel","s-1vcpu-2gb","s-1vcpu-2gb-intel","s-2vcpu-2gb","s-2vcpu-2gb-intel","s-2vcpu-4gb","s-2vcpu-4gb-intel","s-4vcpu-8gb","c-2","c2-2vcpu-4gb","s-4vcpu-8gb-intel","g-2vcpu-8gb","gd-2vcpu-8gb","s-8vcpu-16gb","m-2vcpu-16gb","c-4","c2-4vcpu-8gb","s-8vcpu-16gb-intel","m3-2vcpu-16gb","g-4vcpu-16gb","so-2vcpu-16gb","m6-2vcpu-16gb","gd-4vcpu-16gb","so1_5-2vcpu-16gb","m-4vcpu-32gb","c-8","c2-8vcpu-16gb","m3-4vcpu-32gb","g-8vcpu-32gb","so-4vcpu-32gb","m6-4vcpu-32gb","gd-8vcpu-32gb","so1_5-4vcpu-32gb","m-8vcpu-64gb","c-16","c2-16vcpu-32gb","m3-8vcpu-64gb","g-16vcpu-64gb","so-8vcpu-64gb","m6-8vcpu-64gb","gd-16vcpu-64gb","so1_5-8vcpu-64gb","m-16vcpu-128gb","c-32","c2-32vcpu-64gb","m3-16vcpu-128gb","m-24vcpu-192gb","g-32vcpu-128gb","so-16vcpu-128gb","m6-16vcpu-128gb","gd-32vcpu-128gb","m3-24vcpu-192gb","g-40vcpu-160gb","so1_5-16vcpu-128gb","m-32vcpu-256gb","gd-40vcpu-160gb","so-24vcpu-192gb","m6-24vcpu-192gb","m3-32vcpu-256gb","so1_5-24vcpu-192gb"]},"tags":["Wget-tutorial"],"vpc_uuid":"5ee0a168-39d1-4c60-a89c-0b47390f3f7e"}],"links":{},"meta":{"total":1}}
Now let’s take the id
of the Droplet you have created and use it to delete the Droplet. Run the following command, replacing your_personal_access_token
with your DigitalOcean Personal Access Token and your_droplet_id
with your Droplet id
:
- wget --method=delete -O- --header="Content-Type: application/json" --header="Authorization: Bearer your_personal_access_token" https://api.digitalocean.com/v2/droplets/your_droplet_id
In the command above, you added your Droplet id
to the URL to delete it. If you are seeing a 204 No Content
in the output, that means that you succeeded in deleting the Droplet.
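If you would like to see that status code for yourself, the -S
(--server-response
) option tells Wget to print the headers the server sends back, which is handy when a successful response has no body:
- wget --method=delete -O- -S --header="Content-Type: application/json" --header="Authorization: Bearer your_personal_access_token" https://api.digitalocean.com/v2/droplets/your_droplet_id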
In this section, you used Wget to send multiple headers. Then, you created and managed a Droplet in your DigitalOcean account.
In this tutorial, you used Wget to download files in stable and unstable network conditions and interact with REST API endpoints. You then used this knowledge to create and manage a Droplet in your DigitalOcean account. If you would like to learn more about Wget, visit this tool’s manual page. For more Linux command-line tutorials visit DigitalOcean community tutorials.
Linux is a family of free and open-source operating systems based on the Linux kernel. Operating systems based on Linux are known as Linux distributions or distros. Examples include Debian, Ubuntu, Fedora, CentOS, Gentoo, Arch Linux, and many others.
The Linux kernel has been under active development since 1991, and has proven to be extremely versatile and adaptable. You can find computers that run Linux in a wide variety of contexts all over the world, from web servers to cell phones. Today, 90% of all cloud infrastructure and 74% of the world’s smartphones are powered by Linux.
However, newcomers to Linux may find it somewhat difficult to approach, as Linux filesystems have a different structure than those found on Windows or macOS. Additionally, Linux-based operating systems depend heavily on working with the command line interface, while most personal computers rely on graphical interfaces.
This guide serves as an introduction to important command line concepts and skills and equips newcomers to learn more about Linux.
To follow along with this guide, you will need access to a computer running a Linux-based operating system. This can either be a virtual private server which you’ve connected to with SSH or your local machine. Note that this tutorial was validated using a Linux server running Ubuntu 20.04, but the examples given should work on a computer running any version of any Linux distribution.
If you plan to use a remote server to follow this guide, we encourage you to first complete our Initial Server Setup guide. Doing so will set you up with a secure server environment — including a non-root user with sudo
privileges and a firewall configured with UFW — which you can use to build your Linux skills.
The terms “terminal,” “shell,” and “command line interface” are often used interchangeably, but there are subtle differences between them: a terminal is an input and output environment (historically a physical device, today usually a terminal emulator program); a shell is the program that interprets the commands you type and tells the operating system what to do with them; and a command line interface is any text-based interface in which you type commands and read their printed results.
When someone refers to one of these three terms in the context of Linux, they generally mean a terminal environment where you can run commands and see the results printed out to the terminal.
Becoming a Linux expert requires you to be comfortable with using a terminal. Any administrative task, including file manipulation, package installation, and user management, can be accomplished through the terminal. The terminal is interactive: you specify commands to run and the terminal outputs the results of those commands. To execute any command, you type it into the prompt and press ENTER
.
When accessing a cloud server, you’ll most often be doing so through a terminal shell. Although personal computers that run Linux often come with the kind of graphical desktop environment familiar to most computer users, it is often more efficient or practical to perform certain tasks through commands entered into the terminal.
Nearly all Linux distributions are compliant with a universal standard for filesystem directory structure known as the Filesystem Hierarchy Standard (FHS). The FHS defines a set of directories, each of which serves its own special function.
The forward slash (/
) is used to indicate the root directory in the filesystem hierarchy defined by the FHS.
When a user logs in to the shell, they are brought to their own user directory, stored within /home/
. This is referred to as the user’s home directory. The FHS defines /home/
as containing the home directories for regular users.
The root user has its own home directory specified by the FHS: /root/
. Note that /
is referred to as the “root directory”, and that it is different from /root/
, which is stored within /
.
Because the FHS is the default filesystem layout on Linux machines, and each directory within it is included to serve a specific purpose, it simplifies the process of organizing files by their function.
Linux filesystems are based on a directory tree. This means that you can create directories (which are functionally identical to folders found in other operating systems) inside other directories, and files can exist in any directory.
To see what directory you are currently active in you can run the pwd
command, which stands for “print working directory”:
- pwd
pwd
prints the path to your current directory. The output will be similar to this:
Output/home/sammy
This example output indicates that the current active directory is sammy
, which is inside the home/
directory, which lives in the root directory, /
. As mentioned previously, since the sammy/
directory is stored within the home/
directory, sammy/
represents the sammy user’s home directory.
To see a list of files and directories that exist in your current working directory, run the ls
command:
- ls
This will return a list of the names of any files or directories held in your current working directory. If you’re following this guide on a new machine, though, this command may not return any output.
You can create one or more new directories within your current working directory with the mkdir
command, which stands for “make directory”. For example, to create two new directories named testdir1
and testdir2
, you might run the following command:
- mkdir testdir1 testdir2
Now when you run the ls
command, these directories will appear in the output:
- ls
Outputtestdir1
testdir2
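If you need a nested directory structure, mkdir
will refuse to create a directory whose parent does not yet exist unless you pass the -p
flag, which creates any missing parent directories along the way (the names below are only examples):
- mkdir -p testdir3/subdir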
To navigate into one of these new directories, run the cd
command (which stands for “change directory”) and specify the directory’s name:
- cd testdir1
This will change your new current working directory to the directory you specified. You can see this with pwd
:
- pwd
Output/home/sammy/testdir1
However, because testdir1
and testdir2
are both held in the sammy user’s home directory, they are siblings in the directory tree rather than nested within one another. The cd
command looks for directories within your current working directory, so this means that you cannot cd
directly into the testdir2
directory you created previously while testdir1
is your working directory:
- cd testdir2
Outputbash: cd: testdir2: No such file or directory
However, you can navigate into any existing directory regardless of your current working directory if you specify the full path of the directory you want to navigate to:
- cd /home/sammy/testdir2
Note: In Linux, a tilde (~
) is shorthand for the home directory of the user you’re logged in as. Knowing this, you could alternatively write the previous command like this and it would achieve the same result:
- cd ~/testdir2
Additionally, you can specify ..
to change to the directory one level up in your path. To get back to your original directory:
- cd ..
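You can also pass a single dash to cd
to jump back to whichever directory you were in immediately before your last cd
, which is convenient for hopping between two locations:
- cd -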
If you’re ever confused about where you are in the navigation tree, remember you can always run the pwd
command to find your current directory. Many modern shells (including Bash, the default for many Linux distributions) also indicate your current directory, as exhibited in the example commands throughout this section.
You cannot use cd
to interact with files; cd
stands for “change directory”, and only allows you to navigate directories. You can, however, create, edit, and view the contents of files.
One way to create a file is with the touch
command. To create a new file called file.txt
:
- touch file.txt
This creates an empty file with the name file.txt
in your current working directory.
If you decide to rename file.txt
later on, you can do so with the mv
command:
- mv file.txt newfile.txt
mv
stands for “move” and it can move a file or directory from one place to another. By specifying the original file, file.txt
, you can “move” it to a new location in the current working directory, thereby renaming it.
It is also possible to copy a file to a new location with the cp
command. If we want to bring back file.txt
but keep newfile.txt
, you can make a copy of newfile.txt
named file.txt
like this:
- cp newfile.txt file.txt
As you may have guessed, cp
is short for “copy”. By copying newfile.txt
to a new file called file.txt
, you have replicated the original file in a new file with a different name.
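Note that cp
only copies individual files by default. To copy a directory along with everything inside it, add the -r
(recursive) flag; the destination name here is just an example:
- cp -r testdir1 testdir1_backup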
However, files are not of much use if they don’t contain anything. To edit files, a file editor is necessary.
There are many options for file editors, all created by professionals for daily use. Such editors include vim
, emacs
, nano
, and pico
.
nano
is a suitable option for beginners: it is relatively user-friendly and doesn’t overload you with cryptic options or commands.
To add text to file.txt
with nano
, run the following command:
- nano file.txt
This will open up a space where you can immediately start typing to edit file.txt
. Add whatever text you like, or you can copy the text in this example:
Say it's only a paper moon
Sailing over a cardboard sea,
But it wouldn't be make believe
If you believed in me.
Yes it's only a canvas sky
Hanging over a muslin tree,
But it wouldn't be make believe
If you believed in me.
Without your love,
It's a honky-tonk parade.
Without your love,
It's a melody played in a penny arcade.
It's a Barnum and Bailey world,
Just as phony as it can be,
But it wouldn't be make believe
If you believed in me.
To save your written text, press CTRL + X
, Y
, and then ENTER
. This returns you to the shell with a newly saved file.txt
file.
Now that file.txt
has some text within it, you can view it using cat
or less
.
The cat
command prints the contents of a specified file to your system’s output. Try running cat
and pass the file.txt
file you just edited as an argument:
- cat file.txt
This will print out the entire contents of file.txt
to the terminal. If you used the text from the previous example, this command will return output similar to this:
OutputSay it's only a paper moon
Sailing over a cardboard sea,
But it wouldn't be make believe
If you believed in me.
Yes it's only a canvas sky
Hanging over a muslin tree,
But it wouldn't be make believe
If you believed in me.
Without your love,
It's a honky-tonk parade.
Without your love,
It's a melody played in a penny arcade.
It's a Barnum and Bailey world,
Just as phony as it can be,
But it wouldn't be make believe
If you believed in me.
Using cat
to view file contents can be unwieldy and difficult to read if the file is particularly long. As an alternative, you can use the less
command which will allow you to paginate the output.
Use less
to view the contents of the file.txt
file, like this:
- less file.txt
This will also print the contents of file.txt
, but one terminal page at a time beginning at the start of the file. You can use the spacebar to advance a page, or the arrow keys to go up and down one line at a time.
Press q
to quit out of less
.
Finally, to delete the file.txt
file, pass the name of the file as an argument to rm
:
- rm file.txt
Note: Without other options, the rm
command (which stands for “remove”) cannot be used to delete directories. However, it does include the -d
flag which allows you to delete empty directories:
- rm -d directory
You can also remove empty directories with the rmdir
command:
- rmdir directory
If you want to delete a directory that isn’t empty, you can run rm
with the -r
flag. This will delete the specified directory along with all of its contents, including any files and subdirectories:
- rm -r directory
However, because deleting content is a permanent action, you should only run rm
with the -r
option if you’re certain that you want to delete the specified directory.
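If you would like a safety net, you can add the -i
flag, which makes rm
prompt for confirmation before each file or directory it removes:
- rm -ri directory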
It takes time, dedication, and a curious mindset to feel comfortable navigating around a Linux system through a terminal window, especially if it’s entirely new to you.
When you have a question about how to accomplish a certain task, there are several avenues of instruction you can turn to. Search engines like Google and DuckDuckGo are invaluable resources, as are question-and-answer sites like Stack Exchange or DigitalOcean’s Community Q&A. Odds are that if you have a question, many others have already asked it and had the question answered.
If your question has to do with a specific Linux command, the manual pages offer detailed and insightful documentation for nearly every command. To see the man page for any command, pass the command’s name as an argument to the man
command:
- man command
For instance, man rm
displays the purpose of rm
, how to use it, what options are available, examples of use, and more useful information.
This guide serves as an introduction to working with a Linux environment. However, fully understanding Linux and all of its components is far beyond the scope of a single tutorial. For instance, this tutorial makes no mention of permissions, a fundamental concept of Linux system administration.
We encourage you to check out all of our introductory Linux content, which can be found on our Linux Basics tag page.
“Connection failed. mesg: ttyname failed: Inappropriate ioctl for device”
I’ve been searching online for the past 2 days and I’ve tried a lot of things, but it still doesn’t work.
What else do you think I can do about this issue, please?
nodejs - server.js
import express from "express";
import cors from "cors";
import server from "http";
import { Server } from "socket.io";
import pkg from "dotenv";
pkg.config();
const app = express();
const serve = server.createServer(app);
const io = new Server(serve, {
cors: {
origin: process.env.CLIENT_SERVER,
methods: ["GET", "POST"],
},
});
const port = process.env.PORT || 5000;
app.get("/", (req, res) => {
res.send("SOCKET SERVER!");
});
app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
io.on("connection", (socket) => {
console.log("new socket connected");
socket.on("join-room", (roomId, userId, name, mic, cam, screen = false) => {
console.log(`new peer joined : ${name} `, userId);
socket.join(roomId);
socket.broadcast
.to(roomId)
.emit("user-connected", userId, name, mic, cam, screen);
socket.on("disconnect", () => {
console.log("disconnected", userId);
socket.broadcast.to(roomId).emit("user-disconnected", userId);
});
});
socket.on("toggle media", (userId, roomId, type) => {
console.log("toggle media for ", userId);
socket.broadcast.to(roomId).emit("user toggled media", userId, type);
});
});
// Server listen initialized
serve
.listen(port, () => {
console.log(`Listening on the port ${port}`);
})
.on("error", (e) => {
console.error(e);
});
nginx config - /etc/nginx/sites-available/socket.<mydomain>.com
server {
if ($host = socket.<mydomain>.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
}
server{
listen 443 ssl;
access_log /var/log/nginx/socket.<mydomain>.com.log;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://127.0.0.1:5000;
}
}
I also created a symbolic link at /etc/nginx/sites-enabled/socket.<mydomain>.com
/etc/nginx/nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
When I run curl http://127.0.0.1:5000 or curl http://<myserverip>:5000, it shows the right output from the Node.js app.
After all of this, https://socket.<mydomain>.com still shows me the nginx welcome page in the browser and with curl.
What do you think the problem is, please?
I was working with a remote API and I exported my API key so that I could have it available as an environment variable for my current shell session, as follows:
- export API_KEY=MY_API_KEY_HERE
However, my workflow requires that this env variable be deleted at some point, but I don’t want to log out and log back in each time.
What is a good way to delete an environment variable?
Thanks!
We had a power outage at our office that seemed to reset/corrupt our company router configurations. Since the outage, we have not been able to access one of the domains hosted on the webserver/droplet, as well as the webhook that is stored on the page that transfers our customer orders to our file database. I have configured both ports 80 (HTTP) & 443 (HTTPS) and directed them to the computer IP that hosts the server. After all this, we still cannot access the domain. Is there something on DigitalOcean’s end that could have caused this?
My issue is that it’s a .exe file.
Can I put this on Linux?
Linux is known for having a great number of mature, useful command line utilities available out of the box in most distributions. Often, system administrators can do much of their work using the built-in tools without having to install additional software.
In this guide, we will discuss how to use the netcat utility. This versatile command can assist you in monitoring, testing, and sending data across network connections.
Netcat should be available on almost any modern Linux distribution. Ubuntu ships with the BSD variant of netcat, and this is what we will be using in this guide. Other versions may operate differently or provide other options.
By default, netcat operates by initiating a TCP connection to a remote host.
The most basic syntax is:
- netcat [options] host port
This will attempt to initiate a TCP connection to the defined host on the port number specified. This functions similarly to the old Linux telnet
command. Keep in mind that your connection is entirely unencrypted.
If you would like to send a UDP packet instead of initiating a TCP connection, you can use the -u
option:
- netcat -u host port
You can specify a range of ports by placing a dash between the first and last:
- netcat host startport-endport
This is generally used with some additional flags.
On most systems, we can use either netcat
or nc
interchangeably. They are aliases for the same command.
One of the most common uses for netcat is as a port scanner.
Although netcat is probably not the most sophisticated tool for the job (nmap is a better choice in most cases), it can perform simple port scans to easily identify open ports.
We do this by specifying a range of ports to scan, as we did above, along with the -z
option to perform a scan instead of attempting to initiate a connection.
For instance, we can scan all ports up to 1000 by issuing this command:
- netcat -z -v domain.com 1-1000
Along with the -z
option, we have also specified the -v
option to tell netcat to provide more verbose information.
The output will look like this:
Outputnc: connect to domain.com port 1 (tcp) failed: Connection refused
nc: connect to domain.com port 2 (tcp) failed: Connection refused
nc: connect to domain.com port 3 (tcp) failed: Connection refused
nc: connect to domain.com port 4 (tcp) failed: Connection refused
nc: connect to domain.com port 5 (tcp) failed: Connection refused
nc: connect to domain.com port 6 (tcp) failed: Connection refused
nc: connect to domain.com port 7 (tcp) failed: Connection refused
. . .
Connection to domain.com 22 port [tcp/ssh] succeeded!
. . .
As you can see, this provides a lot of information and will tell you for each port whether a scan was successful or not.
If you are actually using a domain name, this is the form you will have to use.
However, your scan will go much faster if you know the IP address that you need. You can then use the -n
flag to specify that you do not need to resolve the IP address using DNS:
- netcat -z -n -v 198.51.100.0 1-1000
The messages returned are actually sent to standard error (see our I/O redirection article for more info). We can send the standard error messages to standard output, which will allow us to filter the results more easily.
We will redirect standard error to standard output using the 2>&1
bash syntax. We will then filter the results with grep
:
- netcat -z -n -v 198.51.100.0 1-1000 2>&1 | grep succeeded
OutputConnection to 198.51.100.0 22 port [tcp/*] succeeded!
Here, we can see that the only port open in the range of 1–1000 on the remote computer is port 22, the traditional SSH port.
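Scans against hosts that silently drop packets can stall while netcat waits for replies. The -w
option sets a timeout, in seconds, for each connection attempt, which keeps the scan moving:
- netcat -z -n -v -w 1 198.51.100.0 1-1000 2>&1 | grep succeeded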
Netcat is not restricted to sending TCP and UDP packets. It also can listen on a port for connections and packets. This gives us the opportunity to connect two instances of netcat in a client-server relationship.
Which computer is the server and which is the client is only a relevant distinction during the initial configuration. After the connection is established, communication is exactly the same in both directions.
On one machine, you can tell netcat to listen to a specific port for connections. We can do this by providing the -l
parameter and choosing a port:
- netcat -l 4444
This will tell netcat to listen for TCP connections on port 4444. As a regular (non-root) user, you will not be able to open any ports below 1024, as a security measure.
On a second server, we can connect to the first machine on the port number we chose. We do this the same way we’ve been establishing connections previously:
- netcat domain.com 4444
It will look as if nothing has happened. However, you can now send messages on either side of the connection and they will be seen on either end.
Type a message and press ENTER
. It will appear on both the local and remote screen. This works in the opposite direction as well.
When you are finished passing messages, you can press CTRL-D
to close the TCP connection.
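Note: as mentioned earlier, this guide uses the BSD variant of netcat. If your system ships the traditional variant instead, listening usually requires the -p
flag to name the port:
- netcat -l -p 4444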
Building off of the previous example, we can accomplish more useful tasks.
Because we are establishing a regular TCP connection, we can transmit just about any kind of information over that connection. It is not limited to chat messages that are typed in by a user. We can use this knowledge to turn netcat into a file transfer program.
Once again, we need to choose one end of the connection to listen for connections. However, instead of printing information onto the screen, as we did in the last example, we will place all of the information straight into a file:
- netcat -l 4444 > received_file
The >
in this command redirects all the output of netcat
into the specified filename.
On the second computer, create a simple text file by typing:
- echo "Hello, this is a file" > original_file
We can now use this file as an input for the netcat connection we will establish to the listening computer. The file will be transmitted just as if we had typed it interactively:
- netcat domain.com 4444 < original_file
We can see on the computer that was awaiting a connection, that we now have a new file called received_file
with the contents of the file we typed on the other computer:
- cat received_file
OutputHello, this is a file
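If you want to confirm that the file arrived intact, you can compute a checksum on each machine and compare the two digests, which should match exactly:
- sha256sum received_file
Run the same command against original_file
on the sending computer and compare the results.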
As you can see, by piping things, we can easily take advantage of this connection to transfer all kinds of things.
For instance, we can transfer the contents of an entire directory by creating an unnamed tarball on-the-fly, transferring it to the remote system, and unpacking it into the remote directory.
On the receiving end, we can anticipate a file coming over that will need to be unzipped and extracted by typing:
- netcat -l 4444 | tar xzvf -
The ending dash (-) means that tar will operate on standard input, which is being piped from netcat across the network when a connection is made.
On the side with the directory contents we want to transfer, we can pack them into a tarball and then send them to the remote computer through netcat:
- tar -czf - * | netcat domain.com 4444
This time, the dash in the tar command tells tar to write the compressed archive to standard output rather than to a file, and the * wildcard selects the contents of the current directory.
This is then written directly to the TCP connection, which is then received at the other end and decompressed into the current directory of the remote computer.
This is just one example of transferring more complex data from one computer to another. Another common idea is to use the dd
command to image a disk on one side and transfer it to a remote computer. We won’t be covering this here though.
We’ve been configuring netcat to listen for connections in order to communicate and transfer files. We can use this same concept to operate netcat as a very simple web server. This can be useful for testing pages that you are creating.
First, let’s make a simple HTML file on one server:
- nano index.html
Here is some simple HTML that you can use in your file:
<html>
<head>
<title>Test Page</title>
</head>
<body>
<h1>Level 1 header</h1>
<h2>Subheading</h2>
<p>Normal text here</p>
</body>
</html>
Save and close the file.
Without root privileges, you cannot serve this file on the default web port, port 80. We can choose port 8888 as a regular user.
If you just want to serve this page one time to check how it renders, you can run the following command:
- printf 'HTTP/1.1 200 OK\n\n%s' "$(cat index.html)" | netcat -l 8888
Now, in your browser, you can access the content by visiting:
http://server_IP:8888
This will serve the page, and then the netcat connection will close. If you attempt to refresh the page, it will be gone.
We can have netcat serve the page indefinitely by wrapping the last command in an infinite loop, like this:
- while true; do printf 'HTTP/1.1 200 OK\n\n%s' "$(cat index.html)" | netcat -l 8888; done
This will allow it to continue to receive connections after the first connection closes.
We can stop the loop by typing CTRL-C
on the server.
This allows you to see how a page renders in a browser, but it doesn’t provide much more functionality. You should never use this for serving actual websites. There is no security and simple things like links do not even work correctly.
You should now have a pretty good idea as to what netcat can be used for. It is a versatile tool that can be useful to diagnose problems and verify that base-level functionality is working correctly with TCP/UDP connections.
Using netcat, you can communicate between different computers very easily for quick interactions. Netcat attempts to make network interactions transparent between computers by taking the complexity out of forming connections.
I would really be happy to put NTFS behind me, as every 1.5 years there is a case of chkdsk without a way around it. And with an 8 TB disk it is no fun to have no access to the games and downloads folders.
So, is there any way to make a step-by-step tutorial for installing VirtualBox in Windows 10 and setting up sshfs or NFS, to get around MS’s childish refusal to implement ext4 etc. for Windows?
That would be a blow against MS too.
Regards
When interacting with your server through a shell session, there are many pieces of information that your shell compiles to determine its behavior and access to resources. Some of these settings are contained within configuration settings and others are determined by user input.
One way that the shell keeps track of all of these settings and details is through an area it maintains called the environment. The environment is an area that the shell builds every time that it starts a session that contains variables that define system properties.
In this guide, we will discuss how to interact with the environment and read or set environmental and shell variables interactively and through configuration files.
If you’d like to follow along using your local system or a remote server, open a terminal and run the commands from this tutorial there.
Every time a shell session spawns, a process takes place to gather and compile information that should be available to the shell process and its child processes. It obtains the data for these settings from a variety of different files and settings on the system.
The environment provides a medium through which the shell process can get or set settings and, in turn, pass these on to its child processes.
The environment is implemented as strings that represent key-value pairs. If multiple values are passed, they are typically separated by colon (:
) characters. Each pair will generally look something like this:
KEY=value1:value2:...
If the value contains significant white-space, quotations are used:
KEY="value with spaces"
The keys in these scenarios are variables. They can be one of two types, environmental variables or shell variables.
Environmental variables are variables that are defined for the current shell and are inherited by any child shells or processes. Environmental variables are used to pass information into processes that are spawned from the shell.
Shell variables are variables that are contained exclusively within the shell in which they were set or defined. They are often used to keep track of ephemeral data, like the current working directory.
By convention, these types of variables are usually defined using all capital letters. This helps users distinguish environmental variables within other contexts.
Each shell session keeps track of its own shell and environmental variables. We can access these in a few different ways.
We can see a list of all of our environmental variables by using the env
or printenv
commands. In their default state, they should function exactly the same:
- printenv
Your shell environment may have more or fewer variables set, with different values than the following output:
OutputSHELL=/bin/bash
TERM=xterm
USER=demouser
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca:...
MAIL=/var/mail/demouser
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
PWD=/home/demouser
LANG=en_US.UTF-8
SHLVL=1
HOME=/home/demouser
LOGNAME=demouser
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/printenv
This is fairly typical of the output of both printenv
and env
. The difference between the two commands is only apparent in their more specific functionality. For instance, with printenv
, you can request the values of individual variables:
- printenv PATH
Output/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
On the other hand, env
lets you modify the environment that programs run in by passing a set of variable definitions into a command like this:
- env VAR1="value" command_to_run command_options
Since, as we learned above, child processes typically inherit the environmental variables of the parent process, this gives you the opportunity to override values or add additional variables for the child.
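For example, the date
command consults the TZ
environmental variable, so you can run it under a different time zone without changing anything in your own session:
- env TZ="America/New_York" date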
As you can see from the output of our printenv
command, there are quite a few environmental variables set up through our system files and processes without our input.
These show the environmental variables, but how do we see shell variables?
The set
command can be used for this. If we type set
without any additional parameters, we will get a list of all shell variables, environmental variables, local variables, and shell functions:
- set
OutputBASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:expand_aliases:extglob:extquote:force_fignore:histappend:interactive_comments:login_shell:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
. . .
This is usually a huge list. You probably want to pipe it into a pager program to more easily deal with the amount of output:
- set | less
The amount of additional information that we receive back is a bit overwhelming. We probably do not need to know all of the bash functions that are defined, for instance.
We can clean up the output by specifying that set
should operate in POSIX mode, which won’t print the shell functions. We can execute this in a sub-shell so that it does not change our current environment:
- (set -o posix; set)
This will list all of the environmental and shell variables that are defined.
We can attempt to compare this output with the output of the env
or printenv
commands to try to get a list of only shell variables, but this will be imperfect due to the different ways that these commands output information:
- comm -23 <(set -o posix; set | sort) <(env | sort)
This will likely still include a few environmental variables, due to the fact that the set
command outputs quoted values, while the printenv
and env
commands do not quote the values of strings.
This should still give you a good idea of the environmental and shell variables that are set in your session.
These variables are used for all sorts of things. They provide an alternative way of setting persistent values for the session between processes, without writing changes to a file.
Some environmental and shell variables are very useful and are referenced fairly often. Here are some common environmental variables that you will come across:
- SHELL: This describes the shell that will be interpreting any commands you type in. In most cases, this will be bash by default, but other values can be set if you prefer other options.
- TERM: This specifies the type of terminal to emulate when running the shell. Different hardware terminals can be emulated for different operating requirements. You usually won’t need to worry about this though.
- USER: The current logged in user.
- PWD: The current working directory.
- OLDPWD: The previous working directory. This is kept by the shell in order to switch back to your previous directory by running cd -.
- LS_COLORS: This defines color codes that are used to optionally add colored output to the ls command. This is used to distinguish different file types and provide more info to the user at a glance.
- MAIL: The path to the current user’s mailbox.
- PATH: A list of directories that the system will check when looking for commands. When a user types in a command, the system will check directories in this order for the executable.
- LANG: The current language and localization settings, including character encoding.
- HOME: The current user’s home directory.
- _: The most recent previously executed command.
In addition to these environmental variables, some shell variables that you’ll often see are:
- BASHOPTS: The list of options that were used when bash was executed. This can be useful for finding out if the shell environment will operate in the way you want it to.
- BASH_VERSION: The version of bash being executed, in human-readable form.
- BASH_VERSINFO: The version of bash, in machine-readable output.
- COLUMNS: The number of columns wide that are being used to draw output on the screen.
- DIRSTACK: The stack of directories that are available with the pushd and popd commands.
- HISTFILESIZE: Number of lines of command history stored to a file.
- HISTSIZE: Number of lines of command history allowed in memory.
- HOSTNAME: The hostname of the computer at this time.
- IFS: The internal field separator to separate input on the command line. By default, this is a space.
- PS1: The primary command prompt definition. This is used to define what your prompt looks like when you start a shell session. The PS2 variable is used to declare secondary prompts for when a command spans multiple lines.
- SHELLOPTS: Shell options that can be set with the set option.
- UID: The UID of the current user.
To better understand the difference between shell and environmental variables, and to introduce the syntax for setting these variables, we will do a small demonstration.
We will begin by defining a shell variable within our current session. This is easy to accomplish; we only need to specify a name and a value. We’ll adhere to the convention of keeping all caps for the variable name, and set it to a simple string.
- TEST_VAR='Hello World!'
Here, we’ve used quotations since the value of our variable contains a space. Furthermore, we’ve used single quotes because the exclamation point is a special character in the bash shell that normally expands to the bash history if it is not escaped or put into single quotes.
We now have a shell variable. This variable is available in our current session, but will not be passed down to child processes.
We can see this by grepping for our new variable within the set
output:
- set | grep TEST_VAR
OutputTEST_VAR='Hello World!'
We can verify that this is not an environmental variable by trying the same thing with printenv
:
- printenv | grep TEST_VAR
No output should be returned.
Let’s take this as an opportunity to demonstrate a way of accessing the value of any shell or environmental variable.
- echo $TEST_VAR
OutputHello World!
As you can see, reference the value of a variable by preceding it with a $
sign. The shell takes this to mean that it should substitute the value of the variable when it comes across this.
So now we have a shell variable. It shouldn’t be passed on to any child processes. We can spawn a new bash shell from within our current one to demonstrate:
- bash
- echo $TEST_VAR
If we type bash
to spawn a child shell, and then try to access the contents of the variable, nothing will be returned. This is what we expected.
Get back to our original shell by typing exit
:
- exit
Now, let’s turn our shell variable into an environmental variable. We can do this by exporting the variable. The command to do so is appropriately named:
- export TEST_VAR
This will change our variable into an environmental variable. We can check this by checking our environmental listing again:
- printenv | grep TEST_VAR
OutputTEST_VAR=Hello World!
This time, our variable shows up. Let’s try our experiment with our child shell again:
- bash
- echo $TEST_VAR
OutputHello World!
Great! Our child shell has received the variable set by its parent. Before we exit this child shell, let’s try to export another variable. We can set environmental variables in a single step like this:
- export NEW_VAR="Testing export"
Test that it’s exported as an environmental variable:
- printenv | grep NEW_VAR
OutputNEW_VAR=Testing export
Now, let’s exit back into our original shell:
- exit
Let’s see if our new variable is available:
- echo $NEW_VAR
Nothing is returned.
This is because environmental variables are only passed to child processes. There isn’t a built-in way of setting environmental variables of the parent shell. This is good in most cases and prevents programs from affecting the operating environment from which they were called.
The NEW_VAR
variable was set as an environmental variable in our child shell. This variable would be available to itself and any of its child shells and processes. When we exited back into our main shell, that environment was destroyed.
We still have our TEST_VAR
variable defined as an environmental variable. We can change it back into a shell variable by typing:
- export -n TEST_VAR
It is no longer an environmental variable:
- printenv | grep TEST_VAR
However, it is still a shell variable:
- set | grep TEST_VAR
OutputTEST_VAR='Hello World!'
If we want to completely unset a variable, either shell or environmental, we can do so with the unset
command:
- unset TEST_VAR
We can verify that it is no longer set:
- echo $TEST_VAR
Nothing is returned because the variable has been unset.
We’ve already mentioned that many programs use environmental variables to decide the specifics of how to operate. We do not want to have to set important variables up every time we start a new shell session, and we have already seen how many variables are already set upon login, so how do we make and define variables automatically?
This is actually a more complex problem than it initially seems, due to the numerous configuration files that the bash shell reads depending on how it is started.
The bash shell reads different configuration files depending on how the session is started.
One distinction between different sessions is whether the shell is being spawned as a login or non-login session.
A login shell is a shell session that begins by authenticating the user. If you are signing into a terminal session or through SSH and authenticate, your shell session will be set as a login shell.
If you start a new shell session from within your authenticated session, like we did by calling the bash
command from the terminal, a non-login shell session is started. You were not asked for your authentication details when you started your child shell.
Another distinction that can be made is whether a shell session is interactive, or non-interactive.
An interactive shell session is a shell session that is attached to a terminal. A non-interactive shell session is one that is not attached to a terminal.
So each shell session is classified as either login or non-login and interactive or non-interactive.
A normal session that begins with SSH is usually an interactive login shell. A script run from the command line is usually run in a non-interactive, non-login shell. A terminal session can be any combination of these two properties.
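If you are ever unsure how your current session is classified, you can ask bash directly. A quick sketch:
- shopt -q login_shell && echo 'login shell' || echo 'non-login shell'
- [[ $- == *i* ]] && echo 'interactive' || echo 'non-interactive'
The first command queries the read-only login_shell option, and the second checks for the i flag in the special $- parameter, which lists the option flags of the current shell.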
Whether a shell session is classified as a login or non-login shell has implications on which files are read to initialize the shell session.
A session started as a login session will read configuration details from the /etc/profile
file first. It will then look for the first login shell configuration file in the user’s home directory to get user-specific configuration details.
It reads the first file that it can find out of ~/.bash_profile
, ~/.bash_login
, and ~/.profile
and does not read any further files.
In contrast, a session defined as a non-login shell will read /etc/bash.bashrc
and then the user-specific ~/.bashrc
file to build its environment.
Non-interactive shells read the environmental variable called BASH_ENV
and read the file specified to define the new environment.
As you can see, there are a variety of different files where we could place our settings.
This provides a lot of flexibility that can help in specific situations where we want certain settings in a login shell, and other settings in a non-login shell. However, most of the time we will want the same settings in both situations.
Fortunately, most Linux distributions configure the login configuration files to source the non-login configuration files. This means that you can define the environmental variables that you want in both kinds of session inside the non-login configuration files. They will then be read in both scenarios.
We will usually be setting user-specific environmental variables, and we usually will want our settings to be available in both login and non-login shells. This means that the place to define these variables is in the ~/.bashrc
file.
Open this file now:
- nano ~/.bashrc
This will most likely contain quite a bit of data already. Most of the definitions here are for setting bash options, which are unrelated to environmental variables. You can set environmental variables just like you would from the command line:
- export VARNAME=value
Any new environmental variables can be added anywhere in the ~/.bashrc
file, as long as they aren’t placed in the middle of another command or for loop. We can then save and close the file. The next time you start a shell session, your environmental variable declaration will be read and passed on to the shell environment. You can force your current session to read the file now by typing:
- source ~/.bashrc
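As an illustration, two common additions (the values here are only examples) might be a default editor and a personal bin directory prepended to the PATH:
- export EDITOR=nano
- export PATH="$HOME/bin:$PATH"
Many command-line programs consult EDITOR when they need to open an editor, and extending PATH lets the shell find scripts you keep in your own directories.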
If you need to set system-wide variables, you may want to think about adding them to /etc/profile
, /etc/bash.bashrc
, or /etc/environment
.
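One caveat if you choose /etc/environment: on many distributions that file is read by PAM rather than by a shell, so it accepts only simple KEY=value lines with no shell syntax (no export keyword, no variable expansion). A line in it might look like this (the name and value are only examples):
MY_SYSTEM_VAR=some_value
The /etc/profile and /etc/bash.bashrc files, by contrast, are ordinary shell scripts.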
Environmental and shell variables are always present in your shell sessions and can be very useful. They are an interesting way for a parent process to set configuration details for its children, and are a way of setting options outside of files.
This has many advantages in specific situations. For instance, some deployment mechanisms rely on environmental variables to configure authentication information. This is useful because it does not require keeping these in files that may be seen by outside parties.
There are plenty of other, more mundane, but more common scenarios where you will need to read or alter the environment of your system. These tools and techniques should give you a good foundation for making these changes and using them correctly.
Note: The interactive terminal in this tutorial is currently disabled as we work on improving our interactive learning experiences. You can still use this tutorial to learn about the command line and practice Linux commands, but you will need to use the terminal on your computer or a virtual machine.
Today, many of us are familiar with computers (desktops and laptops), smartphones, and tablets which have graphical user interfaces (also referred to as GUIs), allowing us to navigate apps, the web, and our files (like documents and photos) through a visual experience. The Windows, macOS, and Linux operating systems each present varieties of a desktop environment (with images of folders and files, for example), and dropdown menus, all of which provide access to computer programs, applications, and our own media.
Although GUIs can be an intuitive way to use a computer for many users, they often do not provide us with the greatest power over our machines, and they may prevent us from having full administrative access on our computers, including installing, modifying, or deleting software or files. Additionally, as GUIs are largely visual, they are often not as accessible as they could be for all users.
One way of navigating both your own personal computer and remote cloud servers without a GUI is through a text-based terminal or command-line interface (CLI).
Terminal interfaces exist on almost every computer operating system, and terminal emulators are also available as apps for tablets and smartphones. Terminals provide users with greater overall access to their machines through increased administrator access, greater ability to customize environments, and opportunities to automate processes. They also provide users with the ability to access remote computers, such as cloud servers.
This tutorial will provide users who are new to terminal environments with the basics of using a command-line interface through an embedded web terminal in your browser, which you can launch below. If you already have some familiarity with terminals, you may prefer to go through our Introduction to the Linux Terminal tutorial instead. Once you complete this tutorial, you should have an understanding of how to use a terminal on a Linux (or macOS) computer or server.
When you first get access to a new computer or smartphone, you likely want to turn it on and get a feel for how to use it by checking which apps are available, and to learn where things are so that you can customize the device to suit your needs. You can become familiar with a computer through a terminal in a similar way.
The interactive terminal you launched in this browser window, by clicking the Launch an Interactive Terminal!
button above, displays a white rectangle at the bottom of your browser window.
If you have not launched the terminal, please do so now using the button at the beginning of this tutorial.
In your interactive browser terminal, there should be a dollar sign, $
and a blinking cursor. This is where you will begin to type commands to tell the terminal what to do.
The terminal you have launched is an Ubuntu 20.04 terminal. Ubuntu is a popular distribution of Linux, which was originally based on the Unix operating system. The macOS operating system is also based on Unix. If you are reading this tutorial on a Linux or macOS machine, you should have a terminal on your operating system that works similarly to the embedded terminal we’ll be using in this guide.
In many of these Unix (or *nix-based) operating systems, the symbols at the end of the prompt may be a $
symbol or a #
symbol, which mean the following:
$ or dollar sign — you are logged in as a regular user
# or hashtag/pound symbol — you are logged in as a user with elevated privileges
The user that is noted in the #
environment is also known as a root user, which is considered to be a super user, or administrator, by default.
For our purposes within the browser terminal below, you are logged in as a regular user, but you also have administrator privileges via the sudo
command. As this is a temporary terminal, you do not need to worry about what you type into the terminal, as we will destroy everything once we are done. Similarly, with a cloud server, it is possible to destroy a server and start fresh if something goes awry.
Please note that it is best to exercise more care when working on a local computer’s terminal as there may be changes you can make as an administrator on the terminal that can make permanent changes on the computer you are using.
At this point, with your terminal launched in the browser, you can begin to type into it using your local computer. Your text will appear at the blinking cursor. We’ll learn about what we can type here in the next sections.
We’ll begin working with the terminal by typing a command. A command is an instruction that is given by a user, communicating what it is that the user wants the computer to do. You will be typing your commands into the terminal and then pressing ENTER
or RETURN
when you are ready for the computer to execute on a given command.
Let’s type the following command followed by ENTER
. You can also copy the command, or ask it to run in a launched interactive terminal by clicking on the relevant links in the code block below when you hover over it with a mouse.
- pwd
Once you run this command, you’ll receive the following output:
Output/home/sammy
The pwd
command stands for “print working directory,” and it lets you know where you are within the current filesystem.
In this example, you are in the directory (or folder) called /home/sammy
, which stands for the user called sammy
. If you are logged in as root
, a user with elevated privileges, then the directory would be called /root
. On a personal computer, this directory may be called the name of the user who owns the computer. Sammy Shark’s computer may have /sammy
or /sammy-shark
or /home/sammy
as their primary user directory.
Right now, this directory is empty. Let’s create a directory to store the files we’ll be creating as we go through this tutorial, which we can call files
, for example.
To do this, we’ll use the mkdir
command, which stands for “make directory.” After we type the command, we’ll need to write the name of the folder, which will pass the value to the command so that the command can execute on creating this directory. This value (the name of the folder) is known as an argument, which is an input being given to the command. If you are familiar with natural language grammar, you can think of the argument as an object that is being acted upon by the verb of the command.
In order to create a new directory called files
we’ll write the following, with mkdir
being the command and files
being the argument:
- mkdir files
After you run this command, you won’t receive any output other than a new line with a blinking cursor. With this fresh line on your terminal, you are ready for your next command.
As we have not received any concrete feedback about our new directory yet, we’ll use a command to learn more about what is in our present working directory. You can confirm that the new directory is indeed there by listing out the files in the directory, with the ls
command (signifying “list”):
- ls
You’ll receive output that confirms the files
directory is there:
Outputfiles
This gives us general information about what is in our present working directory. If we want to have more details, we can run the ls
command with what is called a flag. In Linux commands, a flag is written with a hyphen -
and letters, passing additional options (and more arguments) to the command. In our example, we’ll add the -l
flag, which — when paired with ls
— denotes that we would like to use the option to use a long listing format with our command.
Let’s type this command and flag, like so:
- ls -l
Upon pressing ENTER
, we’ll receive the following output in our terminal:
Outputtotal 4
drwxr-xr-x 2 sammy sammy 4096 Nov 13 18:06 files
Here, there are two lines of output. The first line refers to the number of disk blocks allocated to this directory; the second line mostly describes the user permissions on the file.
To get a somewhat more human readable output, we can also pass the -h
or --human-readable
flag, which will print memory sizes in a human readable format, as below. Generally, one hyphen -
refers to single-letter options, and two hyphens --
refer to options that are written out in words. Note that some options can use both formats. We can build multiple options into a command by chaining flags together, as in -lh
.
For example, the two commands below deliver the same results even though they are written differently:
- ls -lh
- ls -l --human-readable
Both of these commands will return the following output, similar to the output above but with greater context of the memory blocks:
Outputtotal 4.0K
drwxr-xr-x 2 sammy sammy 4.0K Nov 13 18:06 files
The first line of output lets us know that 4.0K of disk space is allocated to the folder. The second line of output has many more details, which we’ll go over next. A general high-level reference of all the information that we’ll cover is indicated in the table below.
File type | Permissions | Link count | Owner | Group | File size | Last modified date | File name |
---|---|---|---|---|---|---|---|
d | rwxr-xr-x | 2 | sammy | sammy | 4.0K | Nov 13 18:06 | files |
You’ll note that the name of our directory, files
, is at the end of the second line of output. This name indicates which specific item in the /home/sammy
user directory is being described by the line of output. If we had another file in the directory, we would have another line of output with details on that file.
At the front of the line, there is a list of characters and dashes. Let’s break down the meaning of each of the characters:
Character | Description |
---|---|
d | directory (or folder) — a type of file that can hold other files, useful for organizing a file system; if this were - instead, this would refer to a non-directory file |
r | read — permission to open and read a file, or list the contents of a directory |
w | write — permission to modify the content of a file; and to add, remove, rename files in a directory |
x | execute — permission to run a file that is a program, or to enter and access files within a directory |
In the first drwx
characters of the string, the first letter d
means that the item files
is a directory. If this were a file other than a directory, this string of characters would begin with a hyphen instead, as in -rwx
, where the first hyphen signifies a non-directory file. The following three letters, rwx
, represent the permissions for the owner of the directory files
, and mean that the directory files
can be read, written, and executed by the owner of the file. If any of these characters were replaced with hyphens, that would mean that the owner does not have the type of permission represented by that character. We’ll discuss how to identify the owner of a file in just a moment.
The next three characters in the output are r-x
, which represent the group permissions for the files
directory. In this instance, the group has read and execute permissions, but not write permissions, as the w
is replaced with a -
. We’ll discuss how to identify the group in just a moment.
The final three characters of the first string, r-x
represents the permissions for any other groups that have access to the machine. In this case, these user groups can also read and execute, but not write.
The number 2
in the output refers to the number of hard links to this item. In Linux, links let one filesystem entry be reachable under more than one name. For a newly created directory, the count is 2: one link comes from the directory’s entry in its parent, and one from the self-referential . entry inside the directory itself, which allows users to navigate along relative paths. We’ll discuss absolute and relative paths in the next section.
After the number 2
, the word sammy
is displayed twice. This part of the output gives information about the owner and group associated with the files
directory. The first instance of sammy
in this line refers to the owner of the directory, whose permissions we saw earlier are rwx
. The sammy
user is the owner as we created the files
directory as the sammy
user and are the current owner of the file. Though the sammy
user is the only user in our current environment, Unix-like operating systems often have more than one user and so it is useful to know which user has ownership of a file.
The second instance of sammy
refers to the group that has access to the files
directory, whose permissions we saw earlier are r-x
. In this case, the group name is the same as the owner username sammy
. In real-world environments, there may be other groups on the operating system that have access to the directory, such as staff
or a username like admin
.
The rest of the details on this output line are the 4.0K
for the disk space allocated to the directory on the machine, and the date that the directory was last modified (so far, we have only just created it).
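If you would like the system to spell some of this out for you, the GNU stat command can print the permission string, owner, and group directly. A small sketch:
- stat -c '%A %U %G %n' files
Outputdrwxr-xr-x sammy sammy files
Here the %A, %U, %G, and %n format sequences stand for the access rights, owner, group, and file name, respectively.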
With this greater understanding of file systems and permissions, we can move onto navigating the file system on our Linux terminal.
So far, we have learned how to determine where we are in a filesystem, how to make a new directory, how to list out files, and how to determine permissions.
Let’s next learn how to move around the file system. We have made a new directory, but we are still in the main /home/sammy
user directory. In order to move into the /home/sammy/files
directory that we have created, we’ll use the cd
command and pass the name of the directory we want to move into as the argument. The command cd
stands for “change directory,” and we’ll construct it like so:
- cd files
Again, you won’t receive output other than a new line with a blinking cursor, but we can check that we are in the /home/sammy/files
directory with the pwd
command we used earlier:
- pwd
You’ll get the following output, confirming where you are:
Output/home/sammy/files
This validates that you are in the /home/sammy/files
directory of the /home/sammy
user directory. Does this syntax look familiar to you? It may remind you of a website’s URL with its forward slashes, and, indeed, websites are structured on servers within directories, too.
Let’s move to the primary directory of the server. Regardless of where we are in a filesystem, we can always use the command cd /
to move to the primary directory:
- cd /
To confirm that we have moved and learn what is in this directory, let’s run our list command:
- ls
We’ll receive the following output:
Outputbin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin snap srv sys tmp usr var
There are a lot of files in there! The /
directory is the main directory of a Linux server, referred to as the “root” directory. Note that the root directory is different from the default “root” user. You can think of the /
directory as the major artery of a Linux machine, as it contains all the folders necessary to run the computer. For example, the sys
directory holds the Linux kernel and system information virtual filesystem. If you would like to learn more about each of these directories, you can visit the Linux Foundation documentation.
You’ll also notice that there is a directory we have been in already, the /home
user folder. From the /
directory, we can change directories back into /home
then back into files
, or we can move directly back into that folder by typing the absolute path there with cd
:
- cd /home/sammy/files
Now, if you run pwd
you’ll receive /home/sammy/files
as your output.
A file path is the representation of where a file or directory is located on your computer or server. You can call a path to a file or directory in either a relative or absolute way. A relative path would be when we move to a location relative to our current working directory, like we did when we were already in /home/sammy/
and then moved into files/
. An absolute path is when we call the direct line to a location, as we did above with /home/sammy/files
, showing that we started in the /
directory, called the /home/sammy/
user directory and then the nested files/
directory.
Additionally, Linux leverages dot notation to help users navigate via relative paths. A single .
stands for the directory you are currently in, and a double ..
stands for the parent directory. So, from where we currently are (/home/sammy/files
), we can use two dots to return to the parent /home/sammy
user directory, like so:
- cd ..
If you run pwd
, you’ll receive /home/sammy
as your output, and if you run ls
, you’ll receive files
as your output.
Another important symbol to be familiar with is ~
which stands for the home directory of your machine. Here, our home directory is called /home/sammy
for the sammy user, but on a local machine it may be your own name as in sammy-shark/
.
You can type the following from anywhere on your machine and return to this home directory:
- cd ~
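A related shortcut worth knowing is cd - (with a hyphen), which returns you to the directory you were in previously and prints its path. For example:
- cd /home/sammy/files
- cd ~
- cd -
Output/home/sammy/files
This makes it easy to toggle between two working locations.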
At this point, feel free to navigate around your file system with the commands you have learned so far. In the next section, we’ll begin working with text files.
Now that we have a foundation in the Linux file system and how to get around it, let’s start creating new files and learn about how to manipulate text on the command line.
Let’s first be sure that we’re in the files/
directory of the /home/sammy
user folder, which we can do by either verifying with pwd
, or by changing directories on the absolute path:
- cd /home/sammy/files
Now, we’ll create a new text file. We’ll be making a .txt
file, which is a standard file that can be read across operating systems. Unlike .doc
files, a .txt
file is composed of unformatted text. Unformatted text, including the text in .txt
files, can readily be used on the command line, and therefore can be used when working with textual data programmatically (as in, to automate text analysis, to pull information from text, and more).
We’ll begin by using the touch
command, which can create a new file or modify an existing file. To use it, you can use the command touch
and pass the name of the text file you want to create as the argument, as demonstrated below.
- touch ocean.txt
Once you press ENTER
, you’ll receive a new line of the command prompt, and you can list the current contents of files/
to ensure it was created.
- ls
Outputocean.txt
So far we have created an ocean.txt
file which contains no text at the time of creation.
If we want to create a text file that is initialized with text, we can use the echo
command, which is used to display strings of text in Linux.
We can use echo
directly on the command line to have the interface repeat after us. The traditional first program, "Hello, World!"
, can be written with echo
like so:
- echo Hello, World!
OutputHello, World!
Named for Echo of Ovid’s Metamorphoses, the echo
command returns back what we request. In this case, it echoed, “Hello, World!” On its own, however, the echo
command does not allow us to store the value of our text into a text file. In order to do that, we will need to type the following:
- echo "Sammy the Shark" > sammy.txt
The above command uses echo
, then the text we would like to add to our file in quotes, then the redirection operator >
, and finally the name of our new text file, sammy.txt
.
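One caution about the > operator: it overwrites any existing contents of the target file. If you want to add text to the end of a file while keeping what is already there, use the >> append operator instead, as in this sketch:
- echo "A second line" >> sammy.txt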
We can check that our new file exists, again with ls
.
- ls
Outputocean.txt sammy.txt
We now have two text files in our /home/sammy/files
user folder. Next, we can confirm that the file sammy.txt
does have the text we asked the terminal to echo into it. We can do that with the cat
command. Short for concatenate, the cat
command is very useful for working with files. Among its functions is showing the contents of a file.
- cat sammy.txt
Once we run the command, we’ll receive the following output:
OutputSammy the Shark
If we were to run cat
on the empty file ocean.txt
, we would receive nothing in return as there is no text in that file. We can add text to this existing file with echo
as well. Let’s add a quote from Zora Neale Hurston to the file.
- echo "Some people could look at a mud puddle and see an ocean with ships." > ocean.txt
Now, if we run cat
on the file, we’ll receive output of the text we just entered.
- cat ocean.txt
OutputSome people could look at a mud puddle and see an ocean with ships.
So far, we have created text files and have added text to these files, but we have not yet modified these files. If we would like to do that, we can use a command-line text editor. Several popular choices exist, including Vim and Emacs. For our purposes, we’ll use nano, which is a less complex CLI text editor program that we can use to begin our exploration.
The nano text editor can be summoned with the nano
command. If we want to edit our existing sammy.txt
file, we can do so by passing the file name as an argument.
- nano sammy.txt
The file will open up on your terminal:
Sammy the Shark
With your keyboard’s arrow keys, move your cursor to the end of the line and begin typing a few lines from the perspective of Sammy.
Note: On the command line, you can’t use your mouse or other pointer to navigate, both through the file system and within files. You’ll need to use your keyboard, and your arrow keys in particular, to move around textual files.
When you’re done with your file, it may read something like this:
Sammy the Shark
Hello, I am Sammy.
I am studying computer science.
Nice to meet you!
With your file now containing the text you would like, we can now save and close the file. You may notice that there is some guidance at the bottom of your terminal window:
^G Get Help ^O WriteOut ^R Read File ^Y Prev Page ^K Cut Text ^C Cur Pos
^X Exit ^J Justify ^W Where Is ^V Next Page ^U UnCut Text ^T To Spell
Because we are currently done with working on this file, we would like to Exit
the file. Here, the ^
symbol refers to the Control
or CTRL
key on your keyboard, and the output above tells us that we need to combine that key with X
(use the lowercase letter, without pressing the SHIFT
key) in order to leave the file. Let’s press those two keys together:
CTRL x
The above is often written inline as CTRL + X
or Ctrl+x
in technical documentation.
At this point, you’ll receive the following prompt:
OutputSave modified buffer?
Y Yes
N No ^C Cancel
In order to save it, we’ll press the letter y
for yes:
y
You’ll receive feedback like the following.
OutputFile Name to Write: sammy.txt
There are additional options, including cancelling with CTRL + C
, but if you are comfortable with closing the file, you can press ENTER
at this point to save the file and exit it.
Let’s say that we want to make a few files of students at DigitalOcean University. Let’s create a new directory in files/
called students
:
- mkdir students
Next, let’s move sammy.txt
into the new students/
directory. The mv
command, which stands for move, will allow us to change the location of a file. The command is constructed by taking the file we want to move as the first argument, and the new location as the second argument. Both of the following executions will produce the same result.
- mv sammy.txt students
- mv sammy.txt students/sammy.txt
This latter option would be useful if we would like to change the name of the file, as in mv sammy.txt students/sammy-the-shark.txt
.
Now, if we run the ls
command, we’ll see that only ocean.txt
and the students/
directory are in our current directory (files/
). Let’s move into the students/
folder.
- cd students
In order to have a template for the other students, we can copy the sammy.txt
file to create more files. To do this, we can use the cp
command, which stands for copy. This command works similarly to the mv
command, taking the original file as the first argument, and the new file as the second argument. We’ll make a file for Alex the Leafy Seadragon:
- cp sammy.txt alex.txt
Now, we can open alex.txt
and inspect it.
- nano alex.txt
So far, alex.txt
looks identical to sammy.txt
. By replacing some of the words, we can modify this file to read like the following. Note that you can use CTRL + K
to remove an entire line.
Alex the Leafy Seadragon
Hello, I am Alex.
I am studying oceanography.
Nice to meet you!
You can save and close the file by pressing CTRL + X
then y
then ENTER
.
If you would like to get more practice with text files, consider creating files for Jamie the Mantis Shrimp, Jesse the Octopus, Drew the Squid, or Taylor the Yellowfin Tuna.
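If you would like a head start on those, a single shell loop can copy the template for several students at once (the names here are just the examples above):
- for name in jamie jesse drew taylor; do cp sammy.txt "$name.txt"; done
You can then open each new file in nano and personalize it.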
Once you feel comfortable with creating, editing, copying, and moving text files, we can move onto the next section.
Many versions of the command line, including the interactive terminal embedded in this tutorial, allow you to autocomplete and to reuse commands as you go. This helps you move more quickly, since it saves typing time.
Try typing cat
along with the first few letters of one of the text files you have been working on — for example, cat sa
. Before you finish typing the whole file name of sammy.txt
, press the TAB
key instead. This should autocomplete the full file name, so that your terminal prompt displays the following:
- cat sammy.txt
Now, if you press ENTER
, the terminal should return the contents of the file to the command line.
Another shortcut is to press the UP
arrow key, which will let you cycle through the most recent commands you have run. On a new line with a blinking cursor, press the UP
arrow key a few times to have quick access to your previous commands.
If you need to replicate all the commands you have done in your terminal, you can also summon the entire history of this session with the aptly named history
command:
- history
Depending on how much you have practiced, you should receive 30 or more lines of commands, starting with the following output:
Output 1 pwd
2 mkdir files
3 ls
4 ls -l
...
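Two related tricks build on this history. Pressing CTRL + R starts a reverse search through previous commands (type any fragment to recall a matching command), and typing an exclamation point followed by a history line number reruns that entry. Based on the numbered output above, for example:
- !2
would run mkdir files again.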
Familiarizing yourself with these shortcuts will support you as you become more proficient with the command line interface.
One of the most exciting aspects of working on a command line interface connected to the internet is that you have access to all of the resources on the web, and can act on them in an automated way. With the terminal, you can also directly access cloud servers that you have credentials for, manage and orchestrate cloud infrastructure, build your own web apps, and more. For now, as we have already learned how to work with text files on the terminal, we’ll go over how to pull down a text file from the web so that the machine we are using has that text file available to us.
Let’s move back into the files/
directory:
- cd /home/sammy/files
From here, we’ll use the curl
command to transfer data from the web to our personal interactive terminal on the browser. The command curl
stands for client URL (web address).
We have uploaded a short passage from Jules Verne’s Twenty Thousand Leagues Under the Seas on a cloud server. We’ll pass the URL of that file to the curl
command, as demonstrated below.
- curl https://assets.digitalocean.com/articles/command-line-intro/verne_twenty-thousand-leagues.txt
Once we press ENTER
, we’ll receive the text of the passage as output to our terminal (excerpted below):
Output"You like the sea, Captain?"
"Yes; I love it! The sea is everything. It covers seven tenths of the terrestrial globe.
...
"Captain Nemo," said I to my host, who had just thrown himself on one of the divans, "this
is a library which would do honor to more than one of the continental palaces, and I am
absolutely astounded when I consider that it can follow you to the bottom of the seas."
While it’s interesting to have the text displayed in our terminal window, we do not have the file available to us; we have only transferred the data, not stored it. (You can verify that the file is not there by running ls
).
In order to save the text to a file, we’ll need to run curl
with the -O
flag, which outputs the text to a file, using the same name as the remote file for our local copy.
- curl -O https://assets.digitalocean.com/articles/command-line-intro/verne_twenty-thousand-leagues.txt
You’ll receive feedback from the terminal that your file has downloaded.
Output % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2671 100 2671 0 0 68487 0 --:--:-- --:--:-- --:--:-- 68487
If you would like to use a specific and alternate name of the file, you could do so with the -o
flag and pass the name of the new file as an argument (in this case, jules.txt
).
- curl -o jules.txt https://assets.digitalocean.com/articles/command-line-intro/verne_twenty-thousand-leagues.txt
We can now work on this file exactly as we would any other text file. Try using cat
, or editing it with nano
.
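For instance, a quick word count can confirm that the download contains text (wc is the standard word-count utility; the exact numbers depend on the file):
- wc jules.txt
This prints the line, word, and byte counts of the file, followed by its name.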
In the next section, we’ll clean up some of the files and exit our terminal.
As with any other computer, we sometimes need to remove files and folders that are no longer relevant, and exit the program we are using.
Let’s say that the students we know from DigitalOcean University have graduated and we need to clean up their files and the relevant folder. Ensure you are in the students/
directory:
- cd /home/sammy/files/students
If you run ls
, your folder may have a few files, like so:
Outputalex.txt drew.txt jamie.txt jesse.txt sammy.txt taylor.txt
We can remove individual files with the rm
command, which stands for remove. We’ll need to pass the file we want to remove as the argument.
Warning: Note that once you remove a file, it cannot be undone. Be sure that you want to remove the file before pressing ENTER
.
- rm sammy.txt
Now, if we run ls
, we’ll notice that sammy.txt
is no longer in the folder:
Outputalex.txt drew.txt jamie.txt jesse.txt taylor.txt
While we now know we can remove individual files with rm
, it is not very time efficient if we want to remove the entire students/
directory and all of its contents.
The command that is used to remove directories is called rmdir
, which stands for remove directory. Let’s move up to the parent folder, files/,
so that we can work with the students/
directory from there (we would not be able to delete a folder we are presently in).
- cd ..
From the /home/sammy/files directory, we can run rmdir
on students
.
- rmdir students
However, this does not work, as we receive the following feedback:
Outputrmdir: failed to remove 'students': Directory not empty
The command did not work as rmdir
only works on empty directories and the students
directory still has files in it. (Here, you can create a new, empty folder, and try rmdir
on it. Empty folders can be removed with rmdir
.)
To remove the directory with files still inside, we’ll need to try a different option. In computer science, recursion describes an operation that applies itself to each nested item in turn, so a recursive removal acts on a primary item and everything it contains. Using the rm
command, we can recursively remove the primary students
directory and all of its content dependencies. We’ll use the -r
flag, which stands for recursive, and pass the folder students
as the argument.
- rm -r students
At this point, if we run ls
, we’ll notice that students/
is no longer in our present directory, and none of the files it held are available either, as they have all been deleted.
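If you would like a safety net when deleting, rm also accepts the -i flag, which asks for confirmation before each removal, and it can be combined with the recursive flag as -ri. A sketch with a placeholder directory name:
- rm -ri some_directory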
When you are done with a terminal session, and especially when you are working on a remote server, you can exit the terminal with the exit
command. Once you feel comfortable with what you have achieved in this session (as you won’t be able to restore it), you can type the following, followed by ENTER
to leave the terminal.
- exit
On our interactive terminal, we’ll receive the following output, confirming that our session has ended.
OutputSession ended
With this session complete, you can refresh this page and then launch a new terminal to try out alternate commands, or create a new file system to explore.
Congratulations! You now know your way around the terminal interface, and are well on your way to doing more with computers and servers.
To continue your learning, you can take a guided pathway on setting up and managing remote servers with our Introduction to Cloud Computing curriculum.
SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a Linux server you may often spend much of your time in a terminal session connected to your server through SSH.
While there are a few different ways of logging into an SSH server, in this guide, we’ll focus on setting up SSH keys. SSH keys provide an extremely secure way of logging into your server. For this reason, this is the method we recommend for all users.
An SSH server can authenticate clients using a variety of different methods. The most basic of these is password authentication, which is easy to use, but not the most secure.
Although passwords are sent to the server in a secure manner, they are generally not complex or long enough to be resistant to repeated, persistent attackers. Modern processing power combined with automated scripts make brute-forcing a password-protected account very possible. Although there are other methods of adding additional security (fail2ban
, etc.), SSH keys prove to be a reliable and secure alternative.
SSH key pairs are two cryptographically secure keys that can be used to authenticate a client to an SSH server. Each key pair consists of a public key and a private key.
The private key is retained by the client and should be kept absolutely secret. Any compromise of the private key will allow the attacker to log into servers that are configured with the associated public key without additional authentication. As an additional precaution, the key can be encrypted on disk with a passphrase.
The associated public key can be shared freely without any negative consequences. The public key can be used to encrypt messages that only the private key can decrypt. This property is employed as a way of authenticating using the key pair.
The public key is uploaded to a remote server that you want to be able to log into with SSH. The key is added to a special file within the user account you will be logging into called ~/.ssh/authorized_keys
.
When a client attempts to authenticate using SSH keys, the server can test the client on whether they are in possession of the private key. If the client can prove that it owns the private key, a shell session is spawned or the requested command is executed.
The first step to configure SSH key authentication to your server is to generate an SSH key pair on your local computer.
To do this, we can use a special utility called ssh-keygen
, which is included with the standard OpenSSH suite of tools. By default, this will create a 3072-bit RSA key pair.
On your local computer, generate an SSH key pair by typing:
- ssh-keygen
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa):
The utility will prompt you to select a location for the keys that will be generated. By default, the keys will be stored in the ~/.ssh
directory within your user’s home directory. The private key will be called id_rsa
and the associated public key will be called id_rsa.pub
.
Usually, it is best to stick with the default location at this stage. Doing so will allow your SSH client to automatically find your SSH keys when attempting to authenticate. If you would like to choose a non-standard path, type that in now, otherwise, press ENTER
to accept the default.
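As an aside, ssh-keygen accepts flags to change the key size or type. For example, you could request a larger RSA key with -b, or an Ed25519 key with -t (these are optional sketches; the rest of this tutorial assumes the default RSA file names):
- ssh-keygen -b 4096
- ssh-keygen -t ed25519
Note that an Ed25519 key pair is stored as id_ed25519 and id_ed25519.pub rather than id_rsa and id_rsa.pub.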
If you had previously generated an SSH key pair, you may see a prompt that looks like this:
Output/home/username/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes, as this is a destructive process that cannot be reversed.
OutputCreated directory '/home/username/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Next, you will be prompted to enter a passphrase for the key. This is an optional passphrase that can be used to encrypt the private key file on disk.
You may be wondering what advantages an SSH key provides if you still need to enter a passphrase. The main advantage is this: since the private key is never exposed to the network and is protected through file permissions, this file should never be accessible to anyone other than you (and the root user). The passphrase serves as an additional layer of protection in case these conditions are compromised.
A passphrase is an optional addition. If you enter one, you will have to provide it every time you use this key (unless you are running SSH agent software that stores the decrypted key). We recommend using a passphrase, but if you do not want to set a passphrase, you can press ENTER
to bypass this prompt.
OutputYour identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:CAjsV9M/tt5skazroTc1ZRGCBz+kGtYUIPhRvvZJYBs username@hostname
The key's randomart image is:
+---[RSA 3072]----+
|o ..oo.++o .. |
| o o +o.o.+... |
|. . + oE.o.o . |
| . . oo.B+ .o |
| . .=S.+ + |
| . o..* |
| .+= o |
| .=.+ |
| .oo+ |
+----[SHA256]-----+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH key authentication to log in.
Note: a previous version of this tutorial had instructions for adding an SSH public key to your DigitalOcean account. Those instructions can now be found in the SSH Keys section of our DigitalOcean product documentation.
There are multiple ways to upload your public key to your remote SSH server. The method you use depends largely on the tools you have available and the details of your current configuration.
The following methods all yield the same end result. The simplest, most automated method is described first, and the ones that follow it each require additional manual steps. You should follow these only if you are unable to use the preceding methods.
The simplest way to copy your public key to an existing server is to use a utility called ssh-copy-id
. Because of its simplicity, this method is recommended if available.
The ssh-copy-id
tool is included in the OpenSSH packages in many distributions, so you may already have it available on your local system. For this method to work, you must currently have password-based SSH access to your server.
To use the utility, you need to specify the remote host that you would like to connect to, and the user account that you have password-based SSH access to. This is the account where your public SSH key will be copied.
The syntax is:
- ssh-copy-id username@remote_host
You may see a message like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type yes
and press ENTER
to continue.
Next, the utility will scan your local account for the id_rsa.pub
key that we created earlier. When it finds the key, it will prompt you for the password of the remote user’s account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Type in the password (your typing will not be displayed for security purposes) and press ENTER
. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub
key into a file in the remote account’s home ~/.ssh
directory called authorized_keys
.
You will see output that looks like this:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub
key has been uploaded to the remote account. You can continue onto the next section.
If you do not have ssh-copy-id
available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.
We can do this by outputting the content of our public SSH key on our local computer and piping it through an SSH connection to the remote server. On the other side, we can make sure that the ~/.ssh
directory exists under the account we are using and then output the content we piped over into a file called authorized_keys
within this directory.
We will use the >>
redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying previously added keys.
The full command will look like this:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see a message like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type yes
and press ENTER
to continue.
Afterwards, you will be prompted with the password of the account you are attempting to connect to:
Outputusername@203.0.113.1's password:
After entering your password, the content of your id_rsa.pub
key will be copied to the end of the authorized_keys
file of the remote user’s account. Continue to the next section if this was successful.
If you do not have password-based SSH access to your server available, you will have to do the above process manually.
The content of your id_rsa.pub
file will have to be added to a file at ~/.ssh/authorized_keys
on your remote machine somehow.
To display the content of your id_rsa.pub
key, type this into your local computer:
- cat ~/.ssh/id_rsa.pub
You will see the key’s content, which may look something like this:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== username@hostname
Access your remote host using whatever method you have available. This may be a web-based console provided by your infrastructure provider.
Note: if you’re using a DigitalOcean Droplet, please refer to our Recovery Console documentation in the DigitalOcean product docs.
Once you have access to your account on the remote server, you should make sure the ~/.ssh
directory is created. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys
file within this directory. You can add the contents of your id_rsa.pub
file to the end of the authorized_keys
file, creating it if necessary, using this:
- echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute the public_key_string
with the output from the cat ~/.ssh/id_rsa.pub
command that you executed on your local system. It should start with ssh-rsa AAAA...
or similar.
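One detail that often causes trouble here: the SSH daemon is strict about permissions and may ignore authorized_keys if the directory or file is writable by others. To be safe, you can tighten them after creating the file:
- chmod 700 ~/.ssh
- chmod 600 ~/.ssh/authorized_keys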
If this works, you can move on to test your new key-based SSH authentication.
If you have successfully completed one of the procedures above, you should be able to log into the remote host without the remote account’s password.
The process is mostly the same:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Type yes
and then press ENTER
to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created the key, you will be required to enter it now. Afterwards, a new shell session will be created for you with the account on the remote system.
If successful, continue on to find out how to lock down the server.
If you were able to login to your account using SSH without a password, you have successfully configured SSH key-based authentication to your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH key-based authentication configured for the root account on this server, or preferably, that you have SSH key-based authentication configured for an account on this server with sudo
access. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is essential.
Once the above conditions are true, log into your remote server with SSH keys, either as root or with an account with sudo
privileges. Open the SSH daemon’s configuration file:
- sudo nano /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication
. This may be commented out. Uncomment the line by removing any #
at the beginning of the line, and set the value to no
. This will disable your ability to log in through SSH using account passwords:
PasswordAuthentication no
Save and close the file when you are finished. To actually implement the changes we just made, you must restart the service.
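Before restarting, it is a good idea to have the daemon validate the configuration file so that a typo does not lock you out:
- sudo sshd -t
If this command prints nothing, the configuration parsed cleanly.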
On most Linux distributions, you can issue the following command to do that (note that on some distributions, such as CentOS, the service is named sshd rather than ssh):
- sudo systemctl restart ssh
After completing this step, you’ve successfully transitioned your SSH daemon to only respond to SSH keys.
You should now have SSH key-based authentication configured and running on your server, allowing you to sign in without providing an account password. From here, there are many directions you can head. If you’d like to learn more about working with SSH, take a look at our SSH essentials guide.
In this tutorial, you’ll use the curl
command to download a text file from a web server. You’ll view its contents, save it locally, and tell curl
to follow redirects if files have moved.
Downloading files off of the Internet can be dangerous, so be sure you are downloading from reputable sources. In this tutorial you’ll download files from DigitalOcean, and you won’t be executing any files you download.
Out of the box, without any command-line arguments, the curl
command will fetch a file and display its contents to the standard output.
Let’s give it a try by downloading the robots.txt
file from DigitalOcean.com:
- curl https://www.digitalocean.com/robots.txt
You’ll see the file’s contents displayed on the screen:
OutputUser-agent: *
Disallow:
sitemap: https://www.digitalocean.com/sitemap.xml
sitemap: https://www.digitalocean.com/community/main_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/questions_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/users_sitemap.xml.gz
Give curl
a URL and it will fetch the resource and display its contents.
Fetching a file and displaying its contents is all well and good, but what if you want to actually save the file to your system?
To save the remote file to your local system, using the same filename as on the server you’re downloading from, add the --remote-name
argument, or use the -O
option:
- curl -O https://www.digitalocean.com/robots.txt
Your file will download:
Output % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 286 0 286 0 0 5296 0 --:--:-- --:--:-- --:--:-- 5296
Instead of displaying the contents of the file, curl
displays a text-based progress meter and saves the file to the same name as the remote file’s name. You can check on things with the cat
command:
- cat robots.txt
The file contains the same contents you saw previously:
OutputUser-agent: *
Disallow:
sitemap: https://www.digitalocean.com/sitemap.xml
sitemap: https://www.digitalocean.com/community/main_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/questions_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/users_sitemap.xml.gz
Now let’s look at specifying a filename for the downloaded file.
You may already have a local file with the same name as the file on the remote server.
To avoid overwriting your local file of the same name, use the -o
or --output
argument, followed by the name of the local file you’d like to save the contents to.
Execute the following command to download the remote robots.txt
file to the locally named do-bots.txt
file:
- curl -o do-bots.txt https://www.digitalocean.com/robots.txt
Once again you’ll see the progress bar:
Output % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 286 0 286 0 0 6975 0 --:--:-- --:--:-- --:--:-- 7150
Now use the cat
command to display the contents of do-bots.txt
to verify it’s the file you downloaded:
- cat do-bots.txt
The contents are the same:
OutputUser-agent: *
Disallow:
sitemap: https://www.digitalocean.com/sitemap.xml
sitemap: https://www.digitalocean.com/community/main_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/questions_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/users_sitemap.xml.gz
By default, curl
doesn’t follow redirects, so when files move, you might not get what you expect. Let’s look at how to fix that.
Thus far all of the examples have included fully qualified URLs that include the https://
protocol. If you happened to try to fetch the robots.txt
file and only specified www.digitalocean.com
, you would not see any output, because DigitalOcean redirects requests from http://
to https://
.
You can verify this by using the -I
flag, which displays the request headers rather than the contents of the file:
- curl -I www.digitalocean.com/robots.txt
The output shows that the URL was redirected. The first line of the output tells you that it was moved, and the Location
line tells you where:
OutputHTTP/1.1 301 Moved Permanently
Cache-Control: max-age=3600
Cf-Ray: 65dd51678fd93ff7-YYZ
Cf-Request-Id: 0a9e3134b500003ff72b9d0000000001
Connection: keep-alive
Date: Fri, 11 Jun 2021 19:41:37 GMT
Expires: Fri, 11 Jun 2021 20:41:37 GMT
Location: https://www.digitalocean.com/robots.txt
Server: cloudflare
. . .
You could use curl
to make another request manually, or you can use the --location
or -L
argument which tells curl
to redo the request to the new location whenever it encounters a redirect. Give it a try:
- curl -L www.digitalocean.com/robots.txt
This time you see the output, as curl
followed the redirect:
OutputUser-agent: *
Disallow:
sitemap: https://www.digitalocean.com/sitemap.xml
sitemap: https://www.digitalocean.com/community/main_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/questions_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/users_sitemap.xml.gz
You can combine the -L
argument with some of the aforementioned arguments to download the file to your local system:
- curl -L -o do-bots.txt www.digitalocean.com/robots.txt
Warning: Many resources online will ask you to use curl
to download scripts and execute them. Before you run any scripts you have downloaded, it’s good practice to check their contents before making them executable and running them. Use the less
command to review the code to ensure it’s something you want to run.
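For example, rather than piping a downloaded script straight into a shell, you might save it first and read it through (the URL here is only a placeholder):
- curl -o install.sh https://example.com/install.sh
- less install.sh
Only after reviewing the script would you make it executable and run it.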
curl
lets you quickly download files from a remote system. curl
supports many different protocols and can also make more complex web requests, including interacting with remote APIs to send and receive data.
You can learn more by viewing the manual page for curl
by running man curl
.
This is a new Ubuntu 20.04.2 server I just installed on a Raspberry Pi 4, and on a VirtualBox VM as well. I haven’t created a public/private key pair; I’m just trying to access it using my user password. I want to know if I need to change some configuration(s) on my Windows PC, or what I have to change on my Ubuntu server, so I can get access and then set up the public/private key pair. Or, if there’s already one created in Linux, how can I send it to my Windows machine so I can get access to the server?
I’m confused and hope you can understand my question(s) and provide a resolution.
Everything works fine when I access the server from outside. The web server is accessible both via IP and subdomain (it's serving the proper content). However, when I try to access it from the LAN, it works only via the local IP address. When I try to use the public IP or the domain, I get an ERR_EMPTY_RESPONSE error in the browser.
The error log shows that Apache is shutting down for some reason (only when accessed from the LAN via the domain or public IP). It happens with both HTTP and HTTPS configurations.
[Thu May 27 15:02:10.985765 2021] [mpm_prefork:notice] [pid 2889] AH00169: caught SIGTERM, shutting down
[Thu May 27 15:02:11.481503 2021] [mpm_prefork:notice] [pid 3111] AH00163: Apache/2.4.38 (Debian) OpenSSL/1.1.1d configured -- resuming normal operations
[Thu May 27 15:02:11.481570 2021] [core:notice] [pid 3111] AH00094: Command line: '/usr/sbin/apache2'
No IPs are blocked in the firewall. DNS is correctly resolving the domain to the public IP.
Does anyone have an idea of how to find some answers?
This is the second mini-tutorial about associative arrays in Bash. You can check the previous one here:
In this mini-tutorial, we'll be talking about how to access elements in an associative array, stepping through them individually with for loops.
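As a preview of what that looks like (a minimal sketch of the technique, not part of the original post; associative arrays require Bash 4 or newer), you can expand all of the keys with ${!array[@]} and look up each value inside the loop:
declare -A fruit=( [apple]=red [banana]=yellow )
# Iterate over the keys, then use each key to fetch its value
for key in "${!fruit[@]}"; do
    echo "$key is ${fruit[$key]}"
done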
When I run grub-emu, I see:
error: sparse file not allowed.
error: no such device: 21b294f1-25bd-4265-9c4e-d6e4aeb57e97.
error: can't find command `linux'.
error: can't find command `initrd'.
If I run file -s /dev/sda
, I see:
/dev/sda: Linux rev 1.0 ext4 filesystem data, UUID=6e1fd477-4aec-4323-83b8-55c419ce471f (needs journal recovery) (extents) (64bit) (large files) (huge files)
This is very worrying because I can't reboot the machine. What can I do?
I also posted this question to https://askubuntu.com/questions/1337564/my-grub-bootloader-appears-to-be-severely-broken-how-can-i-repair-it
kubectl delete pods <pod> --grace-period=0 --force
Later, I tried to run helm upgrade again; my pod was stuck in ContainerCreating status, with this event from the pod:
Warning FailedMount pod/db-mysql-primary-0 MountVolume.SetUp failed for volume "pvc-f32a6f84-d897-4e35-9595-680302771c54" : kubernetes.io/csi: mounter.SetUpAt failed to check for STAGE_UNSTAGE_VOLUME capability: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/lib/kubelet/plugins/dobs.csi.digitalocean.com/csi.sock: connect: no such file or directory"
Can anyone please help me to resolve this issue? Thanks a lot.
So recently I was working on a project where I had to implement some new features. The project was primarily written in Bash. While I love Bash, it's best for simpler tasks, or at least that's what I thought.
What is an associative array?
In computer science, an associative array, map, symbol table, or dictionary is an abstract data type composed of a collection of (key, value) pairs, such that each possible key appears at most once in the collection.
I've used associative arrays in different projects when I had to work with PHP, Python, and other languages, but never with Bash; I didn't even know it was possible.
In this mini-tutorial, I'll go over exactly that: associative arrays in Bash, and how to define, populate, and use them.
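As a small taste of the syntax (a hedged sketch, assuming Bash 4 or newer; the names are placeholders), declaring and populating an associative array looks like this:
# Declare an associative array, then populate it one key at a time
declare -A user_age
user_age[alice]=30
user_age[bob]=25
echo "${user_age[alice]}"   # look up a single value by its key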
My server is running low on disk space, and I think it is because of some Docker images.
How can I find the directory where the Docker images are stored on the host/server itself?
Thanks!
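In case it is useful context for anyone reading along: on a standard Docker Engine install, the storage root defaults to /var/lib/docker, and Docker itself can report the configured location (a minimal sketch):
- docker info --format '{{ .DockerRootDir }}'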
ERROR: Could not find a version that satisfies the requirement cloud-init==20.2 (from -r /tmp/build_bd2124f1/requirements.txt (line 8)) (from versions: none)
remote:
ERROR: No matching distribution found for cloud-init==20.2 (from -r /tmp/build_bd2124f1/requirements.txt (line 8))
remote: !
Push rejected, failed to compile Python app.
My requirements.txt file:
asgiref==3.3.4
attrs==19.3.0
Automat==0.8.0
blinker==1.4
certifi==2019.11.28
chardet==3.0.4
Click==7.0
cloud-init==20.2
colorama==0.4.3
command-not-found==0.3
conda==4.3.16
configobj==5.0.6
constantly==15.1.0
cryptography==2.8
dbus-python==1.2.16
distro==1.4.0
distro-info===0.23ubuntu1
dj-database-url==0.5.0
Django==3.2
django-heroku==0.3.1
entrypoints==0.3
gunicorn==20.1.0
httplib2==0.14.0
hyperlink==19.0.0
idna==2.8
importlib-metadata==1.5.0
incremental==16.10.1
jellyfish==0.8.2
Jinja2==2.10.1
jsonpatch==1.22
jsonpointer==2.0
jsonschema==3.2.0
keyring==18.0.1
language-selector==0.1
launchpadlib==1.10.13
lazr.restfulclient==0.14.2
lazr.uri==1.0.3
lib50==2.0.8
MarkupSafe==1.1.0
more-itertools==4.2.0
netifaces==0.10.4
oauthlib==3.1.0
pbr==5.6.0
pexpect==4.8.0
psycopg2==2.8.6
ptyprocess==0.6.0
pyasn1==0.4.2
pyasn1-modules==0.2.1
pycairo==1.20.0
pycosat==0.6.3
PyGObject==3.36.0
PyHamcrest==1.9.0
PyJWT==1.7.1
pymacaroons==0.13.0
PyNaCl==1.3.0
pyOpenSSL==19.0.0
pyrsistent==0.15.5
pyserial==3.4
python-aalib==0.4
python-apt==2.0.0+ubuntu0.20.4.1
python-debian===0.1.36ubuntu1
python-decouple==3.4
pytz==2021.1
PyYAML==5.3.1
requests==2.22.0
requests-unixsocket==0.2.0
ruamel.yaml==0.17.4
ruamel.yaml.clib==0.2.2
SecretStorage==2.3.1
service-identity==18.1.0
simplejson==3.16.0
six==1.14.0
sqlparse==0.4.1
ssh-import-id==5.10
submit50==3.0.2
systemd-python==234
termcolor==1.1.0
testresources==2.0.1
touch==2020.12.3
Twisted==18.9.0
ubuntu-advantage-tools==20.3
ufw==0.36
unattended-upgrades==0.1
urllib3==1.25.8
values==2020.12.3
wadllib==1.3.3
whitenoise==5.2.0
zipp==1.0.0
zope.interface==4.7.1
However, when I created a SAS data set using the SAS Programming Runtime Environment (SPRE) in Viya while logged in as that specific user, the permissions being reflected are 0022.
Can anyone help me understand what could have happened to override the initially set umask of 0002 to 0022?
Thanks in advance.
For my domain mbaglue.com, I am getting flaky SSL errors. On certain visits, users are reporting an "SSL certificate expired" error.
I validated this with third-party SSL sites, and indeed, on certain requests the SSL cert shows as expired on April 25th.
However, when I run certbot certificates, it shows:
Found the following certs:
Certificate Name: mbaglue.com
Domains: mbaglue.com www.mbaglue.com
Expiry Date: 2021-06-25 02:59:14+00:00 (VALID: 60 days)
Certificate Path: /etc/letsencrypt/live/mbaglue.com/fullchain.pem
Private Key Path: /etc/letsencrypt/live/mbaglue.com/privkey.pem
This is conflicting evidence piling up. Can someone help with this, please?
DO support has been unresponsive for hours now.
I am writing a script that will take some user input, but I want to make sure that the string provided by the user is consistent.
Is there a way to transform a string in bash to all lower case or all upper case?
Thanks!
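For reference, one common approach in Bash 4 and newer is the case-modification forms of parameter expansion (a minimal sketch; the variable name is a placeholder):
input="MiXeD CaSe"
echo "${input,,}"   # prints the string in all lower case
echo "${input^^}"   # prints the string in all upper case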
I am just getting started with Kubernetes and the kubectl
command specifically.
As there are so many flags and arguments, is there a way to enable autocompletion just like in git
?
So for example when I type kubectl get no[TAB]
it would autocomplete nodes
.
Thank you!
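For anyone with the same question, kubectl ships with a completion subcommand for exactly this; a minimal sketch for Bash (assuming the bash-completion package is installed) is:
- source <(kubectl completion bash)
- echo 'source <(kubectl completion bash)' >>~/.bashrc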
I have set up a droplet with LAMP installed and managed to clone the repository from GitHub to /var/www/html. If I set a regular HTML page as the index, it works just fine. However, attempting to do the same with a PHP file presents me with the error seen in the title of the post. I have a .sql file containing the data and I can't figure out where it should be uploaded to. I know this is a problem with the database, but I am unsure of how to solve it.
Using XAMPP to test locally works perfectly fine.
I realize this is a novice question and might come across as ignorant, but this is my first time hosting and using a Linux terminal.
Greatly appreciate any and all help.
Thank you.
A Linux server, like any other computer you may be familiar with, runs applications. To the computer, these are considered "processes".
While Linux handles the low-level, behind-the-scenes management in a process's life cycle, you will need a way of interacting with the operating system to manage it from a higher level.
In this guide, we will discuss some simple aspects of process management. Linux provides an ample collection of tools for this purpose.
We will explore these ideas on an Ubuntu 12.04 VPS, but any modern Linux distribution will operate in a similar way.
The easiest way to find out what processes are running on your server is to run the top command:
- top
Outputtop - 15:14:40 up 46 min,  1 user,  load average: 0.00, 0.01, 0.05
Tasks:  56 total,   1 running,  55 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1019600k total,   316576k used,   703024k free,     7652k buffers
Swap:        0k total,        0k used,        0k free,   258976k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    1 root      20   0 24188 2120 1300 S  0.0  0.2   0:00.56 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
    3 root      20   0     0    0    0 S  0.0  0.0   0:00.07 ksoftirqd/0
    6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
    7 root      RT   0     0    0    0 S  0.0  0.0   0:00.03 watchdog/0
    8 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 cpuset
    9 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 khelper
   10 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kdevtmpfs
The top chunk of information gives system statistics, such as the system load and the total number of tasks.
You can easily see that there is 1 running process and 55 sleeping processes (aka idle/not using CPU resources).
The bottom portion lists the running processes and their usage statistics.
An improved version of top, called htop, is available in the repositories. Install it with this command:
- sudo apt-get install htop
If we run the htop command, we will see that there is a more user-friendly display:
- htop
Output  Mem[|||||||||||           49/995MB]     Load average: 0.00 0.03 0.05
  CPU[                          0.0%]     Tasks: 21, 3 thr; 1 running
  Swp[                         0/0MB]     Uptime: 00:58:11

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 1259 root       20   0 25660  1880  1368 R  0.0  0.2  0:00.06 htop
    1 root       20   0 24188  2120  1300 S  0.0  0.2  0:00.56 /sbin/init
  311 root       20   0 17224   636   440 S  0.0  0.1  0:00.07 upstart-udev-brid
  314 root       20   0 21592  1280   760 S  0.0  0.1  0:00.06 /sbin/udevd --dae
  389 messagebu  20   0 23808   688   444 S  0.0  0.1  0:00.01 dbus-daemon --sys
  407 syslog     20   0  243M  1404  1080 S  0.0  0.1  0:00.02 rsyslogd -c5
  408 syslog     20   0  243M  1404  1080 S  0.0  0.1  0:00.00 rsyslogd -c5
  409 syslog     20   0  243M  1404  1080 S  0.0  0.1  0:00.00 rsyslogd -c5
  406 syslog     20   0  243M  1404  1080 S  0.0  0.1  0:00.04 rsyslogd -c5
  553 root       20   0 15180   400   204 S  0.0  0.0  0:00.01 upstart-socket-br
You can learn more about how to use top and htop here.
Both top and htop provide a nice interface for viewing the running processes, resembling a graphical task manager.
However, these tools are not always flexible enough to adequately cover all scenarios. A powerful command called ps is often the answer to these problems.
When called without any arguments, the output can be a bit lackluster:
- ps
Output  PID TTY          TIME CMD
 1017 pts/0    00:00:00 bash
 1262 pts/0    00:00:00 ps
This output shows all of the processes associated with the current user and terminal session. This makes sense, because we are only running bash and ps with this terminal currently.
To get a more complete picture of the processes on this system, we can run:
- ps aux
OutputUSER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.2  24188  2120 ?        Ss   14:28   0:00 /sbin/init
root         2  0.0  0.0      0     0 ?        S    14:28   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        S    14:28   0:00 [ksoftirqd/0]
root         6  0.0  0.0      0     0 ?        S    14:28   0:00 [migration/0]
root         7  0.0  0.0      0     0 ?        S    14:28   0:00 [watchdog/0]
root         8  0.0  0.0      0     0 ?        S<   14:28   0:00 [cpuset]
root         9  0.0  0.0      0     0 ?        S<   14:28   0:00 [khelper]
. . .
These options tell ps to show processes owned by all users (regardless of their terminal association) in a user-friendly format.
To see a tree view, where hierarchical relationships are illustrated, we can run the command with these options:
- ps axjf
Output PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND
    0     2     0     0 ?           -1 S        0   0:00 [kthreadd]
    2     3     0     0 ?           -1 S        0   0:00  \_ [ksoftirqd/0]
    2     6     0     0 ?           -1 S        0   0:00  \_ [migration/0]
    2     7     0     0 ?           -1 S        0   0:00  \_ [watchdog/0]
    2     8     0     0 ?           -1 S<       0   0:00  \_ [cpuset]
    2     9     0     0 ?           -1 S<       0   0:00  \_ [khelper]
    2    10     0     0 ?           -1 S        0   0:00  \_ [kdevtmpfs]
    2    11     0     0 ?           -1 S<       0   0:00  \_ [netns]
. . .
As you can see, the process kthreadd is shown to be a parent of the ksoftirqd/0 process and the others.
In Linux and Unix-like systems, each process is assigned a process ID, or PID. This is how the operating system identifies and keeps track of processes.
A quick way of getting the PID of a process is with the pgrep command:
- pgrep bash
Output1017
This will simply query the process ID and return it.
The first process spawned at boot, called init, is given the PID of "1".
- pgrep init
Output1
This process is then responsible for spawning every other process on the system. The later processes are assigned larger PID numbers.
A process's parent is the process that was responsible for spawning it. Parent processes have a PPID, which you can see in the column headers in many process management applications, including top, htop, and ps.
Any communication between the user and the operating system about processes involves translating between process names and PIDs at some point during the operation. This is why utilities tell you the PID.
Creating a child process happens in two steps: fork(), which creates a new address space and copies the resources owned by the parent via copy-on-write to be available to the child process; and exec(), which loads an executable into the address space and executes it.
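As a quick way to see the parent/child relationship in practice, you can fork a background process from your shell and ask ps for its PPID (a minimal sketch; the PIDs will differ on your system):
- sleep 30 &
- ps -o pid,ppid,stat,cmd -p $!
The PPID column in the output will match the PID of the shell that spawned the sleep process.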
In the event that a child process dies before its parent, the child becomes a zombie until the parent has collected information about it or indicated to the kernel that it does not need that information. The resources from the child process will then be freed. If the parent process dies before the child, however, the child will be adopted by init, although it can also be reassigned to another process.
All processes in Linux respond to signals. Signals are an operating system-level way of telling programs to terminate or modify their behavior.
The most common way of passing signals to a program is with the kill command.
As you might expect, the default functionality of this utility is to attempt to kill a process:
- kill PID_of_target_process
This sends the TERM signal to the process. The TERM signal tells the process to please terminate. This allows the program to perform clean-up operations and exit smoothly.
If the program is misbehaving and does not exit when given the TERM signal, we can escalate the signal by passing the KILL signal instead:
- kill -KILL PID_of_target_process
This is a special signal that is not sent to the program.
Instead, it is given to the operating system kernel, which shuts down the process. This is used to bypass programs that ignore the signals sent to them.
Each signal has an associated number that can be passed instead of the name. For instance, you can pass "-15" instead of "-TERM", and "-9" instead of "-KILL".
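For instance, these two commands deliver exactly the same signal (the PID here is a placeholder):
- kill -TERM 1234
- kill -15 1234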
Signals are not only used to shut down programs. They can also be used to perform other actions.
For instance, many daemons will restart when they are given the HUP, or hang-up, signal. Apache is one program that operates like this.
- sudo kill -HUP pid_of_apache
The above command will cause Apache to reload its configuration file and resume serving content.
You can list all of the signals that are possible to send with kill by typing:
- kill -l
Output 1) SIGHUP      2) SIGINT      3) SIGQUIT     4) SIGILL      5) SIGTRAP
 6) SIGABRT     7) SIGBUS      8) SIGFPE      9) SIGKILL    10) SIGUSR1
11) SIGSEGV    12) SIGUSR2    13) SIGPIPE    14) SIGALRM    15) SIGTERM
. . .
Although the conventional way of sending signals is through the use of PIDs, there are also methods of doing this with regular process names.
The pkill command works in almost exactly the same way as kill, but it operates on a process name instead:
- pkill -9 ping
The above command is the equivalent of:
- kill -9 `pgrep ping`
If you would like to send a signal to every instance of a certain process, you can use the killall command:
- killall firefox
The above command will send the TERM signal to every instance of firefox running on the computer.
Often, you will want to adjust which processes are given priority in a server environment.
Some processes might be considered mission-critical for your situation, while others may be executed whenever there are leftover resources.
Linux controls priority through a value called niceness.
High-priority tasks are considered less nice, because they don't share resources as well. Low-priority processes, on the other hand, are nice because they insist on only taking minimal resources.
When we ran top at the beginning of the article, there was a column marked "NI". This is the nice value of the process:
- top
OutputTasks:  56 total,   1 running,  55 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1019600k total,   324496k used,   695104k free,     8512k buffers
Swap:        0k total,        0k used,        0k free,   264812k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1635 root      20   0 17300 1200  920 R  0.3  0.1   0:00.01 top
    1 root      20   0 24188 2120 1300 S  0.0  0.2   0:00.56 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
    3 root      20   0     0    0    0 S  0.0  0.0   0:00.11 ksoftirqd/0
Nice values can range between "-19/-20" (highest priority) and "19/20" (lowest priority), depending on the system.
To run a program with a certain nice value, we can use the nice command:
- nice -n 15 command_to_execute
This only works when beginning a new program.
To alter the nice value of a program that is already executing, we use a tool called renice:
- renice 0 PID_to_prioritize
Note: While nice necessarily works with a command name, renice is called with the process PID.
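Putting the two together, a low-priority run of a command and a later adjustment of its priority might look like this (a sketch; the command and PID are placeholders):
- nice -n 15 tar -czf logs.tar.gz /var/log
- renice 5 PID_of_tar_process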
Process management is a topic that is sometimes difficult for new users to grasp, because the tools used are different from their graphical counterparts.
However, the ideas are familiar and intuitive, and with a little practice will become natural. Because processes are involved in everything you do with a computer system, learning how to effectively control them is an essential skill.
By Justin Ellingwood
Privilege separation is one of the fundamental security paradigms implemented in Linux and Unix-like operating systems. Regular users operate with limited privileges in order to reduce the scope of their influence to their own environment, and not the wider operating system.
A special user, called root, has super-user privileges. This is an administrative account without the restrictions that are present for normal users. Users can execute commands with super-user or root privileges in a number of different ways.
In this article, we will discuss how to correctly and securely obtain root privileges, with a special focus on editing the /etc/sudoers file.
We will be completing these steps on an Ubuntu 20.04 server, but most modern Linux distributions, such as Debian and CentOS, should function in a similar manner.
This guide assumes that you have already completed the initial server setup discussed here. Log into your server as a regular, non-root user and continue below.
Note: This tutorial goes into depth about privilege escalation and the sudoers file. If you just want to add sudo privileges to a user, check out our quickstart tutorials on How To Create a New Sudo-enabled User for Ubuntu and CentOS.
There are three basic ways to obtain root privileges, which vary in their level of sophistication.
The simplest and most straightforward method of obtaining root privileges is to log into your server directly as the root user.
If you are logging into a local machine (or using an out-of-band console feature on a virtual server), enter root as your username at the login prompt and enter the root password when asked.
If you are logging in through SSH, specify the root user prior to the IP address or domain name in your SSH connection string:
- ssh root@server_domain_or_ip
If you have not set up SSH keys for the root user, enter the root password when prompted.
Using su to become the root user
Logging in directly as the root user is usually not recommended, because it is easy to begin using the system for non-administrative tasks, which is dangerous.
The next way of gaining super-user privileges allows you to become the root user at any time, as you need it.
We can do this by invoking the su command, which stands for "substitute user". To gain root privileges, type:
- su
You will be prompted for the root user's password, after which you will be dropped into a root shell session.
When you have finished the tasks that require root privileges, return to your normal shell by typing:
- exit
Using sudo to run commands as the root user
The last way of obtaining root privileges that we will discuss is with the sudo command.
The sudo command allows you to execute one-off commands with root privileges, without the need to spawn a new shell. It is executed like this:
- sudo command_to_execute
Unlike su, the sudo command will request the password of the current user, not the root password.
Because of its security implications, sudo access is not granted to users by default, and must be set up before it functions correctly. Check out our quickstart tutorials on How To Create a New Sudo-enabled User for Ubuntu and CentOS to learn how to set up a sudo-enabled user.
In the following section, we will discuss how to modify the sudo configuration in greater detail.
The sudo command is configured through a file located at /etc/sudoers.
Warning: Never edit this file with a normal text editor! Always use the visudo command instead!
Because improper syntax in the /etc/sudoers file can leave you with a broken system where it is impossible to obtain elevated privileges, it is important to use the visudo command to edit the file.
The visudo command opens a text editor like normal, but it validates the syntax of the file upon saving. This prevents configuration errors from blocking sudo operations, which may be your only way of obtaining root privileges.
Traditionally, visudo opens the /etc/sudoers file with the vi text editor. Ubuntu, however, has configured visudo to use the nano text editor instead.
If you would like to change it back to vi, issue the following command:
- sudo update-alternatives --config editor
OutputThere are 4 choices for the alternative editor (providing /usr/bin/editor).
Selection Path Priority Status
------------------------------------------------------------
* 0 /bin/nano 40 auto mode
1 /bin/ed -100 manual mode
2 /bin/nano 40 manual mode
3 /usr/bin/vim.basic 30 manual mode
4 /usr/bin/vim.tiny 10 manual mode
Press <enter> to keep the current choice[*], or type selection number:
Choose the number that corresponds to the selection you would like to make.
On CentOS, you can change this value by adding the following line to your ~/.bashrc:
- export EDITOR=`which name_of_editor`
Source the file to implement the changes:
- . ~/.bashrc
After you have configured visudo, execute the command to access the /etc/sudoers file:
- sudo visudo
The /etc/sudoers file will be displayed in your selected text editor.
I have copied and pasted the file from Ubuntu 18.04, with comments removed. The CentOS /etc/sudoers file has many more lines, some of which we will not discuss in this guide.
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
root ALL=(ALL:ALL) ALL
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL
#includedir /etc/sudoers.d
Let's take a look at what these lines do.
The first line, "Defaults env_reset", resets the terminal environment to remove any user variables. This is a safety measure used to clear potentially harmful environmental variables from the sudo session.
The second line, Defaults mail_badpass, tells the system to mail notices of bad sudo password attempts to the configured mailto user. By default, this is the root account.
The third line, which begins with "Defaults secure_path=...", specifies the PATH (the places in the filesystem the operating system will look for applications) that will be used for sudo operations. This prevents using user paths which may be harmful.
The fourth line, which dictates the root user's sudo privileges, is different from the preceding lines. Let's take a look at what the different fields mean:
root ALL=(ALL:ALL) ALL
The first field indicates the username that the rule will apply to (root).
root ALL=(ALL:ALL) ALL
The first "ALL" indicates that this rule applies to all hosts.
root ALL=(ALL:ALL) ALL
This "ALL" indicates that the root user can run commands as all users.
root ALL=(ALL:ALL) ALL
This "ALL" indicates that the root user can run commands as all groups.
root ALL=(ALL:ALL) ALL
The last "ALL" indicates that these rules apply to all commands.
This means that our root user can run any command using sudo, as long as they provide their password.
The next two lines are similar to the user privilege lines, but they specify sudo rules for groups.
Names beginning with a % indicate group names.
Here, we see that the admin group can execute any command as any user on any host. Similarly, the sudo group has the same privileges, but can execute as any group as well.
The last line might look like a comment at first glance:
. . .
#includedir /etc/sudoers.d
It does begin with a #, which usually indicates a comment. However, this line actually indicates that files within the /etc/sudoers.d directory will be sourced and applied as well.
Files within that directory follow the same rules as the /etc/sudoers file itself. Any file that does not end in ~ and that does not contain a . will be read and appended to the sudo configuration.
This is mainly meant for applications that want to alter sudo privileges upon installation. Placing all of the associated rules within a single file in the /etc/sudoers.d directory makes it easy to see which privileges are associated with which accounts, and easy to reverse credentials without having to edit the /etc/sudoers file directly.
As with the /etc/sudoers file itself, you should always edit files within the /etc/sudoers.d directory with visudo. The syntax for editing these files would be:
- sudo visudo -f /etc/sudoers.d/file_to_edit
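For instance, a drop-in file granting a hypothetical deploy account a single administrative command (the username, file name, and service here are all placeholders for illustration) could contain one rule:
- sudo visudo -f /etc/sudoers.d/deploy
deploy ALL = /usr/bin/systemctl restart myapp.service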
The most common operation that users want to accomplish when managing sudo permissions is to grant a new user general sudo access. This is useful if you want to give an account full administrative access to the system.
The easiest way of doing this on a system set up with a general-purpose administration group, like the Ubuntu system in this guide, is to add the user in question to that group.
For example, on Ubuntu 20.04, the sudo group has full admin privileges. We can grant a user these same privileges by adding them to the group like this:
- sudo usermod -aG sudo username
The gpasswd command can be used as well:
- sudo gpasswd -a username sudo
These will both accomplish the same thing.
On CentOS, this is usually the wheel group instead of the sudo group:
- sudo usermod -aG wheel username
Or, using gpasswd:
- sudo gpasswd -a username wheel
On CentOS, if adding the user to the group does not work immediately, you may have to edit the /etc/sudoers file to uncomment the group name:
- sudo visudo
. . .
%wheel ALL=(ALL) ALL
. . .
Now that we have gotten familiar with the general syntax of the file, let's create some new rules.
The sudoers file can be organized more easily by grouping things with various kinds of "aliases".
For instance, we can create three different groups of users, with overlapping membership:
. . .
User_Alias GROUPONE = abby, brent, carl
User_Alias GROUPTWO = brent, doris, eric,
User_Alias GROUPTHREE = doris, felicia, grant
. . .
Group names must start with a capital letter. We can then allow members of GROUPTWO to update the apt database by creating a rule like this:
. . .
GROUPTWO ALL = /usr/bin/apt-get update
. . .
If no user/group to run as is specified, as above, sudo defaults to the root user.
We can allow members of GROUPTHREE to shut down and reboot the machine by creating a "command alias" and using that in a rule for GROUPTHREE:
. . .
Cmnd_Alias POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot, /sbin/restart
GROUPTHREE ALL = POWER
. . .
We create a command alias called POWER that contains commands to power off and reboot the machine. We then allow the members of GROUPTHREE to execute these commands.
We can also create "Run as" aliases, which can replace the portion of the rule that specifies the user to execute the command as:
. . .
Runas_Alias WEB = www-data, apache
GROUPONE ALL = (WEB) ALL
. . .
This will allow anyone who is a member of GROUPONE to execute commands as the www-data user or the apache user.
Just remember that later rules will override earlier rules when there is a conflict between the two.
There are a number of ways that you can get more control over how sudo reacts to a call.
The updatedb command associated with the mlocate package is relatively harmless on a single-user system. If we want to allow users to execute it with root privileges without having to type a password, we can make a rule like this:
. . .
GROUPONE ALL = NOPASSWD: /usr/bin/updatedb
. . .
NOPASSWD is a "tag" that means no password will be requested. It has a companion command called PASSWD, which is the default behavior. A tag is relevant for the rest of the rule unless overruled by its "twin" tag later down the line.
For instance, we can have a line like this:
. . .
GROUPTWO ALL = NOPASSWD: /usr/bin/updatedb, PASSWD: /bin/kill
. . .
Another helpful tag is NOEXEC, which can be used to prevent some dangerous behavior in certain programs.
For example, some programs, like less, can spawn other commands by typing this from within their interface:
!command_to_run
This basically executes any command the user gives it, with the same permissions that less is operating under, which can be quite dangerous.
To restrict this, we could use a line like the following:
. . .
username ALL = NOEXEC: /usr/bin/less
. . .
There are a few more pieces of information that may be useful when dealing with sudo.
If you specified a user or group to "run as" in the configuration file, you can execute commands as those users by using the -u and -g flags, respectively:
- sudo -u run_as_user command
- sudo -g run_as_group command
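As a quick sanity check (a sketch that assumes a Runas rule such as the WEB alias from earlier), you can confirm which user a command actually runs as with whoami:
- sudo -u www-data whoami
Outputwww-data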
Conveniently, by default, sudo will save your authentication details for a certain amount of time in one terminal. This means you won't have to type your password in again until that timer runs out.
For security purposes, if you wish to clear this timer when you are done running administrative commands, you can run:
- sudo -k
If, on the other hand, you want to "prime" the sudo command so that you won't be prompted later, or to renew your sudo lease, you can always type:
- sudo -v
You will be prompted for your password, which will be cached for later sudo uses until the sudo timeframe expires.
If you are simply wondering what kind of privileges are defined for your username, you can type:
- sudo -l
This will list all of the rules in the /etc/sudoers file that apply to your user. This gives you a good idea of what you will or will not be allowed to do with sudo as any user.
There are many times when you will execute a command and it will fail because you forgot to preface it with sudo. To avoid having to re-type the command, you can take advantage of a bash functionality that means "repeat the last command":
- sudo !!
The double exclamation point will repeat the last command. We preceded it with sudo to quickly change the unprivileged command into a privileged command.
For a little bit of fun, you can add the following line to your /etc/sudoers file with visudo:
- sudo visudo
. . .
Defaults insults
. . .
This will cause sudo to return a silly insult when a user types in an incorrect password for sudo. We can use sudo -k to clear the previously cached sudo password and try it out:
- sudo -k
- sudo ls
Output[sudo] password for demo: # enter an incorrect password here to see the results
Your mind just hasn't been the same since the electro-shock, has it?
[sudo] password for demo:
My mind is going. I can feel it.
You should now have a basic understanding of how to read and modify the sudoers file, and a grasp of the various methods that you can use to obtain root privileges.
Remember, super-user privileges are not given to regular users for a reason. It is essential that you understand what each command does that you execute with root privileges. Do not take the responsibility lightly. Learn the best way to use these tools for your use case, and lock down any functionality that is not needed.
Is it somehow possible to download all of the disk data from the droplet, so I can start configuring the system anew while saving all the stuff I had there?
]]>Сервер Linux, как и любой другой компьютер, использует приложения. Компьютер рассматривает эти приложения как процессы.
Хотя Linux автоматически выполняет все скрытые низкоуровневые задачи жизненного цикла процесса, нам необходим способ взаимодействия с операционной системой для управления на более высоком уровне.
В этом учебном модуле мы расскажем о некоторых простых аспектах управления процессами. Linux предоставляет широкий выбор инструментов для этой цели.
В качестве примера мы используем Ubuntu 12.04 VPS, но любые современные дистрибутивы Linux будут работать аналогичным образом.
Чтобы посмотреть, какие процессы запущены на вашем сервере, нужно запустить команду top
:
top***
top - 15:14:40 up 46 min, 1 user, load average: 0.00, 0.01, 0.05 Tasks: 56 total, 1 running, 55 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1019600k total, 316576k used, 703024k free, 7652k buffers Swap: 0k total, 0k used, 0k free, 258976k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 20 0 24188 2120 1300 S 0.0 0.2 0:00.56 init 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 root 20 0 0 0 0 S 0.0 0.0 0:00.07 ksoftirqd/0 6 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0 7 root RT 0 0 0 0 S 0.0 0.0 0:00.03 watchdog/0 8 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 cpuset 9 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 khelper 10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kdevtmpfs
В верхней части подборки приведена статистика по системе, в том числе сведения о нагрузке и общем количестве задач.
Вы можете легко увидеть, что в системе запущен 1 процесс, а 55 процессов находятся в режиме сна (т. е. не активны/не используют ресурсы ЦП).
В нижней части отображаются запущенные процессы и статистика их использования.
В репозиториях доступна улучшенная версия top
, которая называется htop
. Установите ее с помощью следующей команды:
sudo apt-get install htop
Если мы запустим команду htop
, мы увидим отображение информации в более удобном формате:
htop***
Mem[||||||||||| 49/995MB] Load average: 0.00 0.03 0.05 CPU[ 0.0%] Tasks: 21, 3 thr; 1 running Swp[ 0/0MB] Uptime: 00:58:11 PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 1259 root 20 0 25660 1880 1368 R 0.0 0.2 0:00.06 htop 1 root 20 0 24188 2120 1300 S 0.0 0.2 0:00.56 /sbin/init 311 root 20 0 17224 636 440 S 0.0 0.1 0:00.07 upstart-udev-brid 314 root 20 0 21592 1280 760 S 0.0 0.1 0:00.06 /sbin/udevd --dae 389 messagebu 20 0 23808 688 444 S 0.0 0.1 0:00.01 dbus-daemon --sys 407 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.02 rsyslogd -c5 408 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.00 rsyslogd -c5 409 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.00 rsyslogd -c5 406 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.04 rsyslogd -c5 553 root 20 0 15180 400 204 S 0.0 0.0 0:00.01 upstart-socket-br
Вы можете узнать больше об использовании top и htop здесь.
И top
, и htop
предоставляют удобный интерфейс для просмотра работающих процессов, похожий на графический диспетчер задач.
Однако эти инструменты не всегда достаточно гибкие, чтобы охватывать все сценарии. Решить эту проблему может помочь мощная команда ps
.
При вызове без аргументов вывод может быть довольно сжатым:
ps***
PID TTY TIME CMD 1017 pts/0 00:00:00 bash 1262 pts/0 00:00:00 ps
Вывод показывает все процессы, связанные с текущим пользователем и текущим сеансом терминала. Это имеет смысл, потому что мы запускаем на этом терминале только bash
и ps
.
Чтобы получить более полное представление о процессах в данной системе, мы можем использовать следующую команду:
ps aux***
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.2 24188 2120 ? Ss 14:28 0:00 /sbin/init root 2 0.0 0.0 0 0 ? S 14:28 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S 14:28 0:00 [ksoftirqd/0] root 6 0.0 0.0 0 0 ? S 14:28 0:00 [migration/0] root 7 0.0 0.0 0 0 ? S 14:28 0:00 [watchdog/0] root 8 0.0 0.0 0 0 ? S< 14:28 0:00 [cpuset] root 9 0.0 0.0 0 0 ? S< 14:28 0:00 [khelper] . . .
Эти опции предписывают ps
показать процессы, принадлежащие всем пользователям (вне зависимости от привязки терминала) в удобном формате.
Чтобы посмотреть представление дерева с иллюстрацией иерархических отношений, данную команду можно запустить с этими опциями:
ps axjf***
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND 0 2 0 0 ? -1 S 0 0:00 [kthreadd] 2 3 0 0 ? -1 S 0 0:00 \_ [ksoftirqd/0] 2 6 0 0 ? -1 S 0 0:00 \_ [migration/0] 2 7 0 0 ? -1 S 0 0:00 \_ [watchdog/0] 2 8 0 0 ? -1 S< 0 0:00 \_ [cpuset] 2 9 0 0 ? -1 S< 0 0:00 \_ [khelper] 2 10 0 0 ? -1 S 0 0:00 \_ [kdevtmpfs] 2 11 0 0 ? -1 S< 0 0:00 \_ [netns] . . .
Как видите, процесс kthreadd
отображается как родитель процесса ksoftirqd/0
и других процессов.
В системах Linux и Unix каждому процессу назначается идентификатор процесса или PID. Операционная система использует их для идентификации и отслеживания процессов.
Чтобы быстро узнать PID процесса, вы можете использовать команду pgrep
:
pgrep bash***
1017
Эта команда просто запросит идентификатор процесса и выведет его.
Процессу init, который создается первым при загрузке, присваивается PID “1”.
pgrep init***
1
Этот процесс отвечает за создание всех остальных процессов в системе. Последующим процессам присваиваются большие номера PID.
Родитель процесса — это процесс, который отвечает за его создание. Родительские процессы имеют идентификатор PPID, который можно увидеть в заголовках столбцов многих приложений для управления процессами, включая top
, htop
и ps
.
Любое взаимодействие между пользователем и операционной системой, связанное с процессами, включает взаимное преобразование имен процессов и PID. Именно поэтому утилиты сообщают вам PID.
Создание дочернего процесса осуществляется в два этапа: fork() создает новое адресное пространство и копирует в него ресурсы, принадлежащие родительскому процессу, с помощью copy-on-write; а exec() загружает исполняемый блок в адресное пространство и выполняет его.
Если дочерний процесс завершается раньше родительского, он остается бесхозным, пока родитель не получит информацию о нем или не сообщит ядру, что эта информация не требуется. В этом случае ресурсы дочернего процесса освободятся. Если родительский процесс завершается раньше дочернего, дочерний процесс привязывается к процессу init, хотя его можно переназначить другому процессу.
Все процессы Linux реагируют на сигналы. Операционная система использует сигналы, чтобы отправить программам команду остановиться или изменить поведение.
Наиболее распространенный способ передачи сигналов в программу — использовать команду kill
.
Как вы можете догадаться, по умолчанию эта утилита пытается уничтожить процесс:
<pre>kill <span class=“highlight”>PID_of_target_process</span></pre>
Она отправляет процессору сигнал TERM. Сигнал TERM просит процесс остановиться. Это позволяет программе выполнить операции по очистке и нормально завершить работу.
Если программа работает неправильно и не завершает работу после получения сигнала TERM, мы можем отправить сигнал более высокого уровня — KILL
:
<pre>kill -KILL <span class=“highlight”>PID_of_target_process</span></pre>
Это специальный сигнал, который не отправляется программе.
Вместо этого он передается в ядро операционной системы, которое отключает процесс. Он используется, чтобы обходить программы, игнорирующие отправляемые им сигналы.
Каждому сигналу присвоено число, которое можно передать вместо имени. Например, вы можете передать “-15” вместо “-TERM” и “-9” вместо “-KILL”.
Сигналы используются не только для отключения программ. Их также можно использовать для выполнения других действий.
Например, многие демоны перезапускаются при получении сигнала HUP
или прекращения работы. Например, так работает Apache.
<pre>sudo kill -HUP <span class=“highlight”>pid_of_apache</span></pre>
Получив вышеуказанную команду, Apache перезагрузит файл конфигурации и возобновит вывод контента.
Вы можете вывести список сигналов, которые можно отправлять с помощью kill, используя следующую команду:
kill -l***
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP 6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1 11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM . . .
Хотя обычно при отправке сигналов используются PID, существуют способы использовать для этой же цели обычные имена процессов.
Команда pkill
работает практически точно так же как и kill
, но использует имя процесса:
pkill -9 ping
Вышеуказанная команда эквивалентна команде:
kill -9 `pgrep ping`
Если вы хотите отправить сигнал каждому экземпляру определенного процесса, вы можете использовать команду killall
:
killall firefox
Приведенная выше команда отправит сигнал TERM всем экземплярам firefox, запущенным на этом компьютере.
Часто бывает необходимо изменить приоритет процессов в серверной среде.
Некоторые процессоры могут быть важными, а другие могут выполняться на излишках ресурсов.
Linux контролирует приоритеты с помощью значения вежливости.
Приоритетные задачи считаются менее вежливыми, потому что они вообще не делятся ресурсами. Процессы с низким приоритетом считаются более вежливыми, потому что они используют минимум ресурсов.
Когда мы запускали команду top
в начале этого учебного модуля, мы видели столбец “NI”. В этом столбце отображается значение вежливости процесса:
top***
Tasks: 56 total, 1 running, 55 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1019600k total, 324496k used, 695104k free, 8512k buffers Swap: 0k total, 0k used, 0k free, 264812k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1635 root 20 0 17300 1200 920 R 0.3 0.1 0:00.01 top 1 root 20 0 24188 2120 1300 S 0.0 0.2 0:00.56 init 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 root 20 0 0 0 0 S 0.0 0.0 0:00.11 ksoftirqd/0
В зависимости от системы, значения вежливости могут различаться от “-19/-20” (наибольший приоритет) до “19/20” (наименьший приоритет).
Чтобы запустить программу с определенным значением вежливости, мы можем использовать команду nice
:
<pre>nice -n 15 <span class=“highlight”>command_to_execute</span></pre>
Это работает только в начале новой программы.
Чтобы изменить значение вежливости уже выполняемой программы, мы используем инструмент renice
:
<pre>renice 0 <span class=“highlight”>PID_to_prioritize</span></pre>
Примечание. Хотя nice по необходимости использует имя команды, renice вызывает PID процесса.
Управление процессами — это тема, которая иногда бывает сложной для новых пользователей, потому что используемые для этой цели инструменты отличаются от аналогичных инструментов с графическим интерфейсом.
Однако все эти идеи знакомы, интуитивно понятны и станут привычными после небольшой практики. Поскольку процессы используются в компьютерных системах повсеместно, умение эффективно управлять ими — критически важный навык.
<div class=“author”>Джастин Эллингвуд</div>
]]>Разделение привилегий — одна из основных парадигм безопасности в операционных системах семейства Linux и Unix. Обычные пользователи работают с ограниченными привилегиями и могут влиять только на собственную рабочую среду, но не на операционную систему в целом.
Специальный пользователь с именем root, имеет привилегии суперпользователя. Это административная учетная запись без ограничений, действующих для обычных пользователей. Пользователи могут выполнять команды с привилегиями суперпользователя или root разными способами.
В этой статье мы обсудим, как правильно и безопасно получать привилегии root, и при этом уделим особое внимание редактированию файла /etc/sudoers
.
Мы выполним эти действия на сервере Ubuntu 20.04, но данная процедура будет выглядеть примерно так же и на других современных дистрибутивах Linux, таких как Debian и CentOS.
В этом учебном модуле предполагается, что вы уже выполнили начальную настройку сервера, как было описано здесь. Войдите на сервер как обычный пользователь без прав root и выполните описанные ниже действия.
Примечание. В этом учебном модуле подробно рассказывается об эскалации прав и о файле sudoers
. Если вы просто хотите предоставить пользователю права sudo
, воспользуйтесь нашими краткими учебными модулями Создание нового пользователя с правами Sudo для Ubuntu и CentOS.
Существует три способа получить привилегии root, различающиеся по сложности.
Самый простой и удобный способ получить привилегии root — просто войти на сервер как пользователь root.
Если вы входите на локальный компьютер (или используете консоль для внеполосного подключения к виртуальному серверу), введите root
как имя пользователя в строке входа и пароль пользователя root, когда система его запросит.
Если вы используете для входа SSH, укажите имя пользователя root перед IP-адресом или доменным именем в строке подключения SSH:
- ssh root@server_domain_or_ip
Если вы не настроили ключи SSH для пользователя root, введите пароль пользователя root по запросу системы.
su
для получения прав rootВходить в систему как пользователь root обычно не рекомендуется, потому что при этом можно легко начать использовать систему не для административных задач, что довольно опасно.
Следующий способ получить привилегии суперпользователя позволяет становиться пользователем root, когда вам это потребуется.
Для этого можно использовать команду su
(substitute user) для замены пользователя. Чтобы получить привилегии root, введите:
- su
Вам будет предложено ввести пароль для пользователя root, после чего будет создан сеанс оболочки root.
После завершения задач, для которых требуются привилегии root, вернитесь в обычную оболочку с помощью следующей команды:
- exit
sudo
для выполнения команд от имени пользователя rootПоследний способ получения привилегий root заключается в использовании команды sudo
.
Команда sudo
позволяет выполнять разовые команды с привилегиями root без необходимости создавать новую оболочку. Она выполняется следующим образом:
- sudo command_to_execute
В отличие от su
, для команды sudo
требуется пароль текущего пользователя, а не пароль пользователя root.
В связи с вопросами безопасности доступ sudo
не предоставляется пользователям по умолчанию, и его необходимо настроить перед использованием. Ознакомьтесь с нашими краткими руководствами Создание нового пользователя с привилегиями Sudo для Ubuntu и CentOS, чтобы научиться настраивать нового пользователя с правами sudo
.
В следующем разделе мы более подробно расскажем о том, как изменять конфигурацию sudo
.
Команда sudo
настраивается с помощью файла, расположенного в каталоге /etc/sudoers
.
Предупреждение. Никогда не редактируйте этот файл в обычном текстовом редакторе! Всегда используйте для этой цели только команду visudo
!
Неправильный синтаксис файла /etc/sudoers
может нарушить работу системы и сделать невозможным получение повышенного уровня привилегий, и поэтому очень важно использовать для его редактирования команду visudo
.
Команда visudo
открывает текстовый редактор обычным образом, но проверяет синтаксис файла при его сохранении. Это не даст ошибкам конфигурации возможности блокировать операции sudo
, что может быть единственным способом получить привилегии root.
Обычно visudo
открывает файл /etc/sudoers
в текстовом редакторе vi
. Однако в Ubuntu команда visudo
настроена на использование текстового редактора nano
.
Если вы захотите изменить его обратно на vi
, используйте следующую команду:
- sudo update-alternatives --config editor
OutputThere are 4 choices for the alternative editor (providing /usr/bin/editor).
Selection Path Priority Status
------------------------------------------------------------
* 0 /bin/nano 40 auto mode
1 /bin/ed -100 manual mode
2 /bin/nano 40 manual mode
3 /usr/bin/vim.basic 30 manual mode
4 /usr/bin/vim.tiny 10 manual mode
Press <enter> to keep the current choice[*], or type selection number:
Выберите число, соответствующее желаемому варианту выбора.
В CentOS для изменения этого значения можно добавить следующую строку в ~/.bashrc
:
- export EDITOR=`which name_of_editor`
Исходный файл для внесения изменений:
- . ~/.bashrc
После настройки visudo
выполните эту команду для доступа к файлу /etc/sudoers
:
- sudo visudo
Файл /etc/sudoers
откроется в выбранном вами текстовом редакторе.
Я скопировал и вставил файл с сервера Ubuntu 18.04 с удаленными комментариями. Файл CentOS /etc/sudoers
содержит намного больше строк, и некоторые из них не будут обсуждаться в этом руководстве.
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
root ALL=(ALL:ALL) ALL
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL
#includedir /etc/sudoers.d
Давайте посмотрим, что делают эти строки.
Первая строка Defaults env_reset сбрасывает среду терминала для удаления переменных пользователя. Эта мера безопасности используется для сброса потенциально опасных переменных среды в сеансе sudo
.
Вторая строка, Defaults mail_badpass
, предписывает системе отправлять уведомления о неудачных попытках ввода пароля sudo
для настроенного пользователя mailto
. По умолчанию это учетная запись root.
Третья строка, начинающаяся с “Defaults secure_path=…”, задает переменную PATH
(места в файловой системе, где операционная система будет искать приложения), которая будет использоваться для операций sudo
. Это предотвращает использование пользовательских путей, которые могут быть вредоносными.
Четвертая строка, которая определяет для пользователя root привилегии sudo
, отличается от предыдущих строк. Давайте посмотрим, что означают различные поля:
root ALL=(ALL:ALL) ALL
Первое поле показывает имя пользователя, которое правило будет применять к (root).
root ALL=(ALL:ALL) ALL
Первое “ALL” означает, что данное правило применяется ко всем хостам.
root ALL=(ALL:ALL) ALL
Данное “ALL” означает, что пользователь root может запускать команды от лица всех пользователей.
root ALL=(ALL:ALL) ALL
Данное “ALL” означает, что пользователь root может запускать команды от лица всех групп.
root ALL=(ALL:ALL) ALL
Последнее “ALL” означает, что данные правила применяются всем командам.
Это означает, что наш пользователь root сможет выполнять любые команды с помощью sudo
после ввода пароля.
Следующие две строки похожи на строки привилегий пользователя, но задают правила sudo
для групп.
Имена, начинающиеся с %
, означают названия групп.
Здесь мы видим, что группа admin может выполнять любые команды от имени любого пользователя на любом хосте. Группа sudo имеет те же привилегии, но может выполнять команды от лица любой группы.
Последняя строка выглядит как комментарий:
. . .
#includedir /etc/sudoers.d
Она действительно начинается с символа #
, который обычно обозначает комментарии. Однако данная строка означает, что файлы в каталоге /etc/sudoers.d
также рассматриваются как источники и применяются.
Файлы в этом каталоге следуют тем же правилам, что и сам файл /etc/sudoers
. Любой файл, который не заканчивается на ~
и не содержит символа .
, также считывается и добавляется в конфигурацию sudo
.
В основном это нужно, чтобы приложения могли изменять привилегии sudo
после установки. Размещение всех правил в одном файле в каталоге /etc/sudoers.d
позволяет видеть, какие привилегии присвоены определенным учетным записям, а также легко сменять учетные данные без прямого изменения файла /etc/sudoers
.
Как и в случае с файлом /etc/sudoers
, другие файлы в каталоге /etc/sudoers.d
также следует редактировать с помощью команды visudo
. Для редактирования этих файлов применяется следующий синтаксис:
- sudo visudo -f /etc/sudoers.d/file_to_edit
Чаще всего при управлении разрешениями sudo
используется операция предоставления новому пользователю общего доступа sudo
. Это полезно, если вы хотите предоставить учетной записи полный административный доступ к системе.
В системе с группой администрирования общего назначения, такой как система Ubuntu в этом учебном модуле, проще всего будет добавить данного пользователя в эту группу.
Например, в Ubuntu 20.04 группа sudo
имеет полные привилегии администратора. Добавляя пользователя в эту группу, мы предоставляем ему такие же привилегии:
- sudo usermod -aG sudo username
Также можно использовать команду gpasswd
:
- sudo gpasswd -a username sudo
Обе команды выполняют одно и то же.
В CentOS эта группа обычно называется wheel
, а не sudo
:
- sudo usermod -aG wheel username
Также можно использовать gpasswd
:
- sudo gpasswd -a username wheel
If adding the user to the group does not take effect immediately on CentOS, you may have to edit the /etc/sudoers file to uncomment the group name:
- sudo visudo
. . .
%wheel ALL=(ALL) ALL
. . .
Now that we are familiar with the general syntax of the file, let's create some new rules.
The sudoers file can be organized more efficiently by grouping items with various kinds of aliases.
For example, we can create three different groups of users with overlapping membership:
. . .
User_Alias GROUPONE = abby, brent, carl
User_Alias GROUPTWO = brent, doris, eric
User_Alias GROUPTHREE = doris, felicia, grant
. . .
Group names must start with a capital letter. We can then give members of GROUPTWO permission to update the apt database by creating a rule like this:
. . .
GROUPTWO ALL = /usr/bin/apt-get update
. . .
If we do not specify a user or group to run as, sudo defaults to the root user.
We can give members of GROUPTHREE permission to shut down and reboot the system by creating a command alias and using it in a rule for GROUPTHREE:
. . .
Cmnd_Alias POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot, /sbin/restart
GROUPTHREE ALL = POWER
. . .
This creates a command alias called POWER that contains the commands to power off and reboot the machine, and then gives the members of GROUPTHREE permission to execute those commands.
We can also create “run as” aliases, which can replace the portion of the rule that specifies which user to execute the command as:
. . .
Runas_Alias WEB = www-data, apache
GROUPONE ALL = (WEB) ALL
. . .
This will allow any member of GROUPONE to execute commands as the www-data user or the apache user.
Keep in mind that when rules conflict, later rules take precedence over earlier ones.
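To illustrate, consider this hypothetical pair of rules (the ! prefix negates a command). Since brent is a member of GROUPTWO, both lines match when he runs the command, and the later one wins, so brent would be refused:
. . .
GROUPTWO ALL = /usr/bin/apt-get update
brent ALL = !/usr/bin/apt-get update
. . .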
There are a number of ways to control more precisely how sudo reacts to a call.
The updatedb command, associated with the mlocate package, is relatively harmless on a single-user system. If we want to allow users to execute it with root privileges without having to type a password, we can create a rule like this:
. . .
GROUPONE ALL = NOPASSWD: /usr/bin/updatedb
. . .
NOPASSWD is a tag meaning that no password will be requested. It has a companion tag, PASSWD, which is the default behavior and requires a password. A tag applies to the rest of the line unless it is overridden by its twin tag later in the same line.
For example, we can use a line like this:
. . .
GROUPTWO ALL = NOPASSWD: /usr/bin/updatedb, PASSWD: /bin/kill
. . .
Another useful tag is NOEXEC, which can be used to prevent dangerous behavior in certain programs.
For example, some programs, such as less, can launch other commands typed into their interface:
!command_to_run
Any command the user enters this way runs with the same permissions as less itself, which can be quite dangerous.
To restrict this behavior, we can use a line like this:
. . .
username ALL = NOEXEC: /usr/bin/less
. . .
The following information may be helpful when working with sudo.
If you defined a user or group to run commands as in the configuration file, you can execute commands as them using the -u and -g flags, respectively:
- sudo -u run_as_user command
- sudo -g run_as_group command
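For instance, with the Runas alias from the earlier example in place, a member of GROUPONE could confirm which account a command runs under (www-data comes from that example; the output is what you would expect rather than a captured session):
- sudo -u www-data whoami
Outputwww-data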
For convenience, sudo saves your authentication details by default for a certain amount of time per terminal. This means you will not have to enter your password again until that timer runs out.
If, for security reasons, you want to reset this timer after running administrative commands, you can use the following command:
- sudo -k
If, on the other hand, you want to “prime” the sudo command so that you are not prompted later, or to renew your sudo lease, you can enter:
- sudo -v
You will be prompted for your password, which will then be cached for subsequent sudo uses until the sudo time frame expires.
If you want to find out what privileges are defined for your username, enter:
- sudo -l
This will print all of the rules in the /etc/sudoers file that apply to your user. It gives you a good idea of what you will or will not be allowed to do with sudo as any user.
There will undoubtedly be times when a command you run fails because you forgot to prefix it with sudo. To avoid retyping the command, you can take advantage of a bash feature that repeats the last command:
- sudo !!
The double exclamation point repeats the last command. We prefix it with sudo to quickly grant the appropriate privileges to that last command.
If you like, you can add the following line to the /etc/sudoers file using visudo:
- sudo visudo
. . .
Defaults insults
. . .
After this, sudo will insult you whenever an incorrect sudo password is entered. We can use sudo -k to clear the previously cached sudo password and try out the feature:
- sudo -k
- sudo ls
Output[sudo] password for demo: # enter an incorrect password here to see the results
Your mind just hasn't been the same since the electro-shock, has it?
[sudo] password for demo:
My mind is going. I can feel it.
You should now understand how to read and edit the sudoers file, and know which methods you can use to obtain root privileges.
Remember, there are reasons that superuser privileges are not given to regular users. It is essential that you understand exactly what each command you execute with root privileges does. Take the responsibility seriously. Determine the best way to use these tools for your situation, and lock down any functionality that is not needed.
A Linux server, like any other computer you may be familiar with, runs applications. To the computer, these are considered “processes”.
While Linux handles the low-level, behind-the-scenes management of a process's life cycle, you need a way of interacting with the operating system to manage processes at a higher level.
In this guide, we will discuss some simple aspects of process management. Linux provides an abundant collection of tools for this purpose.
We will explore these ideas on an Ubuntu 12.04 VPS, but any modern Linux distribution will operate in a similar way.
The easiest way to find out which processes are running on your server is to run the top command:
- top
Outputtop - 15:14:40 up 46 min,  1 user,  load average: 0.00, 0.01, 0.05
Tasks:  56 total,   1 running,  55 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1019600k total,   316576k used,   703024k free,     7652k buffers
Swap:        0k total,        0k used,        0k free,   258976k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    1 root      20   0 24188 2120 1300 S  0.0  0.2   0:00.56 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
    3 root      20   0     0    0    0 S  0.0  0.0   0:00.07 ksoftirqd/0
    6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
    7 root      RT   0     0    0    0 S  0.0  0.0   0:00.03 watchdog/0
    8 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 cpuset
    9 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 khelper
   10 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kdevtmpfs
The top chunk of information gives system statistics, such as the system load and the total number of tasks.
You can easily see that there is 1 running process and 55 sleeping processes (that is, idle, not using CPU resources).
The bottom portion shows the running processes and their usage statistics.
An improved version of top, called htop, is available in the repositories. Install it with this command:
- sudo apt-get install htop
If we run the htop command, we will see that the information is displayed in a more easily digestible format:
- htop
Output  Mem[|||||||||||           49/995MB]     Load average: 0.00 0.03 0.05
  CPU[                          0.0%]     Tasks: 21, 3 thr; 1 running
  Swp[                         0/0MB]     Uptime: 00:58:11

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 1259 root       20   0 25660  1880  1368 R  0.0  0.2  0:00.06 htop
    1 root       20   0 24188  2120  1300 S  0.0  0.2  0:00.56 /sbin/init
  311 root       20   0 17224   636   440 S  0.0  0.1  0:00.07 upstart-udev-brid
  314 root       20   0 21592  1280   760 S  0.0  0.1  0:00.06 /sbin/udevd --dae
  389 messagebu  20   0 23808   688   444 S  0.0  0.1  0:00.01 dbus-daemon --sys
  407 syslog     20   0  243M  1404  1080 S  0.0  0.1  0:00.02 rsyslogd -c5
  408 syslog     20   0  243M  1404  1080 S  0.0  0.1  0:00.00 rsyslogd -c5
  409 syslog     20   0  243M  1404  1080 S  0.0  0.1  0:00.00 rsyslogd -c5
  406 syslog     20   0  243M  1404  1080 S  0.0  0.1  0:00.04 rsyslogd -c5
  553 root       20   0 15180   400   204 S  0.0  0.0  0:00.01 upstart-socket-br
Learn more about how to use top and htop here.
Both top and htop provide a nice interface for viewing running processes, similar to a graphical task manager.
However, these tools are not always flexible enough to adequately cover every scenario. A powerful command called ps is often the answer to these problems.
When called without arguments, the output can be a bit lackluster:
- ps
Output  PID TTY          TIME CMD
 1017 pts/0    00:00:00 bash
 1262 pts/0    00:00:00 ps
This output shows all of the processes associated with the current user and terminal session. This makes sense, because we are currently only running bash and ps in this terminal.
To get a more complete picture of the processes on this system, we can run the following:
- ps aux
OutputUSER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.2  24188  2120 ?        Ss   14:28   0:00 /sbin/init
root         2  0.0  0.0      0     0 ?        S    14:28   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        S    14:28   0:00 [ksoftirqd/0]
root         6  0.0  0.0      0     0 ?        S    14:28   0:00 [migration/0]
root         7  0.0  0.0      0     0 ?        S    14:28   0:00 [watchdog/0]
root         8  0.0  0.0      0     0 ?        S<   14:28   0:00 [cpuset]
root         9  0.0  0.0      0     0 ?        S<   14:28   0:00 [khelper]
. . .
These options tell ps to show processes owned by all users (regardless of their terminal association) in a human-readable format.
To see a tree view that illustrates hierarchical relationships, we can run the command with these options:
- ps axjf
Output PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND
    0     2     0     0 ?           -1 S        0   0:00 [kthreadd]
    2     3     0     0 ?           -1 S        0   0:00  \_ [ksoftirqd/0]
    2     6     0     0 ?           -1 S        0   0:00  \_ [migration/0]
    2     7     0     0 ?           -1 S        0   0:00  \_ [watchdog/0]
    2     8     0     0 ?           -1 S<       0   0:00  \_ [cpuset]
    2     9     0     0 ?           -1 S<       0   0:00  \_ [khelper]
    2    10     0     0 ?           -1 S        0   0:00  \_ [kdevtmpfs]
    2    11     0     0 ?           -1 S<       0   0:00  \_ [netns]
. . .
As you can see, the kthreadd process is shown as a parent of the ksoftirqd/0 process and others.
In Linux and Unix-like systems, each process is assigned a process ID, or PID. This is how the operating system identifies and keeps track of processes.
A quick way to get the PID of a process is with the pgrep command:
- pgrep bash
Output1017
This will simply query the process ID and return it.
The first process spawned at boot, called init, is given the PID of “1”.
- pgrep init
Output1
This process is then responsible for spawning every other process on the system. Later processes are given larger PID numbers.
A process's parent is the process that was responsible for spawning it. Parent processes have a PPID, which you can see in the column headers of many process management applications, including top, htop, and ps.
Any communication between the user and the operating system about processes involves translating between process names and PIDs at some point during the operation. This is why these utilities tell you the PID.
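As a quick illustration, you can ask ps to print the PID and PPID of your current shell; the $$ variable expands to the shell's own PID (the numbers shown here are illustrative):
- ps -o pid,ppid,comm -p $$
Output  PID  PPID COMMAND
 1017  1016 bash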
A child process is created in two steps: fork(), which creates a new address space and makes the parent's resources available to the child via copy-on-write; and exec(), which loads an executable into the address space and runs it.
If a child process dies before its parent, the child becomes a zombie until the parent has collected information about it or has indicated to the kernel that it does not need that information; the child's resources are then freed. If the parent process dies before the child, however, the child is adopted by init, although it can also be reassigned to another process.
All processes in Linux respond to signals. Signals are an OS-level way of telling programs to terminate or modify their behavior.
The most common way of passing signals to a program is with the kill command.
As you might expect, the default behavior of this utility is to attempt to terminate a process:
- kill PID_of_target_process
This sends the TERM signal to the process. The TERM signal asks the process to terminate, which allows the program to perform clean-up operations and exit smoothly.
If the program is misbehaving and does not exit when given the TERM signal, we can escalate by passing the KILL signal:
- kill -KILL PID_of_target_process
This is a special signal that is not sent to the program itself.
Instead, it is given to the operating system kernel, which shuts the process down. This is used to bypass programs that ignore the signals sent to them.
Every signal has an associated number that can be passed instead of its name. For instance, you can pass “-15” instead of “-TERM”, and “-9” instead of “-KILL”.
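For example, the following two commands (the PID is hypothetical) deliver exactly the same signal:
- kill -TERM 1234
- kill -15 1234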
Signals are not only used to shut down programs; they can also be used to perform other actions.
For instance, many daemons restart when they are given the HUP, or hangup, signal. Apache is one program that works this way:
- sudo kill -HUP pid_of_apache
The command above will cause Apache to reload its configuration file and resume serving content.
You can list all of the signals that kill can send by typing:
- kill -l
Output 1) SIGHUP     2) SIGINT     3) SIGQUIT    4) SIGILL     5) SIGTRAP
 6) SIGABRT    7) SIGBUS     8) SIGFPE     9) SIGKILL   10) SIGUSR1
11) SIGSEGV   12) SIGUSR2   13) SIGPIPE   14) SIGALRM   15) SIGTERM
. . .
Although the conventional way of sending signals is by PID, there are also methods that use regular process names.
The pkill command works in much the same way as kill, except that it operates on a process name instead:
- pkill -9 ping
The command above is equivalent to:
- kill -9 `pgrep ping`
If you want to send a signal to every instance of a certain process, use the killall command:
- killall firefox
The command above sends the TERM signal to every instance of firefox running on the computer.
In a server environment, you will often want to adjust which processes receive priority.
Some processes may be considered mission-critical for your situation, while others can run whenever there are leftover resources on the system.
Linux controls priority through a value called niceness.
High-priority tasks are considered less nice, because they do not share resources as well. Low-priority processes, on the other hand, are nice, because they insist on taking only a minimal amount of resources.
When we ran top at the beginning of the article, there was a column marked “NI”. This is the process's nice value:
- top
OutputTasks:  56 total,   1 running,  55 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1019600k total,   324496k used,   695104k free,     8512k buffers
Swap:        0k total,        0k used,        0k free,   264812k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1635 root      20   0 17300 1200  920 R  0.3  0.1   0:00.01 top
    1 root      20   0 24188 2120 1300 S  0.0  0.2   0:00.56 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
    3 root      20   0     0    0    0 S  0.0  0.0   0:00.11 ksoftirqd/0
Nice values can range from “-19/-20” (highest priority) to “19/20” (lowest priority), depending on the system.
To run a program with a certain nice value, we can use the nice command:
- nice -n 15 command_to_execute
This only works when starting a new program.
To alter the nice value of a program that is already running, we use a tool called renice:
- renice 0 PID_to_prioritize
Note: While nice necessarily operates with a command name, renice operates on a process PID.
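Putting these together, a minimal hypothetical session might start a long-running job at low priority and later restore it to the default; the PID and the renice confirmation line below are illustrative:
- nice -n 15 sleep 600 &
- pgrep sleep
Output2187
- sudo renice 0 2187
Output2187 (process ID) old priority 15, new priority 0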
Process management is a topic that can sometimes be difficult for new users to grasp, because the tools used are different from their graphical counterparts.
However, the ideas are familiar and intuitive, and with a little practice they will become natural. Since processes are involved in everything you do with a computer system, learning to control them effectively is an essential skill.
By Justin Ellingwood
Privilege separation is one of the fundamental security paradigms implemented in Linux and Unix-like operating systems. Regular users operate with limited privileges in order to reduce the scope of their influence to their own environment, and not the operating system as a whole.
A special user, called root, has superuser privileges. This is an administrative account without the restrictions present on normal users. Users can execute commands with superuser or root privileges in a number of different ways.
In this article, we will discuss how to obtain root privileges correctly and securely, with a special focus on editing the /etc/sudoers file.
We will be completing these steps on an Ubuntu 20.04 server, but most modern Linux distributions, such as Debian and CentOS, should operate in a similar manner.
This guide assumes that you have already completed the initial server setup discussed here. Log in to your server as a regular, non-root user and continue below.
Note: This tutorial goes into depth about privilege escalation and the sudoers file. If you just want to add sudo privileges to a user, check out our quickstart tutorials, How To Create a New Sudo-enabled User for Ubuntu and CentOS.
There are three basic ways to obtain root privileges, which vary in their level of sophistication.
The simplest and most straightforward method is to log in to your server directly as the root user.
If you are logging in to a local machine (or using an out-of-band console feature on a virtual server), enter root as your username at the login prompt and enter the root password when asked.
If you are logging in through SSH, specify the root user before the IP address or domain name in your SSH connection string:
- ssh root@server_domain_or_ip
If you have not set up SSH keys for the root user, enter the root password when asked.
Using su to become root
Logging in directly as root is not recommended, because it is easy to begin using the system for non-administrative tasks, which is dangerous.
The next way of gaining superuser privileges allows you to become the root user at any time, as you need it.
We can do this by invoking the su command, which stands for “substitute user”. To gain root privileges, type:
- su
You will be prompted for the root user's password, after which you will be dropped into a root shell session.
When you have finished the tasks that require root privileges, return to your normal shell by typing:
- exit
Using sudo to execute commands as root
The final way of obtaining root privileges that we will discuss is the sudo command.
The sudo command allows you to execute one-off commands with root privileges, without the need to spawn a new shell. It is executed like this:
- sudo command_to_execute
Unlike su, the sudo command will request the password of the current user, not the root password.
Because of its security implications, sudo access is not granted to users by default and must be set up before it functions correctly. Check out our quickstart tutorials, How To Create a New Sudo-enabled User for Ubuntu and CentOS, to learn how to set up a sudo-enabled user.
In the following section, we will discuss how to modify the sudo configuration in greater detail.
The sudo command is configured through a file located at /etc/sudoers.
Warning: Never edit this file with a normal text editor! Always use the visudo command instead!
Because improper syntax in the /etc/sudoers file can leave you with a broken system where it is impossible to obtain elevated privileges, it is important to use the visudo command to edit the file.
The visudo command opens a text editor as usual, but it validates the syntax of the file upon saving. This prevents configuration errors from blocking sudo operations, which may be your only way of obtaining root privileges.
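If visudo does detect a problem when you save, it will refuse to install the file and ask what to do next, with output along these lines (the line number here is illustrative):
Output>>> /etc/sudoers: syntax error near line 28 <<<
What now?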
Traditionally, visudo opens the /etc/sudoers file with the vi text editor. Ubuntu, however, has configured visudo to use the nano text editor instead.
If you would like to change it back to vi, issue the following command:
- sudo update-alternatives --config editor
OutputThere are 4 choices for the alternative editor (providing /usr/bin/editor).
Selection Path Priority Status
------------------------------------------------------------
* 0 /bin/nano 40 auto mode
1 /bin/ed -100 manual mode
2 /bin/nano 40 manual mode
3 /usr/bin/vim.basic 30 manual mode
4 /usr/bin/vim.tiny 10 manual mode
Press <enter> to keep the current choice[*], or type selection number:
Select the number that corresponds to the choice you would like.
On CentOS, you can change this value by adding the following line to your ~/.bashrc:
- export EDITOR=`which name_of_editor`
Source the file to implement the changes:
- . ~/.bashrc
After you have configured visudo, execute the command to access the /etc/sudoers file:
- sudo visudo
The /etc/sudoers file will open in your selected text editor.
The file below has been copied from Ubuntu 18.04, with comments removed. The CentOS /etc/sudoers file has many more lines, some of which we will not discuss in this guide.
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
root ALL=(ALL:ALL) ALL
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL
#includedir /etc/sudoers.d
Let's take a look at what these lines do.
The first line, “Defaults env_reset”, resets the terminal environment to remove any user variables. This is a safety measure used to clear potentially harmful environment variables from the sudo session.
The second line, Defaults mail_badpass, tells the system to mail notices of bad sudo password attempts to the configured mailto user. By default, this is the root account.
The third line, which begins with “Defaults secure_path=…”, specifies the PATH (the places in the filesystem where the operating system will look for applications) that will be used for sudo operations. This prevents the use of user paths, which may be harmful.
The fourth line, which dictates the root user's sudo privileges, is different from the preceding lines. Let's take a look at what the different fields mean:
root ALL=(ALL:ALL) ALL
The first field indicates the username that the rule will apply to (root).
root ALL=(ALL:ALL) ALL
The first “ALL” indicates that this rule applies to all hosts.
root ALL=(ALL:ALL) ALL
This “ALL” indicates that the root user can run commands as all users.
root ALL=(ALL:ALL) ALL
This “ALL” indicates that the root user can run commands as all groups.
root ALL=(ALL:ALL) ALL
The last “ALL” indicates that these rules apply to all commands.
This means that the root user can run any command using sudo, as long as they provide their password.
The next two lines are similar to the user privilege lines, but they specify sudo rules for groups.
Names beginning with a % indicate group names.
Here, we see that the admin group can execute any command as any user on any host. Similarly, the sudo group has the same privileges, but can execute as any group as well.
The last line might look like a comment at first glance:
. . .
#includedir /etc/sudoers.d
It does begin with a #, which usually indicates a comment. However, this line actually indicates that files within the /etc/sudoers.d directory will be sourced and applied as well.
Files within that directory follow the same rules as the /etc/sudoers file itself. Any file that does not end in ~ and that does not contain a . character will be read and appended to the sudo configuration.
This is mainly meant for applications to alter sudo privileges upon installation. Placing all of the associated rules in a single file within the /etc/sudoers.d directory can make it easy to see which privileges are associated with which accounts, and to revoke credentials easily without having to manipulate the /etc/sudoers file directly.
As with the /etc/sudoers file itself, you should always edit files within the /etc/sudoers.d directory with visudo. The syntax for editing these files would be:
- sudo visudo -f /etc/sudoers.d/file_to_edit
The most common operation that users want to accomplish when managing sudo permissions is to grant a new user general sudo access. This is useful if you want to give an account full administrative access to the system.
The easiest way to do this on a system set up with a general-purpose administration group, like the Ubuntu system in this guide, is to add the user in question to that group.
For example, on Ubuntu 20.04, the sudo group has full administrator privileges. We can grant a user the same privileges by adding them to the group like this:
- sudo usermod -aG sudo username
The gpasswd command can also be used:
- sudo gpasswd -a username sudo
Both will accomplish the same result.
On CentOS, the relevant group is usually wheel rather than sudo:
- sudo usermod -aG wheel username
Or, using gpasswd:
- sudo gpasswd -a username wheel
On CentOS, if adding the user to the group does not take effect immediately, you may have to edit the /etc/sudoers file to uncomment the group name:
- sudo visudo
. . .
%wheel ALL=(ALL) ALL
. . .
Now that we are familiar with the general syntax of the file, let's create some new rules.
The sudoers file can be organized more easily by grouping items with various kinds of “aliases”.
For instance, we can create three different groups of users with overlapping membership:
. . .
User_Alias GROUPONE = abby, brent, carl
User_Alias GROUPTWO = brent, doris, eric
User_Alias GROUPTHREE = doris, felicia, grant
. . .
Group names must start with a capital letter. We can then allow members of GROUPTWO to update the apt database by creating a rule like this:
. . .
GROUPTWO ALL = /usr/bin/apt-get update
. . .
If we do not specify a user or group to run as, as above, sudo defaults to the root user.
We can allow members of GROUPTHREE to shut down and reboot the machine by creating a “command alias” and using it in a rule for GROUPTHREE:
. . .
Cmnd_Alias POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot, /sbin/restart
GROUPTHREE ALL = POWER
. . .
We created a command alias called POWER that contains the commands to power off and reboot the machine, and then allowed the members of GROUPTHREE to execute those commands.
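As a sanity check after adding a rule like this, you can ask sudo to list the privileges it believes another user has; the -U flag combined with -l (run from a privileged account) does this, and doris here comes from the example aliases:
- sudo -l -U doris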
We can also create “Run as” aliases, which can replace the portion of the rule that specifies which user to execute the command as:
. . .
Runas_Alias WEB = www-data, apache
GROUPONE ALL = (WEB) ALL
. . .
This will allow anyone who is a member of GROUPONE to execute commands as the www-data user or the apache user.
Just keep in mind that later rules override earlier rules when there is a conflict between the two.
There are a number of ways to gain more control over how sudo reacts to a call.
The updatedb command, associated with the mlocate package, is relatively harmless on a single-user system. If we want to allow users to execute it with root privileges without having to type a password, we can make a rule like this:
. . .
GROUPONE ALL = NOPASSWD: /usr/bin/updatedb
. . .
NOPASSWD is a “tag” that means no password will be requested. It has a companion tag called PASSWD, which is the default behavior. A tag applies to the rest of the rule unless it is overridden by its “twin” tag later in the same line.
For instance, we can have a line like this:
. . .
GROUPTWO ALL = NOPASSWD: /usr/bin/updatedb, PASSWD: /bin/kill
. . .
Another helpful tag is NOEXEC, which can be used to prevent dangerous behavior in certain programs.
For example, some programs, like less, can spawn other commands by typing them from within their own interface:
!command_to_run
This basically executes any command the user gives it with the same permissions that less is running under, which can be quite dangerous.
To restrict this, we could use a line like this:
. . .
username ALL = NOEXEC: /usr/bin/less
. . .
There are a few more pieces of information that may be useful when working with sudo.
If you specified a user or group to “run as” in the configuration file, you can execute commands as those users or groups by using the -u and -g flags, respectively:
- sudo -u run_as_user command
- sudo -g run_as_group command
For convenience, by default, sudo saves your authentication details for a certain amount of time in one terminal. This means you will not have to type your password again until that timer runs out.
For security purposes, if you want to clear this timer when you are done running administrative commands, run:
- sudo -k
If, on the other hand, you want to “prime” the sudo command so that you will not be prompted later, or to renew your sudo lease, type:
- sudo -v
You will be prompted for your password, which will then be cached for subsequent sudo uses until the sudo time frame expires.
If you simply want to know what kind of privileges are defined for your username, type:
- sudo -l
This will list all of the rules in the /etc/sudoers file that apply to your user. It gives you a good idea of what you will or will not be allowed to do with sudo as any user.
There will be many times when you run a command and it fails because you forgot to preface it with sudo. To avoid having to retype the command, take advantage of a bash feature that repeats the last command:
- sudo !!
The double exclamation point repeats the last command; prepending sudo quickly turns the unprivileged command into a privileged one.
For some fun, you can add the following line to your /etc/sudoers file with visudo:
- sudo visudo
. . .
Defaults insults
. . .
This will cause sudo to return a silly insult when a user types an incorrect password for sudo. We can use sudo -k to clear the previously cached sudo password and test the feature:
- sudo -k
- sudo ls
Output[sudo] password for demo: # enter an incorrect password here to see the results
Your mind just hasn't been the same since the electro-shock, has it?
[sudo] password for demo:
My mind is going. I can feel it.
You should now have a basic understanding of how to read and modify the sudoers file, and a grasp of the various methods you can use to obtain root privileges.
Remember, superuser privileges are not given to regular users for a reason. It is essential that you understand exactly what each command you execute with root privileges does. Do not take the responsibility lightly. Learn the best way to use these tools for your use case, and lock down any functionality that is not needed.
grep is one of the most useful commands in a Linux terminal environment. The name grep stands for “global regular expression print”. This means that you can use grep to check whether the input it receives matches a specified pattern. Though seemingly trivial, this is an extremely powerful program. Its ability to filter input based on complex rules makes it a common link in many command chains.
In this tutorial, you will explore the grep command's options, and then you will dive into using regular expressions to perform more advanced searches.
In this tutorial, you will use grep to search the GNU General Public License version 3 for various words and phrases.
If you are on an Ubuntu system, you can find the file in the /usr/share/common-licenses folder. Copy it to your home directory:
- cp /usr/share/common-licenses/GPL-3 .
If you are on another system, use the curl command to download a copy:
- curl -o GPL-3 https://www.gnu.org/licenses/gpl-3.0.txt
You will also use the BSD license file in this tutorial. On Linux, you can copy it to your home directory with the following command:
- cp /usr/share/common-licenses/BSD .
If you are on another system, create the file with the following command:
- cat << 'EOF' > BSD
- Copyright (c) The Regents of the University of California.
- All rights reserved.
-
- Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions
- are met:
- 1. Redistributions of source code must retain the above copyright
- notice, this list of conditions and the following disclaimer.
- 2. Redistributions in binary form must reproduce the above copyright
- notice, this list of conditions and the following disclaimer in the
- documentation and/or other materials provided with the distribution.
- 3. Neither the name of the University nor the names of its contributors
- may be used to endorse or promote products derived from this software
- without specific prior written permission.
-
- THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
- ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
- FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- SUCH DAMAGE.
- EOF
Now that you have the files, you can start working with grep.
In its most basic form, you use grep to match literal patterns within a text file. This means that if you pass grep a word to search for, it will print every line in the file containing that word.
Execute the following command to use grep to find every line that contains the word GNU:
- grep "GNU" GPL-3
The first argument, GNU, is the pattern you are searching for, while the second argument, GPL-3, is the input file you wish to search.
The resulting output will be every line containing the pattern text:
Output GNU GENERAL PUBLIC LICENSE
The GNU General Public License is a free, copyleft license for
the GNU General Public License is intended to guarantee your freedom to
GNU General Public License for most of our software; it applies also to
Developers that use the GNU GPL protect your rights with two steps:
"This License" refers to version 3 of the GNU General Public License.
13. Use with the GNU Affero General Public License.
under version 3 of the GNU Affero General Public License into a single
...
...
On some systems, the pattern you searched for will be highlighted in the output.
By default, grep searches for the exact pattern you specify in the input file and returns the lines it finds. You can make this behavior more useful by adding some optional flags to grep.
If you would like grep to ignore the case of your search parameter, matching both uppercase and lowercase variations, you can specify the -i or --ignore-case option.
Search for every instance of the word license (in upper, lower, or mixed case) in the same file as before with the following command:
- grep -i "license" GPL-3
The results include LICENSE, license, and License:
Output GNU GENERAL PUBLIC LICENSE
of this license document, but changing it is not allowed.
The GNU General Public License is a free, copyleft license for
The licenses for most software and other practical works are designed
the GNU General Public License is intended to guarantee your freedom to
GNU General Public License for most of our software; it applies also to
price. Our General Public Licenses are designed to make sure that you
(1) assert copyright on the software, and (2) offer you this License
"This License" refers to version 3 of the GNU General Public License.
"The Program" refers to any copyrightable work licensed under this
...
...
If there were an instance such as LiCeNsE, it would be returned as well.
If you want to find all lines that do not contain a specified pattern, you can use the -v or --invert-match option.
Search for every line that does not contain the word the in the BSD license by running the following command:
- grep -v "the" BSD
You will see the following output:
OutputAll rights reserved.
Redistribution and use in source and binary forms, with or without
are met:
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
...
...
Since you did not specify the “ignore case” option, the last two items were returned as not containing the word the.
It is often useful to know the line number that matches occur on. You can do this with the -n or --line-number option. Re-run the previous example with this flag added:
- grep -vn "the" BSD
You will see the following text:
Output2:All rights reserved.
3:
4:Redistribution and use in source and binary forms, with or without
6:are met:
13: may be used to endorse or promote products derived from this software
14: without specific prior written permission.
15:
16:THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
17:ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
...
...
Now you can reference the line numbers if you want to make changes to every line that does not contain the. This is especially handy when working with source code.
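These flags can also be combined. For example, to search for license without regard to case and print the line number of every match, you could run:
- grep -in "license" GPL-3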
In the introduction, you learned that grep stands for “global regular expression print”. A “regular expression” is a text string that describes a particular search pattern.
Different applications and programming languages implement regular expressions in slightly different ways. In this tutorial, you will only explore a small subset of the way grep describes its patterns.
In the previous examples in this tutorial, when you searched for the words GNU and the, you were actually searching for basic regular expressions that matched the exact strings of characters GNU and the. Patterns that specify the exact characters to match are called “literals”, because they match the pattern literally, character for character.
It is helpful to think of these as matching a string of characters rather than matching a word. This distinction will become more important as the patterns you learn grow more complex.
All alphabetical and numerical characters (as well as certain other characters) are matched literally unless modified by other expression mechanisms.
Anchors are special characters that specify where in the line a match must occur to be valid.
For instance, using anchors you can specify that you only want to know about the lines that contain GNU at the very beginning. To do this, place the ^ anchor before the literal string.
Run the following command to search the GPL-3 file and find lines where GNU occurs at the very beginning of a line:
- grep "^GNU" GPL-3
You will see the following two lines:
OutputGNU General Public License for most of our software; it applies also to
GNU General Public License, you may choose any version ever published
Similarly, the $ anchor at the end of a pattern indicates that a match is only valid if it occurs at the very end of a line.
This command will match every line ending with the word and in the GPL-3 file:
- grep "and$" GPL-3
You will see this output:
Outputthat there is no warranty for this free software. For both users' and
The precise terms and conditions for copying, distribution and
License. Each licensee is addressed as "you". "Licensees" and
receive it, in any medium, provided that you conspicuously and
alternative is allowed only occasionally and noncommercially, and
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
provisionally, unless and until the copyright holder explicitly and
receives a license from the original licensors, to run, modify and
make, use, sell, offer for sale, import and otherwise run, modify and
In regular expressions, the period character (.) means that any single character can exist at the specified location.
For example, to match anything in the GPL-3 file that has two characters followed by the string cept, you would use the following pattern:
- grep "..cept" GPL-3
You will see this output:
Outputuse, which is precisely where it is most unacceptable. Therefore, we
infringement under applicable copyright law, except executing it on a
tells the user that there is no warranty for the work (except to the
License by making exceptions from one or more of its conditions.
form of a separately written license, or stated as exceptions;
You may not propagate or modify a covered work except as expressly
9. Acceptance Not Required for Having Copies.
...
...
As you can see, the output contains instances of both accept and except, as well as variations of the two words. The pattern would also have matched z2cept, had it been present.
By placing a group of characters within brackets ([ and ]), you can specify that the character at that position can be any one of the characters in the bracket group.
For example, to find the lines that contain too or two, you can express those variations succinctly using the following pattern:
- grep "t[wo]o" GPL-3
The output shows that both variations exist in the file:
Outputyour programs, too.
freedoms that you received. You must make sure that they, too, receive
Developers that use the GNU GPL protect your rights with two steps:
a computer network, with no transfer of a copy, is not conveying.
System Libraries, or general-purpose tools or generally available free
Corresponding Source from a network server at no charge.
...
...
Bracket notation also gives you some interesting options. You can have the pattern match anything except the characters within the brackets by beginning the bracketed list with a ^ character.
This example matches the pattern .ode, but will not match the pattern code:
- grep "[^c]ode" GPL-3
Here is the output you will see:
Output 1. Source Code.
model, to give anyone who possesses the object code either (1) a
the only significant mode of use of the product.
notice like this when it starts in an interactive mode:
Notice that the second line returned does, in fact, contain the word code. This is not a failure of the regular expression or of grep. Rather, this line was returned because it contains, earlier in the line, the pattern mode, found inside the word model. The line was returned because it contained an instance that matched the pattern.
Another helpful feature of brackets is that you can specify a range of characters instead of individually typing every available character.
This means that if you want to find every line that begins with a capital letter, you can use the following pattern:
- grep "^[A-Z]" GPL-3
Here is the output you will see:
OutputGNU General Public License for most of our software; it applies also to
States should not allow patents to restrict development and use of
License. Each licensee is addressed as "you". "Licensees" and
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
System Libraries, or general-purpose tools or generally available free
Source.
User Product is transferred to the recipient in perpetuity or for a
...
...
Due to some legacy sorting issues, it is often more accurate to use POSIX character classes instead of character ranges like the one you just used.
There are many character classes that are outside the scope of this guide, but an example that accomplishes the same procedure as above uses the [:upper:] character class within a bracket selector:
- grep "^[[:upper:]]" GPL-3
The output will be the same as before.
Finally, one of the most commonly used meta-characters is the asterisk, or *, which means “repeat the previous character or expression zero or more times”.
To find every line in the GPL-3 file that contains an opening and closing parenthesis, with only letters and single spaces in between, use the following expression:
- grep "([A-Za-z ]*)" GPL-3
You will see this output:
Output Copyright (C) 2007 Free Software Foundation, Inc.
distribution (with or without modification), making available to the
than the work as a whole, that (a) is included in the normal form of
Component, and (b) serves only to enable use of the work with that
(if any) on which the executable work runs, or a compiler used to
(including a physical distribution medium), accompanied by the
(including a physical distribution medium), accompanied by a
place (gratis or for a charge), and offer equivalent access to the
...
...
So far, you have used periods, asterisks, and other characters in your expressions, but sometimes you will need to search for those characters themselves.
There will be times when you need to match a literal period or a literal opening parenthesis, especially when working with source code or configuration files. Because these characters have special meaning in regular expressions, you need to “escape” them to tell grep that you do not want their special meaning in this case.
You escape characters by placing a backslash (\) in front of the character that would normally have a special meaning.
For example, to find every line that begins with a capital letter and ends with a period, use the following expression, which escapes the ending period so that it represents a literal period rather than the usual “any character” meaning:
- grep "^[A-Z].*\.$" GPL-3
Here is the output you will see:
OutputSource.
License by making exceptions from one or more of its conditions.
License would be to refrain entirely from conveying the Program.
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
SUCH DAMAGES.
Also add information on how to contact you by electronic and paper mail.
Now let's look at some other regular expression options.
The grep command supports a more extensive regular expression language when you use the -E flag or call the egrep command instead of grep.
These options enable “extended regular expressions”. Extended regular expressions include all of the basic meta-characters, along with additional meta-characters for expressing more complex matches.
One of the most useful abilities that extended regular expressions provide is grouping expressions together so you can manipulate or reference them as a single unit.
To group expressions together, wrap them in parentheses. If you would like to use parentheses without extended regular expressions, you can escape them with a backslash to enable this functionality.
The following three expressions are functionally equivalent:
- grep "\(grouping\)" file.txt
- grep -E "(grouping)" file.txt
- egrep "(grouping)" file.txt
In the same way that bracket expressions specify different possible choices for single-character matches, alternation allows you to specify alternative matches for strings or sets of expressions.
To indicate alternation, use the pipe character |. Alternation is often used within parenthetical grouping to specify that one of two or more possibilities should be considered a match.
The following will find either GPL or General Public License in the text:
- grep -E "(GPL|General Public License)" GPL-3
The output will look like this:
Output The GNU General Public License is a free, copyleft license for
the GNU General Public License is intended to guarantee your freedom to
GNU General Public License for most of our software; it applies also to
price. Our General Public Licenses are designed to make sure that you
Developers that use the GNU GPL protect your rights with two steps:
For the developers' and authors' protection, the GPL clearly explains
authors' sake, the GPL requires that modified versions be marked as
have designed this version of the GPL to prohibit the practice for those
...
...
Alternation can select between more than two choices; add the extra choices within the selection group, separated by additional pipe characters (|).
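For instance, a three-way alternation (the patterns here are chosen arbitrarily) would look like this:
- grep -E "(GPL|GNU|License)" GPL-3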
Like the * meta-character, which matches the previous character or character set zero or more times, other meta-characters available in extended regular expressions specify the number of occurrences.
To match a character zero or one times, use the ? character. In essence, this makes the preceding character or character set optional.
The following matches both copyright and right by putting copy in an optional group:
- grep -E "(copy)?right" GPL-3
You will see this output:
Output Copyright (C) 2007 Free Software Foundation, Inc.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
"Copyright" also means copyright-like laws that apply to other kinds of
...
The + character matches an expression one or more times. It is almost like the * meta-character, except that with +, the expression must match at least once.
The following expression matches the string free plus one or more characters that are not whitespace:
- grep -E "free[^[:space:]]+" GPL-3
You will see this output:
Output The GNU General Public License is a free, copyleft license for
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
When we speak of free software, we are referring to freedom, not
have the freedom to distribute copies of free software (and charge for
you modify it: responsibilities to respect the freedom of others.
freedoms that you received. You must make sure that they, too, receive
protecting users' freedom to change the software. The systematic
of the GPL, as needed to protect the freedom of users.
patents cannot be used to render the program non-free.
To specify the number of times a match is repeated, use the brace characters ({ and }). These let you specify an exact number, a range, or an upper or lower bound for the number of times an expression can match.
Use the following expression to find all of the lines in the GPL-3 file that contain triple vowels:
- grep -E "[AEIOUaeiou]{3}" GPL-3
Each line returned contains a word with three vowels:
Outputchanged, so that their problems will not be attributed erroneously to
authors of previous versions.
receive it, in any medium, provided that you conspicuously and
give under the previous paragraph, plus a right to possession of the
covered work so as to satisfy simultaneously your obligations under this
To match every word that has between 16 and 20 characters, use the following expression:
- grep -E "[[:alpha:]]{16,20}" GPL-3
You will see this output:
Output certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
c) Prohibiting misrepresentation of the origin of that material, or
Only lines containing words of that length are displayed.
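An exact count works the same way. For example, the following search (chosen arbitrarily) finds runs of exactly four digits, such as the year 2007 in the license's copyright line:
- grep -E "[[:digit:]]{4}" GPL-3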
grep
est une fonctionnalité pratique pour trouver des modèles dans des fichiers ou la hiérarchie du système de fichiers. Il est donc conseiller de passer un peu de temps pour se familiariser avec ses options et sa syntaxe.
Les expressions régulières sont encore plus versatiles et peuvent être utilisées avec plusieurs programmes populaires. Par exemple, de nombreux éditeurs de texte implémentent des expressions régulières pour rechercher et remplacer du texte.
En outre, la plupart des langages de programmation modernes utilisent des expressions régulières pour exécuter des procédures sur des données spécifiques. Une fois que vous aurez compris les expressions régulières, vous pourrez transférer ces connaissances sur plusieurs tâches informatiques courantes, de la recherche avancée dans votre éditeur de texte à la validation des entrées de l’utilisateur.
A Linux server, like any other computer you may be familiar with, runs applications. To the computer, these are considered “processes”.
While Linux will handle the low-level, behind-the-scenes management in a process's life-cycle, you will need a way of interacting with the operating system to manage it from a higher level.
In this guide, we will discuss some simple aspects of process management. Linux provides an abundant collection of tools for this purpose.
We will explore these ideas on an Ubuntu 12.04 VPS, but any modern Linux distribution will operate in a similar way.
The easiest way to find out what processes are running on your server is to run the top command:
top
top - 15:14:40 up 46 min,  1 user,  load average: 0.00, 0.01, 0.05
Tasks:  56 total,   1 running,  55 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1019600k total,   316576k used,   703024k free,     7652k buffers
Swap:        0k total,        0k used,        0k free,   258976k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    1 root      20   0 24188 2120 1300 S  0.0  0.2   0:00.56 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
    3 root      20   0     0    0    0 S  0.0  0.0   0:00.07 ksoftirqd/0
    6 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
    7 root      RT   0     0    0    0 S  0.0  0.0   0:00.03 watchdog/0
    8 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 cpuset
    9 root       0 -20     0    0    0 S  0.0  0.0   0:00.00 khelper
   10 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kdevtmpfs
The top chunk of information gives system statistics, such as the system load and the total number of tasks.
You can easily see that there is 1 running process, and 55 processes that are sleeping (a.k.a. idle/not using CPU resources).
The bottom portion has the running processes and their usage statistics.
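If you ever need to capture this information from a script or a log rather than interactively, top can also run non-interactively. A minimal sketch, assuming the standard procps version of top, prints a single snapshot and exits:
top -bn1 | head -n 10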
An improved version of top, called htop, is available in the repositories. Install it with this command:
sudo apt-get install htop
If we run the htop command, we will see a more user-friendly display:
htop
  Mem[|||||||||||           49/995MB]     Load average: 0.00 0.03 0.05
  CPU[                          0.0%]     Tasks: 21, 3 thr; 1 running
  Swp[                         0/0MB]     Uptime: 00:58:11

  PID USER     PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 1259 root      20   0 25660  1880  1368 R  0.0  0.2  0:00.06 htop
    1 root      20   0 24188  2120  1300 S  0.0  0.2  0:00.56 /sbin/init
  311 root      20   0 17224   636   440 S  0.0  0.1  0:00.07 upstart-udev-brid
  314 root      20   0 21592  1280   760 S  0.0  0.1  0:00.06 /sbin/udevd --dae
  389 messagebu 20   0 23808   688   444 S  0.0  0.1  0:00.01 dbus-daemon --sys
  407 syslog    20   0  243M  1404  1080 S  0.0  0.1  0:00.02 rsyslogd -c5
  408 syslog    20   0  243M  1404  1080 S  0.0  0.1  0:00.00 rsyslogd -c5
  409 syslog    20   0  243M  1404  1080 S  0.0  0.1  0:00.00 rsyslogd -c5
  406 syslog    20   0  243M  1404  1080 S  0.0  0.1  0:00.04 rsyslogd -c5
  553 root      20   0 15180   400   204 S  0.0  0.0  0:00.01 upstart-socket-br
You can learn more about how to use top and htop here.
Both top and htop provide a nice interface for viewing running processes, similar to a graphical task manager.
However, these tools are not always flexible enough to adequately cover all scenarios. A powerful command called ps is often the answer to these problems.
When called without any arguments, the output can be a bit lackluster:
ps
  PID TTY          TIME CMD
 1017 pts/0    00:00:00 bash
 1262 pts/0    00:00:00 ps
This output shows all of the processes associated with the current user and terminal session. This makes sense, because we are only running bash and ps with this terminal currently.
To get a fuller picture of the processes on this system, we can run the following:
ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.2  24188  2120 ?        Ss   14:28   0:00 /sbin/init
root         2  0.0  0.0      0     0 ?        S    14:28   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        S    14:28   0:00 [ksoftirqd/0]
root         6  0.0  0.0      0     0 ?        S    14:28   0:00 [migration/0]
root         7  0.0  0.0      0     0 ?        S    14:28   0:00 [watchdog/0]
root         8  0.0  0.0      0     0 ?        S<   14:28   0:00 [cpuset]
root         9  0.0  0.0      0     0 ?        S<   14:28   0:00 [khelper]
. . .
These options tell ps to show processes owned by all users (regardless of their terminal association) in a user-friendly format.
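ps can also print only the columns you care about. As a sketch using standard procps options, the following sorts every process by memory usage and shows the biggest consumers:
ps -eo pid,ppid,ni,%mem,cmd --sort=-%mem | head -n 6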
To see a tree view, where hierarchical relationships are illustrated, we can run the command with these options:
ps axjf
 PPID   PID  PGID   SID TTY      TPGID STAT   UID   TIME COMMAND
    0     2     0     0 ?           -1 S        0   0:00 [kthreadd]
    2     3     0     0 ?           -1 S        0   0:00  \_ [ksoftirqd/0]
    2     6     0     0 ?           -1 S        0   0:00  \_ [migration/0]
    2     7     0     0 ?           -1 S        0   0:00  \_ [watchdog/0]
    2     8     0     0 ?           -1 S<       0   0:00  \_ [cpuset]
    2     9     0     0 ?           -1 S<       0   0:00  \_ [khelper]
    2    10     0     0 ?           -1 S        0   0:00  \_ [kdevtmpfs]
    2    11     0     0 ?           -1 S<       0   0:00  \_ [netns]
. . .
As you can see, the kthreadd process is shown to be a parent of the ksoftirqd/0 process and the others.
In Linux and Unix-like systems, each process is assigned a process ID, or PID. This is how the operating system identifies and keeps track of processes.
A quick way of getting the PID of a process is with the pgrep command:
pgrep bash
1017
This will simply query the process ID and return it.
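pgrep can also return a bit more context. As a sketch, the -l flag lists the matching process names alongside their PIDs, and -u restricts the search to processes owned by a given user:
pgrep -l ssh
pgrep -u root -l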
The first process spawned at boot, called init, is given the PID of “1”.
pgrep init
1
This process is then responsible for spawning every other process on the system. Later processes are given larger PID numbers.
A process's parent is the process that was responsible for spawning it. Parent processes have a PPID, which you can see in the column headers in many process management applications, including top, htop, and ps.
Any communication between the user and the operating system about processes involves translating between process names and PIDs at some point during the operation. This is why utilities will tell you the PID.
A child process is created in two steps: fork(), which creates a new address space and copies the resources owned by the parent via copy-on-write so that they are available to the child process; and exec(), which loads an executable into the address space and executes it.
In the event that a child process dies before its parent, the child becomes a zombie until the parent has collected information about it or indicated to the kernel that it does not need that information. The resources of the child process will then be freed. If the parent process dies before the child, however, the child will be adopted by init, though it may also be reassigned to another process.
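You can observe the parent/child relationship directly from your shell. A minimal sketch using the bash variables $$ (the current shell's PID) and $! (the PID of the most recent background job):
echo $$
sleep 60 &
ps -o pid,ppid,cmd -p $!
The PPID column reported for the sleep process should match the PID that your shell printed in the first step.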
All processes in Linux respond to signals. Signals are an operating system-level way of telling programs to terminate or modify their behavior.
The most common way of passing signals to a program is with the kill command.
As you might expect, the default functionality of this utility is to attempt to kill a process:
kill PID_of_target_process
This sends the TERM signal to the process. The TERM signal tells the process to please terminate. This allows the program to perform clean-up operations and exit smoothly.
If the program is misbehaving and does not exit when given the TERM signal, we can escalate the signal by passing the KILL signal:
kill -KILL PID_of_target_process
This is a special signal that is not sent to the program.
Instead, it is given to the operating system kernel, which shuts the process down. This is used to bypass programs that ignore the signals sent to them.
Each signal has an associated number that can be passed instead of the name. For instance, you can pass “-15” instead of “-TERM”, and “-9” instead of “-KILL”.
Signals are not only used to shut down programs. They can also be used to perform other actions.
For instance, many daemons will restart when they are given the HUP, or hang-up, signal. Apache is one program that operates like this.
sudo kill -HUP pid_of_apache
The above command will cause Apache to reload its configuration file and resume serving content.
You can list all of the signals that can be sent with kill by typing:
kill -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
. . .
Although the conventional way of sending signals is through the use of PIDs, there are also methods of doing this with regular process names.
The pkill command works in almost exactly the same way as kill, but it operates on a process name instead:
pkill -9 ping
The above command is the equivalent of:
kill -9 `pgrep ping`
You can use the killall command to send a signal to every instance of a certain process:
killall firefox
The above command will send the TERM signal to every instance of firefox running on the computer.
You will often need to adjust the priority given to processes in a server environment.
Some processes might be considered mission-critical for your situation, while others may be executed whenever there are leftover resources.
Linux controls priority through a value called niceness.
High-priority tasks are considered less nice, because they don't share resources as well. Low-priority processes, on the other hand, are nice because they insist on only taking minimal resources.
When we ran top at the beginning of the article, there was a column marked “NI”. This is the nice value of the process:
top
Tasks:  56 total,   1 running,  55 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1019600k total,   324496k used,   695104k free,     8512k buffers
Swap:        0k total,        0k used,        0k free,   264812k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1635 root      20   0 17300 1200  920 R  0.3  0.1   0:00.01 top
    1 root      20   0 24188 2120 1300 S  0.0  0.2   0:00.56 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
    3 root      20   0     0    0    0 S  0.0  0.0   0:00.11 ksoftirqd/0
Nice values can range between “-19/-20” (highest priority) and “19/20” (lowest priority), depending on the system.
To run a program with a certain nice value, we can use the nice command:
nice -n 15 command_to_execute
This only works when beginning a new program.
To alter the nice value of a program that is already executing, we use a tool called renice:
renice 0 PID_to_prioritize
Note: While nice necessarily operates with a command name, renice operates by calling the PID of a process.
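To confirm that the change took effect, you can inspect the NI column for just that process; the PID 2467 below is a hypothetical placeholder for whichever process you reniced:
ps -o pid,ni,cmd -p 2467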
Process management is a topic that is sometimes difficult for new users to grasp, because the tools used are different from their graphical counterparts.
However, the ideas are familiar and intuitive, and with a little practice, they will become natural. Because processes are involved in everything you do with a computer system, learning to control them effectively is an essential skill.
By Justin Ellingwood
Privilege separation is one of the fundamental security paradigms implemented in Linux and Unix-like operating systems. Regular users operate with limited privileges in order to reduce the scope of their influence to their own environment, and not the wider operating system.
A special user, called root, has superuser privileges. This is an administrative account without the restrictions that are present for normal users. Users can execute commands with superuser or root privileges in a number of different ways.
In this article, we will discuss how to correctly and securely obtain root privileges, with a special focus on editing the /etc/sudoers file.
We will be completing these steps on an Ubuntu 20.04 server, but most modern Linux distributions, such as Debian and CentOS, should operate in a similar manner.
This guide assumes that you have already completed the initial server setup discussed here. Log into your server as a regular, non-root user and continue below.
Note: This tutorial goes into depth about privilege escalation and the sudoers file. If you just wish to add sudo privileges to a user, check out our quickstart tutorials, How To Create a New Sudo-enabled User on Ubuntu and CentOS.
There are three basic ways to obtain root privileges, which vary in their level of sophistication.
The simplest and most straightforward method of obtaining root privileges is to log into your server directly as the root user.
If you are logging into a local machine (or using an out-of-band console feature on a virtual server), enter root as your username at the login prompt and enter the root password.
If you are logging in through SSH, specify the root user prior to the IP address or domain name in your SSH connection string:
- ssh root@server_domain_or_ip
If you have not set up SSH keys for the root user, enter the root password when prompted.
Using su to become root
In general, logging in directly as root is not recommended. Even though it makes it easy to begin using the system for non-administrative tasks, the practice remains dangerous.
The next way of gaining superuser privileges allows you to become the root user at any time, whenever you need it.
You can do this by invoking the su command, which stands for “substitute user”. To gain root privileges, type:
- su
You will be prompted for the root user's password, after which you will be dropped into a root shell session.
When you have finished the tasks that require root privileges, return to your normal shell by typing:
- exit
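As a side note, su accepts a couple of variations that are often useful: su - starts a login shell with root's own environment, and su -c runs a single command as root before returning you to your own shell. A quick sketch:
- su -
- su -c 'whoami'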
Using sudo to run commands as root
The final way of obtaining root privileges that we will discuss is with the sudo command.
The sudo command allows you to execute one-off commands with root privileges, without the need to spawn a new shell. It is executed like this:
- sudo command_to_execute
Unlike su, the sudo command will request the password of the current user, not the root password.
Because of its security implications, sudo access is not granted to users by default, and must be configured before it functions correctly. Check out our quickstart tutorials, How To Create a New Sudo-enabled User on Ubuntu and CentOS, to learn how to set up a sudo-enabled user.
In the following section, we will discuss how to modify the sudo configuration in greater detail.
The sudo command is configured through a file located at /etc/sudoers.
Warning: Never edit this file with a normal text editor! Always use the visudo command instead!
Because incorrect syntax in the /etc/sudoers file can leave you with a broken system where it is impossible to obtain elevated privileges, it is important to use the visudo command to edit the file.
The visudo command opens a text editor like normal, but it validates the syntax of the file upon saving. This prevents configuration errors from blocking sudo operations, which may be your only way of obtaining root privileges.
Traditionally, visudo opens the /etc/sudoers file with the vi text editor. Ubuntu, however, has configured visudo to use the nano text editor instead.
If you would like to change it back to vi, issue the following command:
- sudo update-alternatives --config editor
Output There are 4 choices for the alternative editor (providing /usr/bin/editor).
Selection Path Priority Status
------------------------------------------------------------
* 0 /bin/nano 40 auto mode
1 /bin/ed -100 manual mode
2 /bin/nano 40 manual mode
3 /usr/bin/vim.basic 30 manual mode
4 /usr/bin/vim.tiny 10 manual mode
Press <enter> to keep the current choice[*], or type selection number:
Select the number that corresponds to the choice you would like to make.
On CentOS, you can change this value by adding the following line to your ~/.bashrc:
- export EDITOR=`which name_of_editor`
Source the file to implement the changes:
- . ~/.bashrc
Once you have configured visudo, execute the command to access the /etc/sudoers file:
- sudo visudo
The /etc/sudoers file will open in your selected text editor.
Below is the file from Ubuntu 18.04, with the comments removed. The CentOS /etc/sudoers file has many more lines, some of which we will not discuss in this guide.
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
root ALL=(ALL:ALL) ALL
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL
#includedir /etc/sudoers.d
Let's take a look at what these lines do.
The first line, “Defaults env_reset”, resets the terminal environment to remove any user variables. This is a safety measure used to clear potentially harmful environmental variables from the sudo session.
The second line, Defaults mail_badpass, tells the system to mail notices of bad sudo password attempts to the configured mailto user. By default, this is the root account.
The third line, which begins with “Defaults secure_path=…”, specifies the PATH (the places in the filesystem the operating system will look for applications) that will be used for sudo operations. This prevents using user paths that may be harmful.
The fourth line, which dictates the root user's sudo privileges, is different from the preceding lines. Let's take a look at what the different fields mean:
root ALL=(ALL:ALL) ALL
The first field indicates the username that the rule will apply to (root).
root ALL=(ALL:ALL) ALL
The first “ALL” indicates that this rule applies to all hosts.
root ALL=(ALL:ALL) ALL
This “ALL” indicates that the root user can run commands as all users.
root ALL=(ALL:ALL) ALL
This “ALL” indicates that the root user can run commands as all groups.
root ALL=(ALL:ALL) ALL
The last “ALL” indicates that these rules apply to all commands.
This means that our root user can run any command using sudo, as long as they provide their password.
The next two lines are similar to the user privilege lines, but they specify sudo rules for groups.
Names beginning with a % indicate group names.
Here, we see that the admin group can execute any command as any user on any host. Similarly, the sudo group has the same privileges, but can execute as any group as well.
The last line might look like a comment at first glance:
. . .
#includedir /etc/sudoers.d
It does begin with a #, which usually indicates a comment. However, this line actually indicates that files within the /etc/sudoers.d directory will be sourced and applied as well.
Files within that directory follow the same rules as the /etc/sudoers file itself. Any file that does not end in ~ and that does not contain a . will be read and appended to the sudo configuration.
This is mainly meant for applications that alter sudo privileges upon installation. Putting all of the associated rules in a single file within the /etc/sudoers.d directory makes it easy to see which privileges are associated with which accounts, and just as easy to reverse them, without having to try to manipulate the /etc/sudoers file directly.
As with the /etc/sudoers file itself, you should always edit files within the /etc/sudoers.d directory with visudo. The syntax for editing these files would be:
- sudo visudo -f /etc/sudoers.d/file_to_edit
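For instance, a minimal drop-in file for a hypothetical user might contain a single rule. Assuming you opened it with sudo visudo -f /etc/sudoers.d/sammy, the contents could look like this (the username sammy and the command path are placeholders for your own):
sammy ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart nginx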
The most common operation that users want to accomplish when managing sudo permissions is to grant a new user general sudo access. This is useful if you want to give an account full administrative access to the system.
On a system set up with a general-purpose administration group, like the Ubuntu system in this guide, the easiest way to do this is to add the user in question to that group.
For example, on Ubuntu 20.04, the sudo group has full admin privileges. We can grant a user these same privileges by adding them to the group like this:
- sudo usermod -aG sudo username
You can also use the gpasswd command:
- sudo gpasswd -a username sudo
Both of these will accomplish the same thing.
On CentOS, the wheel group is usually used instead of the sudo group:
- sudo usermod -aG wheel username
Or, using gpasswd:
- sudo gpasswd -a username wheel
On CentOS, if adding the user to the group does not work immediately, you may have to edit the /etc/sudoers file to uncomment the group name:
- sudo visudo
. . .
%wheel ALL=(ALL) ALL
. . .
Now that we are familiar with the general syntax of the file, let's create some new rules.
The sudoers file can be organized more efficiently by grouping things with various kinds of “aliases”.
For instance, we can create three different groups of users, with overlapping membership:
. . .
User_Alias GROUPONE = abby, brent, carl
User_Alias GROUPTWO = brent, doris, eric
User_Alias GROUPTHREE = doris, felicia, grant
. . .
Group names must start with a capital letter. We can then allow members of GROUPTWO to update the apt database by creating a rule like this:
. . .
GROUPTWO ALL = /usr/bin/apt-get update
. . .
If we do not specify a user/group to run as, sudo defaults to the root user.
We can allow members of GROUPTHREE to shut down and reboot the machine by creating a “command alias” and using that in a rule for GROUPTHREE:
. . .
Cmnd_Alias POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot, /sbin/restart
GROUPTHREE ALL = POWER
. . .
We create a command alias called POWER that contains commands to power off and reboot the machine. We then allow the members of GROUPTHREE to execute these commands.
We can also create “run as” aliases, which can replace the portion of the rule that specifies the user to execute the command as:
. . .
Runas_Alias WEB = www-data, apache
GROUPONE ALL = (WEB) ALL
. . .
This will allow anyone who is a member of GROUPONE to execute commands as the www-data user or the apache user.
Just keep in mind that later rules will override earlier rules when there is a conflict between the two.
There are a number of ways that you can get more control over how sudo reacts to a call.
The updatedb command associated with the mlocate package is relatively harmless on a single-user system. If we want to allow users to execute it with root privileges without having to type a password, we can make a rule like this:
. . .
GROUPONE ALL = NOPASSWD: /usr/bin/updatedb
. . .
NOPASSWD is a “tag” that means no password will be requested. It has a companion tag called PASSWD that represents the default behavior. A tag is relevant for the rest of the rule unless overruled by its “twin” tag later down the line.
For instance, we can have a line like this:
. . .
GROUPTWO ALL = NOPASSWD: /usr/bin/updatedb, PASSWD: /bin/kill
. . .
Another helpful tag is NOEXEC, which can be used to prevent some dangerous behavior in certain programs.
For example, some programs, like less, can spawn other commands by typing this from within their interface:
!command_to_run
This basically executes any command the user gives it with the same permissions that less is running under, which can be quite dangerous.
To restrict this, we could use a line like this:
. . .
username ALL = NOEXEC: /usr/bin/less
. . .
There are a few more pieces of information that may be useful when working with sudo.
If you specified a user or group to “run as” in the configuration file, you can execute commands as those users by using the -u and -g flags, respectively:
- sudo -u run_as_user command
- sudo -g run_as_group command
For convenience, by default, sudo will save your authentication details for a certain amount of time in one terminal. This means you will not have to type your password in again until that timer runs out.
For security purposes, if you wish to clear this timer when you are finished running administrative commands, you can run:
- sudo -k
If, on the other hand, you want to “prime” the sudo command so that you won't be prompted later, or to renew your sudo lease, you can type:
- sudo -v
You will be prompted for your password, which will then be cached for subsequent sudo uses until the sudo time frame expires.
If you are simply wondering what kind of privileges are defined for your username, you can type:
- sudo -l
This will list all of the rules in the /etc/sudoers file that apply to your user. This gives you a good idea of what you will or will not be allowed to do with sudo as any user.
There are many times when you will execute a command and it will fail because you forgot to preface it with sudo. To avoid having to retype the command, you can take advantage of a bash feature that means “repeat the last command”:
- sudo !!
The double exclamation point will repeat the last command. We preceded it with sudo to quickly change the unprivileged command into a privileged command.
For a bit of fun, you can add the following line to your /etc/sudoers file with visudo:
- sudo visudo
. . .
Defaults insults
. . .
This will cause sudo to return a silly insult when a user types in an incorrect sudo password. We can use sudo -k to clear the previously cached sudo password to try it out:
- sudo -k
- sudo ls
Output [sudo] password for demo:    # enter an incorrect password here to see the results
Your mind just hasn't been the same since the electro-shock, has it?
[sudo] password for demo:
My mind is going. I can feel it.
You should now have a basic understanding of how to read and modify the sudoers file, and a grasp of the various methods you can use to obtain root privileges.
Remember, superuser privileges are kept away from regular users for a reason. It is essential that you understand what each action you take with root privileges does. Do not take the responsibility lightly. Learn the best way to use these tools for your use case, and lock down any functionality that is not needed.
SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a Linux server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
While there are a few different ways of logging into an SSH server, in this guide we'll focus on setting up SSH keys. SSH keys provide an easy, yet extremely secure way of logging into your server. For this reason, this is the method we recommend for all users.
An SSH server can authenticate clients using a variety of different methods. The most basic of these is password authentication, which is easy to use, but not the most secure.
Although passwords are sent to the server in a secure manner, they are generally not complex or long enough to resist repeated, persistent attackers. Modern processing power combined with automated scripts makes brute-forcing a password-protected account very possible. Although there are other methods of adding additional security (fail2ban, etc.), SSH keys prove to be a reliable and secure alternative.
SSH key pairs are two cryptographically secure keys that can be used to authenticate a client to an SSH server. Each key pair consists of a public key and a private key.
The private key is retained by the client and should be kept absolutely secret. Any compromise of the private key will allow an attacker to log into servers that are configured with the associated public key without additional authentication. As an additional precaution, the key can be encrypted on disk with a passphrase.
The associated public key can be shared freely without any negative consequences. The public key can be used to encrypt messages that only the private key can decrypt. This property is employed as a way of authenticating using the key pair.
The public key is uploaded to a remote server that you want to be able to log into with SSH. The key is added to a special file within the user account you will be logging into, called ~/.ssh/authorized_keys.
When a client attempts to authenticate using SSH keys, the server can test the client on whether they are in possession of the private key. If the client can prove that it owns the private key, a shell session is spawned or the requested command is executed.
The first step to configure SSH key authentication for your server is to generate an SSH key pair on your local computer.
To do this, we can use a special utility called ssh-keygen, which is included with the standard OpenSSH suite of tools. By default, this will create a 2048-bit RSA key pair, which is fine for most uses.
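If you would rather have a larger key, ssh-keygen accepts standard OpenSSH flags for the key type and size; for example, the following would request a 4096-bit RSA pair instead of the default:
ssh-keygen -t rsa -b 4096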
On your local computer, generate an SSH key pair by typing:
ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa):
The utility will prompt you to select a location for the keys that will be generated. By default, the keys will be stored in the ~/.ssh directory within your user's home directory. The private key will be called id_rsa and the associated public key will be called id_rsa.pub.
Generally, it is best to stick with the default location at this stage. Doing so will allow your SSH client to automatically find your SSH keys when attempting to authenticate. If you would like to choose a non-standard path, type that in now; otherwise, press ENTER to accept the default.
If you had previously generated an SSH key pair, you may see a prompt that looks like this:
/home/username/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting “yes”, as this is a destructive process that cannot be reversed.
Created directory '/home/username/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Next, you will be prompted to enter a passphrase for the key. This is an optional passphrase that can be used to encrypt the private key file on disk.
You may be wondering what advantages an SSH key provides if you still need to enter a passphrase. Since the private key is never exposed to the network and is protected through file permissions, this file should never be accessible to anyone other than you (and the root user). The passphrase serves as an additional layer of protection in case these conditions are compromised.
A passphrase is an optional addition. If you enter one, you will have to provide it every time you use this key (unless you are running SSH agent software that stores the decrypted key). We recommend using a passphrase, but if you do not want to set one, you can simply press ENTER to bypass this prompt.
Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub.
The key fingerprint is:
a9:49:2e:2a:5e:33:3e:a9:de:4e:77:11:58:b6:90:26 username@remote_host
The key's randomart image is:
+--[ RSA 2048]----+
| ..o |
| E o= . |
| o. o |
| .. |
| ..S |
| o o. |
| =o.+. |
|. =++.. |
|o=++. |
+-----------------+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH key authentication to log in.
If you are starting a new DigitalOcean server, you can automatically embed your SSH public key in your new server's root account.
At the bottom of the Droplet creation page, there is an option to add SSH keys to your server:
If you have already added a public key file to your DigitalOcean account, you will see it here as a selectable option (there are two existing keys in the example above: “Work key” and “Home key”). To embed an existing key, simply click it to highlight it. You can embed multiple keys on a single server:
If you do not yet have a public SSH key uploaded to your account, or if you would like to add a new key to your account, click the “+ Add SSH Key” button. This will expand to a prompt:
In the “SSH Key content” box, paste the content of your SSH public key. Assuming you generated your keys using the method above, you can obtain your public key contents on your local computer by typing:
cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNqqi1mHLnryb1FdbePrSZQdmXRZxGZbo0gTfglysq6KMNUNY2VhzmYN9JYW39yNtjhVxqfW6ewc+eHiL+IRRM1P5ecDAaL3V0ou6ecSurU+t9DR4114mzNJ5SqNxMgiJzbXdhR+j55GjfXdk0FyzxM3a5qpVcGZEXiAzGzhHytUV51+YGnuLGaZ37nebh3UlYC+KJev4MYIVww0tWmY+9GniRSQlgLLUQZ+FcBUjaqhwqVqsHe4F/woW1IHe7mfm63GXyBavVc+llrEzRbMO111MogZUcoWDI9w7UIm8ZOTnhJsk7jhJzG2GpSXZHmly/a/buFaaFnmfZ4MYPkgJD username@example.com
Paste this value, in its entirety, into the larger box. In the “Comment (optional)” box, you can choose a label for the key. This will be displayed as the key name in the DigitalOcean interface:
When you create your Droplet, the public SSH keys that you selected will be placed in the ~/.ssh/authorized_keys file of the root user's account. This will allow you to log into the server from the computer holding your private key.
If you already have a server available and did not embed keys upon creation, you can still upload your public key and use it to authenticate to your server.
The method you use depends largely on the tools you have available and the details of your current configuration. The following methods all yield the same end result. The first method is the simplest and most automated; the ones that follow each require additional manual steps if you are unable to use the preceding methods.
The simplest way to copy your public key to an existing server is to use a utility called ssh-copy-id. Because of its simplicity, this method is recommended if available.
The ssh-copy-id tool is included in the OpenSSH packages of many distributions, so you may already have it on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you simply need to specify the remote host that you would like to connect to, and the user account that you have password-based SSH access to. This is the account to which your public SSH key will be copied.
The syntax is:
ssh-copy-id username@remote_host
You may see a message like this:
The authenticity of host '111.111.11.111 (111.111.11.111)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This simply means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER to continue.
Next, the utility will scan your local account for the id_rsa.pub key that we created earlier. When it finds the key, it will prompt you for the password of the remote user's account:
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@111.111.11.111's password:
Type in the password (your typing will not be displayed for security purposes) and press ENTER. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub key into a file in the remote account's home ~/.ssh directory called authorized_keys.
You will see output that looks like this:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@111.111.11.111'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub key has been uploaded to the remote account. You can continue on to the next section.
If you do not have ssh-copy-id available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.
We can do this by outputting the content of our public SSH key on our local computer and piping it through an SSH connection to the remote server. On the other side, we can make sure that the ~/.ssh directory exists under the account we are using, and then output the content we piped over into a file called authorized_keys within this directory.
We will use the >> redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying previously added keys.
The full command looks like this:
cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see a message like this:
The authenticity of host '111.111.11.111 (111.111.11.111)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This simply means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER to continue.
Afterwards, you will be prompted to enter the password of the account you are attempting to connect to:
username@111.111.11.111's password:
After entering your password, the content of your id_rsa.pub key will be copied to the end of the authorized_keys file of the remote user's account. If this works, continue on to the next section.
If you do not have password-based SSH access to your server available, you will have to complete the process above manually.
The content of your id_rsa.pub file will have to be added to a file at ~/.ssh/authorized_keys on your remote machine somehow.
To display the content of your id_rsa.pub key, type this into your local computer:
cat ~/.ssh/id_rsa.pub
You will see the key's content, which should look something like this:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test
Access your remote host using whatever method you have available. For instance, if your server is a DigitalOcean Droplet, you can log in using the web console available in the control panel:
Once you have access to your account on the remote server, you should make sure the ~/.ssh directory exists. This command will create the directory if necessary, or do nothing if it already exists:
mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys file within this directory. You can add the contents of your id_rsa.pub file to the end of the authorized_keys file, creating it if necessary, using this command:
echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute public_key_string with the output from the cat ~/.ssh/id_rsa.pub command that you executed on your local system. It should start with ssh-rsa AAAA....
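Depending on how the account was created, you may also need to make sure the permissions are strict enough for the SSH daemon to accept the file; the conventional settings are:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys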
If this works, you can move on to trying to authenticate without a password.
If you have successfully completed one of the procedures above, you should be able to log into the remote host without the remote account's password.
The basic process is the same:
ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
The authenticity of host '111.111.11.111 (111.111.11.111)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This simply means that your local computer does not recognize the remote host. Type “yes” and then press ENTER to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created it, you will be prompted to enter it now. Afterwards, a new shell session should be spawned for you with the account on the remote system.
If this works, continue on to find out how to lock down the server.
If you were able to log into your account using SSH without a password, you have successfully configured SSH key-based authentication for your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH key-based authentication configured for the root account on this server or, preferably, that you have SSH key-based authentication configured for an account on this server with sudo access. This step will lock down password-based logins, so it is essential to ensure that you will still be able to get administrative access.
Once the above conditions are true, log into your remote server with SSH keys, either as root or with an account with sudo privileges. Open the SSH daemon's configuration file:
sudo nano /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication. It may be commented out. Uncomment the line and set the value to “no”. This will disable your ability to log in via SSH using account passwords:
PasswordAuthentication no
Save and close the file when you are finished. To actually implement the changes you just made, you must restart the service.
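Before restarting, it can be worth having the daemon validate the file, since a syntax error here could lock you out; OpenSSH provides a test mode for exactly this:
sudo sshd -t
If the command prints nothing, the configuration is syntactically valid.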
On Ubuntu or Debian machines, you can issue this command:
sudo service ssh restart
On CentOS/Fedora machines, the daemon is called sshd:
sudo service sshd restart
After completing this step, you have successfully transitioned your SSH daemon to respond only to SSH keys.
You should now have SSH key-based authentication configured and running on your server, allowing you to sign in without providing an account password. From here, there are many directions you can head in. If you would like to learn more about working with SSH, take a look at our SSH Essentials guide.
]]>Un servidor Linux, como cualquier otro equipo con el que pueda estar familiarizado, ejecuta aplicaciones. Para el equipo, estos se consideran “procesos”.
Mientras que Linux se encargará de la administración de bajo nivel, entre bastidores, en el ciclo de vida de un proceso, se necesitará una forma de interactuar con el sistema operativo para administrarlo desde un nivel superior.
En esta guía, explicaremos algunos aspectos sencillos de la administración de procesos. Linux proporciona una amplia colección de herramientas para este propósito.
Exploraremos estas ideas en un VPS de Ubuntu 12.04, pero cualquier distribución moderna de Linux funcionará de manera similar.
La forma más sencilla de averiguar qué procesos se están ejecutando en su servidor es ejecutar el comando top
:
top***
top - 15:14:40 up 46 min, 1 user, load average: 0.00, 0.01, 0.05 Tasks: 56 total, 1 running, 55 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1019600k total, 316576k used, 703024k free, 7652k buffers Swap: 0k total, 0k used, 0k free, 258976k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 20 0 24188 2120 1300 S 0.0 0.2 0:00.56 init 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 root 20 0 0 0 0 S 0.0 0.0 0:00.07 ksoftirqd/0 6 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0 7 root RT 0 0 0 0 S 0.0 0.0 0:00.03 watchdog/0 8 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 cpuset 9 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 khelper 10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kdevtmpfs
La parte superior de la información ofrece estadísticas del sistema, como la carga del sistema y el número total de tareas.
Se puede ver fácilmente que hay 1 proceso en ejecución y 55 procesos en reposo (es decir, inactivos/que no usan los recursos de la CPU).
La parte inferior tiene los procesos en ejecución y las estadísticas de uso.
Una versión mejorada de top
, llamada htop
, está disponible en los repositorios. Instálelo con este comando:
sudo apt-get install htop
Si ejecutamos el comando htop
, veremos que hay una pantalla más fácil de usar:
htop***
Mem[||||||||||| 49/995MB] Load average: 0.00 0.03 0.05 CPU[ 0.0%] Tasks: 21, 3 thr; 1 running Swp[ 0/0MB] Uptime: 00:58:11 PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 1259 root 20 0 25660 1880 1368 R 0.0 0.2 0:00.06 htop 1 root 20 0 24188 2120 1300 S 0.0 0.2 0:00.56 /sbin/init 311 root 20 0 17224 636 440 S 0.0 0.1 0:00.07 upstart-udev-brid 314 root 20 0 21592 1280 760 S 0.0 0.1 0:00.06 /sbin/udevd --dae 389 messagebu 20 0 23808 688 444 S 0.0 0.1 0:00.01 dbus-daemon --sys 407 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.02 rsyslogd -c5 408 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.00 rsyslogd -c5 409 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.00 rsyslogd -c5 406 syslog 20 0 243M 1404 1080 S 0.0 0.1 0:00.04 rsyslogd -c5 553 root 20 0 15180 400 204 S 0.0 0.0 0:00.01 upstart-socket-br
Puede aprender más sobre cómo usar top y htop aquí.
top
y htop
proporcionan una buena interfaz para ver los procesos en ejecución, similar a la de un administrador de tareas gráfico.
Sin embargo, estas herramientas no siempre son lo suficientemente flexibles para cubrir adecuadamente todos los escenarios. Un poderoso comando llamado ps
generalmente es la respuesta a estos problemas.
Cuando se invoca sin argumentos, el resultado puede ser un poco escaso:
ps***
PID TTY TIME CMD 1017 pts/0 00:00:00 bash 1262 pts/0 00:00:00 ps
Este resultado muestra todos los procesos asociados con el usuario actual y la sesión de terminal. Eso tiene sentido porque, actualmente, solo estamos ejecutando bash
y ps
con este terminal.
Para obtener un panorama más completo de los procesos en este sistema, podemos ejecutar lo siguiente:
ps aux***
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.2 24188 2120 ? Ss 14:28 0:00 /sbin/init root 2 0.0 0.0 0 0 ? S 14:28 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? S 14:28 0:00 [ksoftirqd/0] root 6 0.0 0.0 0 0 ? S 14:28 0:00 [migration/0] root 7 0.0 0.0 0 0 ? S 14:28 0:00 [watchdog/0] root 8 0.0 0.0 0 0 ? S< 14:28 0:00 [cpuset] root 9 0.0 0.0 0 0 ? S< 14:28 0:00 [khelper] . . .
Estas opciones ordenan a ps
que muestre los procesos de propiedad de todos los usuarios (independientemente de su asociación con el terminal) en un formato fácil de usar.
Para ver una vista de estructura jerárquica, en la que se ilustran las relaciones jerárquicas, podemos ejecutar el comando con las siguientes opciones:
ps axjf***
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND 0 2 0 0 ? -1 S 0 0:00 [kthreadd] 2 3 0 0 ? -1 S 0 0:00 \_ [ksoftirqd/0] 2 6 0 0 ? -1 S 0 0:00 \_ [migration/0] 2 7 0 0 ? -1 S 0 0:00 \_ [watchdog/0] 2 8 0 0 ? -1 S< 0 0:00 \_ [cpuset] 2 9 0 0 ? -1 S< 0 0:00 \_ [khelper] 2 10 0 0 ? -1 S 0 0:00 \_ [kdevtmpfs] 2 11 0 0 ? -1 S< 0 0:00 \_ [netns] . . .
Como puede ver, el proceso kthreadd
se muestra como proceso principal de kstadd/0
y los demás.
En Linux y sistemas tipo Unix, a cada proceso se le asigna un ID de proceso o PID. Esta es la forma en que el sistema operativo identifica y realiza un seguimiento de los procesos.
Una forma rápida de obtener el PID de un proceso es con el comando pgrep
:
pgrep bash***
1017
Esto simplemente consultará el ID del proceso y lo mostrará en el resultado.
El primer proceso que se generó en el arranque, llamado init, recibe el PID “1”.
pgrep init***
1
Entonces, este proceso es responsable de engendrar todos los demás procesos del sistema. Los procesos posteriores reciben números PID mayores.
Un proceso principal es el proceso que se encargó de generarlo. Los procesos principales tienen un PPID, que puede ver en los encabezados de las columnas en muchas aplicaciones de administración de procesos, incluidos top
, htop
y ps
.
Cualquier comunicación entre el usuario y el sistema operativo sobre los procesos implica traducir entre los nombres de procesos y los PID en algún momento durante la operación. Este es el motivo por el que las utilidades le indican el PID.
Para crear un proceso secundario se deben seguir dos pasos: fork(), que crea un nuevo espacio de direcciones y copia los recursos propiedad del principal mediante copy-on-write para que estén disponibles para el proceso secundario; y exec(), que carga un ejecutable en el espacio de direcciones y lo ejecuta.
En caso de que un proceso secundario muera antes que su proceso principal, el proceso secundario se convierte en un zombi hasta que el principal haya recopilado información sobre él o haya indicado al núcleo que no necesita esa información. Luego, los recursos del proceso secundario se liberarán. Sin embargo, si el proceso principal muere antes que el secundario, init adoptará el secundario, aunque también puede reasignarse a otro proceso.
Todos los procesos en Linux responden a señales. Las señales son una forma de decirle a los programas que terminen o modifiquen su comportamiento.
La forma más común de pasar señales a un programa es con el comando kill
.
Como es de esperar, la funcionalidad predeterminada de esta utilidad es intentar matar un proceso:
<pre>kill <span class=“highlight”>PID_of_target_process</span></pre>
Esto envía la señal TERM al proceso. La señal TERM indica al proceso debe terminar. Esto permite que el programa realice operaciones de limpieza y cierre sin problemas.
Si el programa tiene un mal comportamiento y no se cierra cuando se le da la señal TERM, podemos escalar la señal pasando la señal KILL
:
<pre>kill -KILL <span class=“highlight”>PID_of_target_process</span></pre>
Esta es una señal especial que no se envía al programa.
En su lugar, se envía al núcleo del sistema operativo, que cierra el proceso. Eso se utiliza para eludir los programas que ignoran las señales que se les envían.
Cada señal tiene un número asociado que puede pasar en vez del nombre. Por ejemplo, puede pasar “-15” en lugar de “-TERM” y “-9” en lugar de “-KILL”.
Las señales no solo se utilizan para cerrar programas. También pueden usarse para realizar otras acciones.
Por ejemplo, muchos demonios se reinician cuando reciben la señal HUP
o la señal de colgar. Apache es un programa que funciona así.
<pre>sudo kill -HUP <span class=“highlight”>pid_of_apache</span></pre>
El comando anterior hará que Apache vuelva a cargar su archivo de configuración y reanude sirviendo contenidos.
Puede enumerar todas las señales que puede enviar con kill escribiendo lo siguiente:
kill -l***
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP 6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1 11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM . . .
Aunque la forma convencional de enviar señales es mediante el uso de PID, también hay métodos para hacerlo con nombres de procesos regulares.
El comando pkill
funciona casi exactamente igual que kill
, pero funciona con un nombre de proceso en su lugar:
pkill -9 ping
El comando anterior es el equivalente a:
kill -9 `pgrep ping`
Si quiere enviar una señal a todas las instancias de un determinado proceso, puede utilizar el comando killall
:
killall firefox
El comando anterior enviará la señal TERM a todas las instancias de firefox que se estén ejecutando en el equipo.
You will often want to adjust which processes are given priority in a server environment. Some processes might be considered mission-critical for your situation, while others may only need to run when there are leftover resources.
Linux controls priority through a value called niceness.
High-priority tasks are considered less nice, because they don't share resources as well. Low-priority processes, on the other hand, are nice because they insist on only taking minimal resources.
When we ran top at the beginning of the article, there was a column marked "NI". This is the nice value of the process:
top
Tasks:  56 total,   1 running,  55 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1019600k total,   324496k used,   695104k free,     8512k buffers
Swap:        0k total,        0k used,        0k free,   264812k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1635 root      20   0 17300 1200  920 R  0.3  0.1   0:00.01 top
    1 root      20   0 24188 2120 1300 S  0.0  0.2   0:00.56 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
    3 root      20   0     0    0    0 S  0.0  0.0   0:00.11 ksoftirqd/0
Nice values can range between "-19/-20" (highest priority) and "19/20" (lowest priority) depending on the system.
To run a program with a certain nice value, we can use the nice command:
<pre>nice -n 15 <span class="highlight">command_to_execute</span></pre>
This only works when beginning a new program.
To alter the nice value of a program that is already executing, we use a tool called renice:
<pre>renice 0 <span class="highlight">PID_to_prioritize</span></pre>
Note: While nice necessarily operates on a command name, renice works by referencing a process PID.
Process management is a topic that can sometimes be difficult for new users to grasp, because the tools used are different from their graphical counterparts. However, the ideas are familiar and intuitive, and with a little practice they will become natural. Because processes are involved in everything you do with a computer system, learning how to control them effectively is an essential skill.
<div class="author">By Justin Ellingwood</div>
]]>Privilege separation is one of the fundamental security paradigms implemented in Linux and Unix-like operating systems. Regular users operate with limited privileges in order to reduce the scope of their influence to their own environment, and not the wider operating system.
A special user, called root, has superuser privileges. This is an administrative account without the restrictions that are present on normal users. Users can execute commands with superuser or root privileges in a number of different ways.
In this article, we will discuss how to correctly and securely obtain root privileges, with a special focus on editing the /etc/sudoers file.
We will be completing these steps on an Ubuntu 20.04 server, but most modern Linux distributions, such as Debian and CentOS, should operate in a similar manner.
This guide assumes that you have already completed the initial server setup mentioned here. Log in to your server as your regular, non-root user and continue below.
Note: This tutorial goes into depth on privilege escalation and the sudoers file. If you just want to add sudo privileges to a user, check out our quickstart tutorials, How To Create a New Sudo-enabled User on Ubuntu and CentOS.
There are three basic ways to obtain root privileges, which vary in their level of sophistication.
The simplest and most straightforward method of obtaining root privileges is to log in to your server directly as the root user.
If you are logging in to a local machine (or using an out-of-band console feature on a virtual server), enter root as your username at the login prompt and enter the root password when asked.
If you are logging in through SSH, specify the root user before the IP address or domain name in your SSH connection string:
- ssh root@server_domain_or_ip
If you have not set up SSH keys for the root user, enter the root password when prompted.
Using su to become root
Logging in directly as root is usually not recommended, because it is easy to begin using the system for non-administrative tasks, which is dangerous.
The next way of gaining superuser privileges allows you to become the root user at any time, whenever you need to.
We can do this by invoking the su command, which stands for "substitute user". To gain root privileges, type:
- su
You will be prompted for the root user's password, after which you will be dropped into a root shell session.
When you have finished the tasks that require root privileges, return to your normal shell by typing:
- exit
Using sudo to run commands as root
The final way of obtaining root privileges that we will discuss is with the sudo command.
The sudo command allows you to execute one-off commands with root privileges, without the need to spawn a new shell. It is executed like this:
- sudo command_to_execute
Unlike su, the sudo command will request the password of the current user, not the root password.
Because of its security implications, sudo access is not granted to users by default and must be configured before it works correctly. Check out our quickstart tutorials, How To Create a New Sudo-enabled User on Ubuntu and CentOS, to learn how to set up a sudo-enabled user.
In the following section, we will discuss how to modify the sudo configuration in greater detail.
The sudo command is configured through a file located at /etc/sudoers.
Warning: Never edit this file with a normal text editor! Always use the visudo command instead!
Because improper syntax in the /etc/sudoers file can leave you with a broken system where it is impossible to obtain elevated privileges, it is important to use the visudo command to edit the file.
The visudo command opens a text editor like normal, but it validates the syntax of the file upon saving. This prevents configuration errors from blocking sudo operations, which may be your only way of obtaining root privileges.
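For instance, if you try to save a file containing a malformed line, visudo will report the problem and prompt you for what to do rather than installing the broken file. The output looks roughly like this (the line number is illustrative):
>>> /etc/sudoers: syntax error near line 28 <<<
What now?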
Traditionally, visudo opens the /etc/sudoers file with the vi text editor. Ubuntu, however, has configured visudo to use the nano text editor instead.
If you would like to change it back to vi, issue the following command:
- sudo update-alternatives --config editor
Output
There are 4 choices for the alternative editor (providing /usr/bin/editor).
Selection Path Priority Status
------------------------------------------------------------
* 0 /bin/nano 40 auto mode
1 /bin/ed -100 manual mode
2 /bin/nano 40 manual mode
3 /usr/bin/vim.basic 30 manual mode
4 /usr/bin/vim.tiny 10 manual mode
Press <enter> to keep the current choice[*], or type selection number:
Select the number that matches the choice you would like to make.
On CentOS, you can change this value by adding the following line to your ~/.bashrc:
- export EDITOR=`which name_of_editor`
Source the file to implement the changes:
- . ~/.bashrc
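For example, to select vim (assuming it is installed on your system), you would use:
- export EDITOR=`which vim`
- . ~/.bashrc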
After you have configured visudo, execute the command to access the /etc/sudoers file:
- sudo visudo
The /etc/sudoers file will open in your selected text editor.
I have copied and pasted the file from Ubuntu 18.04, with the comments removed. The CentOS /etc/sudoers file has many more lines, some of which we will not discuss in this guide.
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
root ALL=(ALL:ALL) ALL
%admin ALL=(ALL) ALL
%sudo ALL=(ALL:ALL) ALL
#includedir /etc/sudoers.d
Let's take a look at what these lines do.
The first line, "Defaults env_reset", resets the terminal environment to remove any user variables. This is a safety measure used to clear potentially harmful environment variables from the sudo session.
The second line, Defaults mail_badpass, tells the system to mail notices of bad sudo password attempts to the configured mailto user. By default, this is the root account.
The third line, which begins with "Defaults secure_path=...", specifies the PATH (the places in the filesystem the operating system will look for applications) that will be used for sudo operations. This prevents using user paths which may be harmful.
The fourth line, which dictates the root user's sudo privileges, is different from the preceding lines. Let's take a look at what the different fields mean:
root ALL=(ALL:ALL) ALL
The first field indicates the username that the rule will apply to (root).
root ALL=(ALL:ALL) ALL
The first "ALL" indicates that this rule applies to all hosts.
root ALL=(ALL:ALL) ALL
This "ALL" indicates that the root user can run commands as all users.
root ALL=(ALL:ALL) ALL
This "ALL" indicates that the root user can run commands as all groups.
root ALL=(ALL:ALL) ALL
The last "ALL" indicates that these rules apply to all commands.
This means that our root user can run any command using sudo, as long as they provide their password.
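The same four fields can express much narrower grants. As a hypothetical example, this rule would only let the user sammy run apt-get update, as root, from any host:
sammy ALL = (root) /usr/bin/apt-get update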
The next two lines are similar to the user privilege lines, but they specify sudo rules for groups.
Names beginning with a % indicate group names.
Here, we see that the admin group can execute any command as any user on any host. Similarly, the sudo group has the same privileges, but can execute as any group as well.
The last line might look like a comment at first glance:
. . .
#includedir /etc/sudoers.d
It does begin with a #, which usually indicates a comment. However, this line actually indicates that files within the /etc/sudoers.d directory will be sourced and applied as well.
Files within that directory follow the same rules as the /etc/sudoers file itself. Any file that does not end in ~ and that does not contain a . will be read and appended to the sudo configuration.
This is mainly meant for applications to alter sudo privileges upon installation. Putting all of the associated rules within a single file in the /etc/sudoers.d directory can make it easy to see which privileges are associated with which accounts, and to reverse credentials easily without having to manipulate the /etc/sudoers file directly.
As with the /etc/sudoers file itself, you should always edit files within the /etc/sudoers.d directory with visudo. The syntax for editing these files would be:
- sudo visudo -f /etc/sudoers.d/file_to_edit
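As a sketch, a drop-in file created this way might hold a single self-contained rule; the filename and group here are hypothetical:
- sudo visudo -f /etc/sudoers.d/backup_operators
%backup ALL = /usr/bin/rsync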
The most common operation that users want to accomplish when managing sudo permissions is to grant a new user general sudo access. This is useful if you want to give an account full administrative access to the system.
The easiest way of doing this on a system set up with a general-purpose administration group, like the Ubuntu system in this guide, is actually to add the user in question to that group.
For example, on Ubuntu 20.04, the sudo group has full admin privileges. We can grant a user these same privileges by adding them to the group like this:
- sudo usermod -aG sudo username
We can also use the gpasswd command:
- sudo gpasswd -a username sudo
Both of these will accomplish the same thing.
On CentOS, it is usually the wheel group instead of the sudo group:
- sudo usermod -aG wheel username
Or, using gpasswd:
- sudo gpasswd -a username wheel
On CentOS, if adding the user to the group does not work immediately, you may have to edit the /etc/sudoers file to uncomment the group name:
- sudo visudo
. . .
%wheel ALL=(ALL) ALL
. . .
Now that we are familiar with the general syntax of the file, let's create some new rules.
The sudoers file can be organized more easily by grouping things with various kinds of "aliases".
For instance, we can create three different groups of users, with overlapping membership:
. . .
User_Alias GROUPONE = abby, brent, carl
User_Alias GROUPTWO = brent, doris, eric
User_Alias GROUPTHREE = doris, felicia, grant
. . .
Group names must start with a capital letter. We can then allow members of GROUPTWO to update the apt database by creating a rule like this:
. . .
GROUPTWO ALL = /usr/bin/apt-get update
. . .
If we do not specify a user/group to run as, as in the rule above, sudo defaults to the root user.
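Spelled out explicitly, that default is equivalent to naming root as the run-as user:
GROUPTWO ALL = (root) /usr/bin/apt-get update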
We can allow members of GROUPTHREE to shut down and reboot the machine by creating a "command alias" and using that in a rule for GROUPTHREE:
. . .
Cmnd_Alias POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot, /sbin/restart
GROUPTHREE ALL = POWER
. . .
We created a command alias called POWER that contains commands to power off and reboot the machine. We then allowed the members of GROUPTHREE to execute those commands.
We can also create "Run as" aliases, which can replace the portion of the rule that specifies the user to execute the command as:
. . .
Runas_Alias WEB = www-data, apache
GROUPONE ALL = (WEB) ALL
. . .
This will allow anyone who is a member of GROUPONE to execute commands as the www-data user or the apache user.
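With that rule in place, a member of GROUPONE could then run a command as the web user via sudo's -u flag (covered again later in this guide), for example:
- sudo -u www-data whoami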
Just keep in mind that later rules will override earlier rules when there is a conflict between the two.
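As a hypothetical illustration, if both of these lines matched, the second would win, and doris would be prompted for a password despite the first line:
doris ALL = NOPASSWD: /usr/bin/apt-get update
doris ALL = PASSWD: /usr/bin/apt-get update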
There are a number of ways that you can get more control over how sudo reacts to a call.
The updatedb command associated with the mlocate package is relatively harmless on a single-user system. If we want to allow users to execute it with root privileges without having to type a password, we can make a rule like this:
. . .
GROUPONE ALL = NOPASSWD: /usr/bin/updatedb
. . .
NOPASSWD is a "tag" that means no password will be requested. It has a companion tag called PASSWD, which represents the default behavior. A tag is relevant for the rest of the rule unless overridden by its "twin" tag later on in the same line.
For instance, we can have a line like this:
. . .
GROUPTWO ALL = NOPASSWD: /usr/bin/updatedb, PASSWD: /bin/kill
. . .
Another helpful tag is NOEXEC, which can be used to prevent some dangerous behavior in certain programs.
For example, some programs, like less, can spawn other commands by typing this from within their interface:
!command_to_run
This basically executes any command the user gives it with the same permissions that less is running under, which can be quite dangerous.
To restrict this, we could use a line like the following:
. . .
username ALL = NOEXEC: /usr/bin/less
. . .
There are a few more pieces of information that may be useful when working with sudo.
If you specified a user or group to "run as" in the configuration file, you can execute commands as those users by using the -u and -g flags, respectively:
- sudo -u run_as_user command
- sudo -g run_as_group command
For convenience, by default, sudo will save your authentication details for a certain amount of time in a single terminal. This means you will not have to type your password in again until that timer runs out.
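The length of that grace period is controlled by the timestamp_timeout option in the sudoers file; for example, adding this line with visudo would cache credentials for 5 minutes:
Defaults timestamp_timeout=5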
For security purposes, if you wish to clear that timer when you are done running administrative commands, you can run:
- sudo -k
If, on the other hand, you want to "prime" the sudo command so that you will not be prompted later, or to renew your sudo lease, you can always type:
- sudo -v
You will be asked for your password, which will then be cached for subsequent sudo uses until the sudo time frame expires.
If you are simply wondering what kind of privileges are defined for your username, you can type:
- sudo -l
This will list all of the rules in the /etc/sudoers file that apply to your user. This gives you a good idea of what you will or will not be allowed to do with sudo as any user.
There are many times when you will execute a command and it will fail because you forgot to preface it with sudo. To avoid having to retype the command, you can take advantage of a bash feature that means "repeat the last command":
- sudo !!
The double exclamation point will repeat the last command. We preceded it with sudo to quickly change the unprivileged command into a privileged command.
For a little bit of fun, you can add the following line to your /etc/sudoers file with visudo:
- sudo visudo
. . .
Defaults insults
. . .
This will cause sudo to return a silly insult when a user types in an incorrect password for sudo. We can use sudo -k to clear the previously cached sudo password in order to try it out:
- sudo -k
- sudo ls
Output
[sudo] password for demo:    # enter an incorrect password here to see the results
Your mind just hasn't been the same since the electro-shock, has it?
[sudo] password for demo:
My mind is going. I can feel it.
You should now have a basic understanding of how to read and modify the sudoers file, and a grasp of the various methods that you can use to obtain root privileges.
Remember that superuser privileges are not given to regular users for a reason. It is essential that you understand what each command does before you execute it with root privileges. Do not take this responsibility lightly. Learn the best way to use these tools for your use case, and lock down any functionality that is not needed.
]]>ssh <newuser>@46.101.46.71
sudo mkdir .ssh
sudo chmod 0700 .ssh
sudo touch .ssh/authorized_keys
sudo chmod 0644 .ssh/authorized_keys
sudo chown <newuser> ~/.ssh -R
sudo nano .ssh/authorized_keys // I paste the pub key here and exit
// saving the file
sudo nano /etc/ssh/sshd.config
// in this file I put:
Port 222
PubKeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys
PermitEmptyPasswords no
PermitRootLogin yes // for the moment
// I exit, saving the file
sudo systemctl restart sshd.service // restart the service
exit
// now on my local Ubuntu machine
ssh -p 222 <newuser>@46.101.46.71
// the response from the server is:
debug1: connect to address 46.101.46.71 port 222: Resource temporarily unavailable
// it's the same when I try to connect as root
ssh -p 222 root@46.101.46.71
Can you help me figure out what's happening?
]]>What is the default password without a terminal? Or what is the password?
]]>Here is how you could do that!
]]>SSH, or Secure Shell, is an encrypted protocol used to administer and communicate with servers. When working with a Linux server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
While there are a few different ways of logging in to an SSH server, in this guide we'll focus on setting up SSH keys. SSH keys provide an easy, extremely secure way of logging in to your server. For this reason, this is the method we recommend for all users.
An SSH server can authenticate clients using a variety of different methods. The most basic of these is password authentication, which is easy to use, but not the most secure.
Although passwords are sent to the server in a secure manner, they are generally not complex or long enough to be resistant to repeated, persistent attackers. Modern processing power combined with automated scripts makes brute-forcing a password-protected account very possible. Although there are other methods of adding additional security (fail2ban, etc.), SSH keys prove to be a reliable and secure alternative.
SSH key pairs are two cryptographically secure keys that can be used to authenticate a client to an SSH server. Each key pair consists of a public key and a private key.
The private key is retained by the client and should be kept absolutely secret. Any compromise of the private key will allow an attacker to log in to servers that are configured with the associated public key without additional authentication. As an additional precaution, the key can be encrypted on disk with a passphrase.
The associated public key can be shared freely without any negative consequences. The public key can be used to encrypt messages that only the private key can decrypt. This property is employed as a way of authenticating using the key pair.
The public key is uploaded to a remote server that you want to be able to log in to with SSH. The key is added to a special file within the user account you will be logging in to, called ~/.ssh/authorized_keys.
When a client attempts to authenticate using SSH keys, the server can test whether the client is in possession of the private key. If the client can prove that it owns the private key, a shell session is spawned or the requested command is executed.
The first step to configuring SSH key authentication for your server is to generate an SSH key pair on your local computer.
To do this, we can use a special utility called ssh-keygen, which is included with the standard OpenSSH suite of tools. By default, this will create a 2048-bit RSA key pair, which is fine for most uses.
On your local computer, generate an SSH key pair by typing:
ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/username/.ssh/id_rsa):
The utility will prompt you to select a location for the keys that will be generated. By default, the keys will be stored in the ~/.ssh directory within your user's home directory. The private key will be called id_rsa and the associated public key will be called id_rsa.pub.
Usually, it is best to stick with the default location at this stage. Doing so will allow your SSH client to automatically find your SSH keys when attempting to authenticate. If you would like to choose a non-standard path, type it in now; otherwise, press ENTER to accept the default.
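If you do choose a non-standard path, you can point the SSH client at it explicitly with the -i flag when you connect (the key path here is hypothetical):
- ssh -i ~/.ssh/my_custom_key username@remote_host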
If you had previously generated an SSH key pair, you may see a prompt that looks like this:
/home/username/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will no longer be able to authenticate using the previous key. Be very careful when confirming this operation, as it is a destructive process that cannot be reversed.
Created directory '/home/username/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Next, you will be prompted to enter a passphrase for the key. This is an optional passphrase that can be used to encrypt the private key file on disk.
You may be wondering what advantages an SSH key provides if you still need to enter a passphrase. Some of the advantages are:
- Since the private key is never exposed to the network and is protected through file permissions, this file should never be accessible to anyone other than you (and the root user). The passphrase serves as an additional layer of protection in case these conditions are compromised.
- A passphrase is an optional addition. If you enter one, you will have to provide it every time you use this key (unless you are running SSH agent software that stores the decrypted key). We recommend using a passphrase, but if you do not want to set one, you can simply press ENTER to bypass this prompt.
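To avoid retyping the passphrase in every session, you can load the decrypted key into the OpenSSH agent; both commands below are part of standard OpenSSH, and the key path assumes the default location:
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa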
Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/id_rsa.pub.
The key fingerprint is:
a9:49:2e:2a:5e:33:3e:a9:de:4e:77:11:58:b6:90:26 username@remote_host
The key's randomart image is:
+--[ RSA 2048]----+
| ..o |
| E o= . |
| o. o |
| .. |
| ..S |
| o o. |
| =o.+. |
|. =++.. |
|o=++. |
+-----------------+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH key-based authentication to log in.
If you are starting up a new DigitalOcean server, you can automatically embed your SSH public key in the root account of your new server.
At the bottom of the Droplet creation page, there is an option to add SSH keys to your server:
If you have already added a public key file to your DigitalOcean account, you will see it here as a selectable option (there are two existing keys in the example above: “Work key” and “Home key”). To embed an existing key, simply click on it and it will become highlighted. You can embed multiple keys on a single server:
If you do not yet have a public SSH key uploaded to your account, or if you would like to add a new key to your account, click the “+ Add SSH Key” button. This will expand into a prompt:
Paste your SSH public key into the “SSH Key content” box. Assuming you generated your keys using the method above, you can obtain the contents of your public key on your local computer by typing:
cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNqqi1mHLnryb1FdbePrSZQdmXRZxGZbo0gTfglysq6KMNUNY2VhzmYN9JYW39yNtjhVxqfW6ewc+eHiL+IRRM1P5ecDAaL3V0ou6ecSurU+t9DR4114mzNJ5SqNxMgiJzbXdhR+j55GjfXdk0FyzxM3a5qpVcGZEXiAzGzhHytUV51+YGnuLGaZ37nebh3UlYC+KJev4MYIVww0tWmY+9GniRSQlgLLUQZ+FcBUjaqhwqVqsHe4F/woW1IHe7mfm63GXyBavVc+llrEzRbMO111MogZUcoWDI9w7UIm8ZOTnhJsk7jhJzG2GpSXZHmly/a/buFaaFnmfZ4MYPkgJD username@example.com
Paste this value, in its entirety, into the larger box. In the “Comment (optional)” box, you can choose a label for the key. This will be displayed as the key name in the DigitalOcean interface:
When you create your Droplet, the public SSH keys that you selected will be placed in the ~/.ssh/authorized_keys file of the root user's account. This will allow you to log into the server from the computer that holds your private key.
If you already have a server available and did not embed keys upon creation, you can still upload your public key and use it to authenticate to your server.
The method you use depends largely on the tools you have available and the details of your current configuration. The following methods all yield the same end result. The easiest, most automated method is first, and the methods that follow each require additional manual steps if you are unable to use the preceding ones.
The simplest way to copy your public key to an existing server is to use a utility called ssh-copy-id. Because of its simplicity, this method is recommended if available.
The ssh-copy-id tool is included in the OpenSSH packages of many distributions, so you may already have it available on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you simply need to specify the remote host that you would like to connect to and the user account that you have password-based SSH access to. This is the account to which your public SSH key will be copied.
The syntax is:
ssh-copy-id username@remote_host
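By default the utility picks up your id_rsa.pub key; if your key is stored somewhere other than the default location, you can name it explicitly with the -i flag (a standard ssh-copy-id option):
ssh-copy-id -i ~/.ssh/id_rsa.pub username@remote_host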
You may see a message like this:
The authenticity of host '111.111.11.111 (111.111.11.111)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This just means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER to continue.
Next, the utility will scan your local account for the id_rsa.pub key that we created earlier. When it finds the key, it will prompt you for the password of the remote user's account:
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@111.111.11.111's password:
Type in the password (your typing will not be displayed, for security purposes) and press ENTER. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub key into a file called authorized_keys in the remote account's ~/.ssh directory.
You will see output that looks like this:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@111.111.11.111'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub key has been uploaded to the remote account. You can continue on to the next section.
If you do not have ssh-copy-id available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.
We can do this by outputting the content of our public SSH key on our local computer and piping it through an SSH connection to the remote server. On the other side, we can make sure that the ~/.ssh directory exists under the account we are using, and then output the content we piped over into a file called authorized_keys within this directory.
We will use the >> redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying previously added keys.
The full command looks like this:
cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see a message like this:
The authenticity of host '111.111.11.111 (111.111.11.111)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This just means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER to continue.
Afterwards, you will be prompted for the password of the account you are attempting to connect to:
username@111.111.11.111's password:
After entering your password, the content of your id_rsa.pub key will be copied to the end of the authorized_keys file of the remote user's account. Continue to the next section if this was successful.
If you do not have password-based SSH access to your server available, you will have to complete the above process manually.
The content of your id_rsa.pub file will have to be added to a file at ~/.ssh/authorized_keys on your remote machine somehow.
To display the content of your id_rsa.pub key, type this on your local computer:
cat ~/.ssh/id_rsa.pub
You will see the key's content, which should look something like this:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test
Access your remote host using whatever method you have available. For example, if your server is a DigitalOcean Droplet, you can log in using the web console in the control panel:
Once you have access to your account on the remote server, you should make sure the ~/.ssh directory exists. This command will create the directory if necessary, or do nothing if it already exists:
mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys file within this directory. You can add the contents of your id_rsa.pub file to the end of the authorized_keys file, creating it if necessary, using this command:
echo public_key_string >> ~/.ssh/authorized_keys
In the command above, substitute public_key_string with the output of the cat ~/.ssh/id_rsa.pub command that you ran on your local system. It should start with ssh-rsa AAAA....
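It is also worth making sure that the SSH directory and the key file are not readable by other users; a commonly recommended tightening step, using standard chmod permissions, is:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys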
If this works, you can move on to authenticating without a password.
If you successfully completed one of the procedures above, you should be able to log into the remote host without the remote account's password.
The basic process is the same:
ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
The authenticity of host '111.111.11.111 (111.111.11.111)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This just means that your local computer does not recognize the remote host. Type “yes” and press ENTER to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created it, you will be prompted to enter it now. Afterwards, a new shell session should be spawned for you with the account on the remote system.
If successful, continue on to find out how to lock down the server.
If you were able to log into your account using SSH without a password, you have successfully configured SSH key-based authentication for your account. However, your password-based authentication mechanism is still active, which means your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH key-based authentication configured for the root account on this server or, preferably, SSH key-based authentication configured for a non-root account on this server with sudo access. This step will lock down password-based logins, so it is crucial to ensure that you will still be able to get administrative access.
Once the above conditions are met, log into your remote server with SSH keys, either as root or with an account that has sudo privileges. Open the SSH daemon's configuration file:
sudo nano /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication. It may be commented out. Uncomment the line and set the value to “no”. This will disable your ability to log in via SSH using account passwords:
PasswordAuthentication no
Save and close the file when you are finished. To actually implement the changes we just made, you must restart the service.
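Before restarting, you can optionally ask the daemon to validate the edited file; sshd's -t flag runs a configuration test and prints nothing when the file is valid:
sudo sshd -t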
On Ubuntu or Debian machines, you can issue this command:
sudo service ssh restart
On CentOS/Fedora machines, the daemon is called sshd:
sudo service sshd restart
After completing this step, you have successfully transitioned your SSH daemon to respond only to SSH keys.
SSH key-based authentication should now be configured and running on your server, allowing you to log in without providing an account password. From here, there are many directions you can head in. If you would like to learn more about working with SSH, take a look at our guide on SSH essentials.
A website’s performance depends partially on the size of all the files that a user’s browser must download. Reducing the size of those transmitted files can make your website faster. It can also make your website cheaper for those who pay for their bandwidth usage on metered connections.
gzip is a popular data compression program. You can configure Nginx to use gzip to compress the files it serves on the fly. Those files are then decompressed by the browsers that support it upon retrieval with no loss whatsoever, but with the benefit of a smaller amount of data to transfer between the web server and browser. The good news is that compression support is ubiquitous among all major browsers, and there is no reason not to use it.
Because of the way compression works in general and the way gzip works, certain files compress better than others. For example, text files compress very well, often ending up over two times smaller. On the other hand, images such as JPEG or PNG files are already compressed by their nature, and a second pass of compression using gzip yields little or no benefit. Compressing files uses up server resources, so it is best to compress only those files that will benefit from the size reduction.
In this tutorial, you will configure Nginx to use gzip compression. This will reduce the size of content sent to your website’s visitors and improve performance.
To follow this tutorial, you will need:
- One Ubuntu 20.04 server with a regular, non-root user with sudo privileges. You can learn how to prepare your server by following this initial server setup tutorial.
- Nginx installed on your server by following our tutorial, How To Install Nginx on Ubuntu 20.04.
In this step, we will create several test files in the default Nginx directory. We’ll use these files later to check Nginx’s default behavior regarding gzip compression and to test that the configuration changes have the intended effect.
To infer what kind of file is served over the network, Nginx does not analyze the file contents; that would be prohibitively slow. Instead, it looks up the file extension to determine the file’s MIME type, which denotes its purpose.
Because of this behavior, the content of our test files is irrelevant. By naming the files appropriately, we can trick Nginx into thinking that, for example, one entirely empty file is an image and another is a stylesheet.
Create a file named test.html in the default Nginx directory using truncate. This extension denotes that it’s an HTML page:
- sudo truncate -s 1k /var/www/html/test.html
Let’s create a few more test files in the same manner: one jpg image file, one css stylesheet, and one js JavaScript file:
- sudo truncate -s 1k /var/www/html/test.jpg
- sudo truncate -s 1k /var/www/html/test.css
- sudo truncate -s 1k /var/www/html/test.js
The next step is to check how Nginx behaves with respect to compressing requested files on a fresh installation with the files we have just created.
Let’s check if the HTML file named test.html is served with compression. The command requests a file from our Nginx server and specifies that it is fine to serve gzip compressed content by using an HTTP header (Accept-Encoding: gzip):
- curl -H "Accept-Encoding: gzip" -I http://localhost/test.html
In response, you should see several HTTP response headers:
OutputHTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:04:25 GMT
Content-Type: text/html
Last-Modified: Tue, 09 Feb 2021 19:03:41 GMT
Connection: keep-alive
ETag: W/"6022dc8d-400"
Content-Encoding: gzip
In the last line, you can see the Content-Encoding: gzip header. This tells us that gzip compression was used to send this file. That’s because Nginx has gzip compression enabled automatically, even on a fresh Ubuntu 20.04 installation.
However, by default, Nginx compresses only HTML files. Every other file will be served uncompressed, which is less than optimal. To verify that, you can request our test image named test.jpg in the same way:
- curl -H "Accept-Encoding: gzip" -I http://localhost/test.jpg
The result should be slightly different than before:
OutputHTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:05:49 GMT
Content-Type: image/jpeg
Content-Length: 1024
Last-Modified: Tue, 09 Feb 2021 19:03:45 GMT
Connection: keep-alive
ETag: "6022dc91-400"
Accept-Ranges: bytes
There is no Content-Encoding: gzip header in the output, which means the file was served without any compression.
You can repeat the test with the test CSS stylesheet:
- curl -H "Accept-Encoding: gzip" -I http://localhost/test.css
Once again, there is no mention of compression in the output:
OutputHTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:06:04 GMT
Content-Type: text/css
Content-Length: 1024
Last-Modified: Tue, 09 Feb 2021 19:03:45 GMT
Connection: keep-alive
ETag: "6022dc91-400"
Accept-Ranges: bytes
In the next step, we’ll tell Nginx to compress all sorts of files that will benefit from using gzip.
To change the Nginx gzip configuration, open the main Nginx configuration file in nano or your favorite text editor:
- sudo nano /etc/nginx/nginx.conf
Find the gzip settings section, which looks like this:
. . .
##
# `gzip` Settings
#
#
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
. . .
You can see that gzip compression is indeed enabled by the gzip on directive, but several additional settings are commented out with the # sign and have no effect. We’ll make several changes to this section:
- Uncomment the commented-out settings (removing the # at the beginning of each line)
- Add the gzip_min_length 256; directive, which tells Nginx not to compress files smaller than 256 bytes. Very small files barely benefit from compression.
- Extend the gzip_types directive with additional file types denoting web fonts, icons, XML feeds, JSON structured data, and SVG images.
After these changes have been applied, the settings section should look like this:
. . .
##
# `gzip` Settings
#
#
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types
application/atom+xml
application/geo+json
application/javascript
application/x-javascript
application/json
application/ld+json
application/manifest+json
application/rdf+xml
application/rss+xml
application/xhtml+xml
application/xml
font/eot
font/otf
font/ttf
image/svg+xml
text/css
text/javascript
text/plain
text/xml;
. . .
Save and close the file to exit.
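Before restarting, you can ask Nginx to validate the edited configuration; the -t flag checks the syntax without applying anything:
- sudo nginx -t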
To enable the new configuration, restart Nginx:
- sudo systemctl restart nginx
Next, let’s make sure our new configuration works.
Execute the same request as before for the test HTML file:
- curl -H "Accept-Encoding: gzip" -I http://localhost/test.html
The response will stay the same since compression has already been enabled for that filetype:
OutputHTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:04:25 GMT
Content-Type: text/html
Last-Modified: Tue, 09 Feb 2021 19:03:41 GMT
Connection: keep-alive
ETag: W/"6022dc8d-400"
Content-Encoding: gzip
However, if we request the previously uncompressed CSS stylesheet, the response will be different:
- curl -H "Accept-Encoding: gzip" -I http://localhost/test.css
Now gzip is compressing the file:
OutputHTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:21:54 GMT
Content-Type: text/css
Last-Modified: Tue, 09 Feb 2021 19:03:45 GMT
Connection: keep-alive
Vary: Accept-Encoding
ETag: W/"6022dc91-400"
Content-Encoding: gzip
Of all the test files created in step 1, only the test.jpg image file should stay uncompressed. We can test this the same way:
- curl -H "Accept-Encoding: gzip" -I http://localhost/test.jpg
There is no gzip compression:
OutputHTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 09 Feb 2021 19:25:40 GMT
Content-Type: image/jpeg
Content-Length: 1024
Last-Modified: Tue, 09 Feb 2021 19:03:45 GMT
Connection: keep-alive
ETag: "6022dc91-400"
Accept-Ranges: bytes
Here the Content-Encoding: gzip header is not present in the output, as expected.
If that is the case, you have configured gzip compression in Nginx successfully.
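If you would like to verify the size savings as well as the headers, you can compare the byte counts of the compressed and uncompressed responses using standard curl and wc; the exact numbers will depend on the file contents:
- curl -s -H "Accept-Encoding: gzip" http://localhost/test.css | wc -c
- curl -s http://localhost/test.css | wc -c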
Changing the Nginx configuration to utilize gzip compression is easy, but the benefits can be immense. Not only will visitors with limited bandwidth receive the site faster, but all other users will also see noticeable speed gains. Search engines will be happy about the site loading quicker too. Loading speed is now a crucial metric in how search engines rank websites, and using gzip is one big step toward improving it.
My / partition was running very low on disk space, but I also had an additional disk mounted at /home/ with plenty of disk space.
However, as Docker by default stores everything at /var/lib/docker, my / partition was nearly full.
To fix that, I moved the default /var/lib/docker to another directory on the /home partition.
In case you are in the same situation, here is how to do that on a Linux server!
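The detailed steps do not survive in this excerpt, so the following is only a minimal sketch of a commonly used approach; it assumes the new location is /home/docker (a placeholder path) and relies on Docker's documented data-root daemon option:
- sudo systemctl stop docker
- sudo mkdir -p /home/docker
- sudo rsync -a /var/lib/docker/ /home/docker/
Then point the daemon at the new directory by creating or editing /etc/docker/daemon.json:
{
"data-root": "/home/docker"
}
- sudo systemctl start docker
Once Docker starts cleanly and your containers and images are confirmed to be in place, the old /var/lib/docker directory can be removed.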
A Linux container is a set of processes that is separated from the rest of the system. To the end user, a Linux container functions as a virtual machine, but it’s much more lightweight. You don’t have the overhead of running an additional Linux kernel, and the containers don’t require any CPU hardware virtualization support. This means you can create more containers than virtual machines on the same server.
Imagine that you have a server that should run multiple web sites for your customers. On the one hand, each web site could be a virtual host/server block of the same instance of the Apache or Nginx web server. On the other hand, when using virtual machines, you would create a separate nested virtual machine for each website. Linux containers sit somewhere between virtual hosts and virtual machines.
LXD lets you create and manage these containers. LXD provides a hypervisor service to manage the entire life cycle of containers. In this tutorial, you’ll configure LXD and use it to run Nginx in a container. You’ll then route traffic from the internet to the container to make a sample web page accessible.
To complete this tutorial, you’ll need the following:
- A server running Ubuntu 18.04. To set up a server, including a non-root sudo user and a firewall, you can create a DigitalOcean Droplet running Ubuntu 18.04 and then follow our Initial Server Setup Guide. Note your server’s public IP address. We will refer to it later as your_server_ip.
- At least 5GB of block storage. To set this up, you can follow DigitalOcean’s Block Storage Volumes Quickstart. In the configuration of the Block Storage, select Manually Format & Mount in order to allow LXD to prepare it as required. You will use this to store all data related to the containers.
Note: LXD is pre-installed in Ubuntu 18.04, and the installed LXD package is a deb package. But beginning with Ubuntu 20.04, newer versions of LXD are now only available as snap packages.
Therefore, Ubuntu 18.04 is the last Ubuntu version that has LXD as a deb package. This LXD deb package has standard support until 2023 and End of Life in 2028. See the table below to help you decide on the package format.
Feature | deb package | snap package
---|---|---
available LXD versions | 3.0 | 2.0, 3.0, 4.0, 4.x
memory requirements | minimal | moderate, for snapd service
upgrade considerations | you can decide not to upgrade LXD | can defer LXD upgrade up to 60 days
ability to switch from the other package format | not supported | can switch from deb to snap
Follow the rest of this tutorial to use LXD from the deb package in Ubuntu 18.04. If, however, you want to use the LXD snap package in Ubuntu 18.04, see [TODO-TUTORIAL-FOR-LXD-IN-UBUNTU-20.04].
LXD is available as a deb package in Ubuntu 18.04. It comes pre-installed, but you must configure it before you can use it. LXD is composed of the LXD service and the default client utility that helps you configure the service. This client utility is lxc. The client utility can access the LXD service if you either run it as root, or if your non-root account is a member of the lxd Unix group. In the following, we show how to add your non-root user account to the lxd Unix group and then continue with the configuration of the storage backend.
When setting up your non-root account, add it to the lxd group using the following command. The adduser command takes as arguments the user account and the Unix group in order to add the user account to the existing Unix group:
- sudo adduser sammy lxd
Now apply the new membership:
- su sammy
Enter your password and press ENTER.
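Alternatively, if you prefer not to switch users with su, the standard newgrp utility starts a subshell with the new group membership already active:
- newgrp lxd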
Finally, confirm that your user is now added to the lxd group:
- id -nG
You will receive an output like this:
- sammy sudo lxd
Now you are ready to continue configuring LXD.
To begin, you will configure the storage backend.
The recommended storage backend for LXD when you run it on Ubuntu is the ZFS filesystem. ZFS also works very well with DigitalOcean Block Storage. To enable ZFS support in LXD, first update your package list and then install the zfsutils-linux auxiliary package:
- sudo apt update
- sudo apt install -y zfsutils-linux
We are almost ready to run the LXD initialization script.
Before you do, you must identify and take a note of the device name for your block storage.
To do so, use ls to check the /dev/disk/by-id/ directory:
- ls -l /dev/disk/by-id/
In this specific example, the full path of the device name is /dev/disk/by-id/scsi-0DO_Volume_volume-fra1-0:
Outputtotal 0
lrwxrwxrwx 1 root root 9 Sep 16 20:30 scsi-0DO_Volume_volume-fra1-0 -> ../../sda
Note down the full file path for your storage device. You will use it in the following subsection.
You are now ready to initialize LXD. Initialize LXD using the sudo lxd init command:
- sudo lxd init
A prompt will appear. The next two sections will walk you through the appropriate answers to each question.
First, the program will ask if you want to enable LXD clustering. For the purposes of this tutorial, press ENTER to accept the default no, or type no and then press ENTER. LXD clustering is an advanced topic that enables high availability for your LXD setup and requires at least three LXD servers running in a cluster:
OutputWould you like to use LXD clustering? (yes/no) [default=no]: no
The next six prompts deal with the storage pool. Give the following responses:
- Press ENTER to configure a new storage pool.
- Press ENTER to accept the default storage pool name.
- Press ENTER to accept the default zfs storage backend.
- Press ENTER to create a new ZFS pool.
- Type yes to use an existing block device.
- Enter the full path of the existing block device (for example, /dev/disk/by-id/device_name).
Your answers will look like the following:
OutputDo you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]: default
Name of the storage backend to use (btrfs, dir, lvm, zfs) [default=zfs]: zfs
Create a new ZFS pool? (yes/no) [default=yes]: yes
Would you like to use an existing block device? (yes/no) [default=no]: yes
Path to the existing block device: /dev/disk/by-id/scsi-0DO_Volume_volume-fra1-01
You have now configured the storage backend for LXD. Continuing with LXD’s init script, you will now configure some networking options.
LXD now asks whether you want to connect to a MAAS (Metal As A Server) server. MAAS is software that makes a bare-metal server appear as, and be handled as if, a virtual machine.
We are running LXD in standalone mode, so accept the default and answer no:
OutputWould you like to connect to a MAAS server? (yes/no) [default=no]: no
You are then asked to configure a network bridge for LXD containers, which lets each container obtain an IP address and communicate over the network.
When asked to create a new local network bridge, choose yes:
OutputWould you like to create a new local network bridge? (yes/no) [default=yes]: yes
Then accept the default name, lxdbr0:
OutputWhat should the new bridge be called? [default=lxdbr0]: lxdbr0
Accept the automated selection of private IP address range for the bridge:
OutputWhat IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: auto
Finally, LXD asks the following miscellaneous questions:
When asked if you want to manage LXD over the network, press ENTER or answer no:
OutputWould you like LXD to be available over the network? (yes/no) [default=no]: no
When asked if you want to update stale container images automatically, press ENTER or answer yes:
OutputWould you like stale cached images to be updated automatically? (yes/no) [default=yes] yes
When asked if you want to view and keep the YAML configuration you just created, answer yes if you do. Otherwise, press ENTER or answer no:
OutputWould you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: no
You have now configured your network and storage options for LXD. Next you will create your first LXD container.
Now that you have successfully configured LXD, you are ready to create and manage your first container. In LXD, you manage containers using the lxc command followed by an action, such as list, launch, start, stop and delete.
Use lxc list to view the available installed containers:
- lxc list
Since this is the first time that the lxc command communicates with the LXD hypervisor, it shows some information about how to launch a container. Finally, the command shows an empty list of containers. This is expected because we haven’t created any yet:
Output of the "lxc list" commandTo start your first container, try: lxc launch ubuntu:18.04
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
Now create a container that runs Nginx. To do so, first use the lxc launch command to create and start an Ubuntu 18.04 container named webserver.
Create the webserver container. The 18.04 in ubuntu:18.04 is a shortcut for Ubuntu 18.04. ubuntu: is the identifier for the preconfigured repository of LXD images. You could also use ubuntu:bionic for the image name:
- lxc launch ubuntu:18.04 webserver
Note: You can find the full list of all available Ubuntu images by running lxc image list ubuntu: and of other Linux distributions by running lxc image list images:. Both ubuntu: and images: are repositories of container images. For each container image, you can get more information with the command lxc image info ubuntu:18.04. While we launch a container with Ubuntu 18.04 for the purposes of this tutorial, you may select any available Ubuntu version for your own projects.
Because this is the first time you’ve created a container, this command downloads the container image from the internet and caches it. You’ll see this output once your new container finishes downloading:
OutputCreating webserver
Starting webserver
With the webserver container started, use the lxc list command to show information about it. We added --columns ns4 in order to show only the columns for name, state, and IPv4 address. The default lxc list command shows three more columns: the IPv6 address, whether the container is persistent or ephemeral, and whether there are snapshots available for each container:
- lxc list --columns ns4
The output shows a table with the name of each container, its current state, its IP address, and its type:
Output+-----------+---------+------------------------------------+
| NAME | STATE | IPV4 |
+-----------+---------+------------------------------------+
| webserver | RUNNING | your_webserver_container_ip (eth0) |
+-----------+---------+------------------------------------+
LXD’s DHCP server provides this IP address, and in most cases it will remain the same even if the server is rebooted. However, in the following steps you will create iptables rules to forward connections from the internet to the container. Therefore, you should instruct LXD’s DHCP server to always give the same IP address to the container.
The following set of commands will configure the container to obtain a static IP assignment. First, you will override the network configuration for the eth0 device that is inherited from the default LXD profile. This allows you to set a static IP address, which ensures proper communication of web traffic into and out of the container.
Specifically, lxc config device is a command that performs the config action to configure a device. The first line has the sub-action override to override the device eth0 of the container webserver. The second line has the sub-action set, which sets the ipv4.address field of the eth0 device of the webserver container to the IP address that was given by the DHCP server in the beginning.
Run the first config command:
- lxc config device override webserver eth0
You will receive an output like this:
OutputDevice eth0 overridden for webserver
Now set the static IP:
- lxc config device set webserver eth0 ipv4.address your_webserver_container_ip
If the command is successful you will receive no output.
Restart the container:
- lxc restart webserver
Now check the status of the container:
- lxc list
You should see that the container is RUNNING and the IPV4 address is your static address.
You are ready to install and configure Nginx inside the container.
In this step you will connect to the webserver container and configure the web server.
Connect to the container with the lxc shell command, which takes the name of the container and starts a shell inside it:
- lxc shell webserver
Once inside the container, your shell prompt will change to indicate that you are now the root user inside the webserver container.
This shell, even if it is a root shell, is limited to the container. Anything that you run in this shell stays in the container and cannot escape to the host server.
Note: When getting a shell into a container, you may see a warning such as mesg: ttyname failed: No such device. This message is produced when the shell in the container tries to run the command mesg from the configuration file /root/.profile. You can safely ignore it. To avoid seeing it, you may remove the command mesg n || true from /root/.profile.
Once inside your container, update the package list and install Nginx:
- apt update
- apt install nginx
With Nginx installed, you will now edit the default Nginx web page. Specifically, you will add two lines of text so that it is clear that this site is hosted inside the webserver container.
Using nano or your preferred editor, open the file /var/www/html/index.nginx-debian.html:
- nano /var/www/html/index.nginx-debian.html
Add the two highlighted phrases to the file:
/var/www/html/index.nginx-debian.html<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on LXD container webserver!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx on LXD container webserver!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...
You have edited the file in two places and specifically added the text on LXD container webserver. Save the file and exit your text editor.
Now log out of the container:
- logout
Once the server’s default prompt returns, use curl to test that the web server in the container is working. To do this, you’ll need the IP address of the web container, which you found using the lxc list command earlier.
Use curl to test your web server:
- curl http://your_webserver_container_ip
You will receive the Nginx default HTML welcome page as output. Note that it includes your edits:
Output<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on LXD container webserver!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx on LXD container webserver!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...
The web server is working, but you can only access it from the host using the private IP. In the next step, you will route external requests to this container so the world can access your website through the internet.
Now that you have configured Nginx, it's time to connect the webserver container to the internet. To begin, you need to set up the server to forward any connections that it receives on port 80 to the webserver container. To do this, you'll create an iptables rule to forward network connections. You can learn more about iptables in our tutorials, How the IPtables Firewall Works and IPtables Essentials: Common Firewall Rules and Commands.
This iptables command requires two IP addresses: the public IP address of the server (your_server_ip) and the private IP address of the webserver container (your_webserver_container_ip), which you can obtain using the lxc list command.
Execute this command to create a new iptables rule:
- PORT=80 PUBLIC_IP=your_server_ip CONTAINER_IP=your_webserver_container_ip IFACE=eth0 sudo -E bash -c 'iptables -t nat -I PREROUTING -i $IFACE -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'
Let’s study that command:
-t nat specifies that we're using the nat table for address translation.
-I PREROUTING specifies that we're adding the rule to the PREROUTING chain.
-i $IFACE specifies the interface eth0, which is the default public network interface on the host for Droplets.
-p TCP says we're using the TCP protocol.
-d $PUBLIC_IP specifies the destination IP address for the rule.
--dport $PORT specifies the destination port (such as 80).
-j DNAT says that we want to perform a jump to Destination NAT (DNAT).
--to-destination $CONTAINER_IP:$PORT says that we want the request to go to the IP address of the specific container and the destination port.
Note: You can reuse this command to set up additional forwarding rules. Just set new values for the PORT, PUBLIC_IP, CONTAINER_IP, and IFACE variables at the start of the line, as in the example below.
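For example, a hypothetical rule that forwards HTTPS traffic on port 443 to the same container would look like this (all values besides the port are the same placeholders as above):
- PORT=443 PUBLIC_IP=your_server_ip CONTAINER_IP=your_webserver_container_ip IFACE=eth0 sudo -E bash -c 'iptables -t nat -I PREROUTING -i $IFACE -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'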
Now list your iptables rules:
- sudo iptables -t nat -L PREROUTING
You’ll see output like this:
OutputChain PREROUTING (policy ACCEPT)
target prot opt source destination
DNAT tcp -- anywhere your_server_ip tcp dpt:http /* forward to the Nginx container */ to:your_webserver_container_ip:80
...
Now test that the web server is accessible from the internet. Use the curl command from your local machine to test the connection:
- curl --verbose 'http://your_server_ip'
You’ll see the headers followed by the contents of the web page you created in the container:
Output* Trying your_server_ip...
* Connected to your_server_ip (your_server_ip) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.10.0 (Ubuntu)
...
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on LXD container webserver!</title>
<style>
body {
...
This confirms that the requests are going to the container.
Finally, you will save the firewall rule so that it is reapplied after a reboot.
To do so, first install the iptables-persistent package:
- sudo apt install iptables-persistent
When installing the package, the application will prompt you to save the current firewall rules. Accept and save all current rules.
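If you want to confirm that the rules were written to disk, you can inspect the file where iptables-persistent stores IPv4 rules on Ubuntu (the path below is the package default):
- sudo cat /etc/iptables/rules.v4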
When you reboot your machine, the firewall rule will load. In addition, the Nginx service in your LXD container will automatically restart.
You've successfully configured LXD. In the final step, you will learn how to stop and remove the container.
You may decide that you want to take down the container and delete it. In this step you will stop and remove your container.
First, stop the container:
- lxc stop webserver
Use the lxc list command to verify the status:
- lxc list
You will see that the container's state reads STOPPED:
Output+-----------+---------+------+------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-----------+---------+------+------+------------+-----------+
| webserver | STOPPED | | | PERSISTENT | 0 |
+-----------+---------+------+------+------------+-----------+
To remove the container, use lxc delete:
- lxc delete webserver
Running lxc list again shows that there's no container running:
- lxc list
The command will output the following:
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
Use the lxc help command to see additional options.
To remove the firewall rule that routes traffic to the container, first locate the rule in the list of rules with this command, which associates a line number with each rule:
- sudo iptables -t nat -L PREROUTING --line-numbers
You’ll see your rule, prefixed with a line number, like this:
OutputChain PREROUTING (policy ACCEPT)
num target prot opt source destination
1 DNAT tcp -- anywhere your_server_ip tcp dpt:http /* forward to the Nginx container */ to:your_webserver_container_ip:80
Use that line number to remove the rule:
- sudo iptables -t nat -D PREROUTING 1
List the rules again to ensure removal:
- sudo iptables -t nat -L PREROUTING --line-numbers
The rule is removed:
OutputChain PREROUTING (policy ACCEPT)
num target prot opt source destination
Now save the changes so that the rule doesn’t come back when you restart your server:
- sudo netfilter-persistent save
You can now bring up another container with your own settings and add a new firewall rule to forward traffic to it.
In this tutorial, you installed and configured LXD. You then created a website using Nginx running inside an LXD container and made it publicly available using iptables.
From here, you could configure more websites, each confined to its own container, and use a reverse proxy to direct traffic to the appropriate container. The tutorial How to Host Multiple Web Sites with Nginx and HAProxy Using LXD on Ubuntu 16.04 walks you through that setup.
See the LXD reference documentation for more information on how to use LXD.
To practice with LXD, you can try LXD online and follow the web-based tutorial.
To get user support on LXD, visit the LXD discussion forum.
Does anyone have a command on hand which could show the remote URL of a specific local git repository?
I usually use the cat command to check the content of the .git/config file and look for the remote origin section in there.
But is there a better way of doing this?
Thanks!
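For reference, two commands that typically print the remote URL directly (assuming the remote is named origin):
- git config --get remote.origin.url
- git remote -v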
[77874.518094] xiccd[66634]: segfault at e4 ip 0000555677e9b4f6 sp 00007ffcd9acab10 error 4 in xiccd[555677e97000+5000]
[77874.518103] Code: 00 eb 91 90 48 89 ef e8 e8 f8 ff ff eb c5 e8 01 cc ff ff 90 f3 0f 1e fa 55 53 48 89 fb 48 83 ec 08 e8 ce f8 ff ff 48 8b 7b 08 <8b> 97 e4 00 00 00 85 d2 7e 34 31 ed 66 0f 1f 44 00 00 48 89 e8 ba
[77875.096547] traps: light-locker[66653] trap int3 ip:7f630c02e0d5 sp:7fff171b8fa0 error:0 in libglib-2.0.so.0.6400.3[7f630bff2000+84000]
journalctl:
Jan 16 04:16:55 do at-spi-bus-launcher[66249]: X connection to :1 broken (explicit kill or server shutdown).
Jan 16 04:16:55 do xfce4-notifyd[66294]: xfce4-notifyd: Fatal IO error 11 (Resource temporarily unavailable) on X server :1.
Jan 16 04:16:55 do systemd[56102]: xfce4-notifyd.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 04:16:55 do systemd[56102]: xfce4-notifyd.service: Failed with result 'exit-code'.
Jan 16 04:17:10 do dbus-daemon[66552]: [session uid=1000 pid=66550] AppArmor D-Bus mediation is enabled
Jan 16 04:17:10 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.a11y.Bus' requested by ':1.0' (uid=1000 pid=66548 comm="xfce4-session " label="unconfined")
Jan 16 04:17:10 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.a11y.Bus'
Jan 16 04:17:10 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.xfce.Xfconf' requested by ':1.2' (uid=1000 pid=66548 comm="xfce4-session " label="unconfined")
Jan 16 04:17:10 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.xfce.Xfconf'
Jan 16 04:17:10 do org.a11y.Bus[66559]: dbus-daemon[66559]: Activating service name='org.a11y.atspi.Registry' requested by ':1.0' (uid=1000 pid=66548 comm="xfce4-session " label="unconfined")
Jan 16 04:17:10 do org.a11y.Bus[66559]: dbus-daemon[66559]: Successfully activated service 'org.a11y.atspi.Registry'
Jan 16 04:17:10 do org.a11y.Bus[66569]: SpiRegistry daemon is running with well-known name - org.a11y.atspi.Registry
Jan 16 04:17:11 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.freedesktop.systemd1' requested by ':1.5' (uid=1000 pid=66576 comm="dbus-update-activation-environment --systemd SSH_A" label="unconfined")
Jan 16 04:17:11 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activated service 'org.freedesktop.systemd1' failed: Process org.freedesktop.systemd1 exited with status 1
Jan 16 04:17:11 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gtk.vfs.Daemon' requested by ':1.6' (uid=1000 pid=66581 comm="xfwm4 " label="unconfined")
Jan 16 04:17:11 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gtk.vfs.Daemon'
Jan 16 04:17:12 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.freedesktop.Notifications' requested by ':1.15' (uid=1000 pid=66606 comm="/usr/lib/x86_64-linux-gnu/xfce4/panel/wrapper-2.0 " label="unconfined")
Jan 16 04:17:12 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.freedesktop.Notifications'
Jan 16 04:17:12 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.freedesktop.thumbnails.Thumbnailer1' requested by ':1.11' (uid=1000 pid=66604 comm="xfdesktop " label="unconfined")
Jan 16 04:17:12 do pulseaudio[57899]: XOpenDisplay() failed
Jan 16 04:17:12 do pulseaudio[57899]: Failed to load module "module-x11-publish" (argument: "display=:1.0 xauthority="): initialization failed.
Jan 16 04:17:13 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='ca.desrt.dconf' requested by ':1.20' (uid=1000 pid=66653 comm="light-locker " label="unconfined")
Jan 16 04:17:13 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'ca.desrt.dconf'
Jan 16 04:17:14 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gnome.evolution.dataserver.Sources5' requested by ':1.26' (uid=1000 pid=66643 comm="/usr/libexec/evolution-data-server/evolution-alarm" label="unconfined")
Jan 16 04:17:14 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gnome.OnlineAccounts' requested by ':1.28' (uid=1000 pid=66686 comm="/usr/libexec/evolution-source-registry " label="unconfined")
Jan 16 04:17:14 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gnome.evolution.dataserver.Sources5'
Jan 16 04:17:14 do goa-daemon[66704]: goa-daemon version 3.36.0 starting
Jan 16 04:17:14 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gnome.Identity' requested by ':1.30' (uid=1000 pid=66704 comm="/usr/libexec/goa-daemon " label="unconfined")
Jan 16 04:17:14 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gnome.OnlineAccounts'
Jan 16 04:17:14 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gnome.Identity'
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gnome.evolution.dataserver.Calendar8' requested by ':1.26' (uid=1000 pid=66643 comm="/usr/libexec/evolution-data-server/evolution-alarm" label="unconfined")
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gnome.evolution.dataserver.Calendar8'
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gnome.evolution.dataserver.AddressBook10' requested by ':1.32' (uid=1000 pid=66719 comm="/usr/libexec/evolution-calendar-factory " label="unconfined")
Jan 16 04:17:15 do org.freedesktop.thumbnails.Thumbnailer1[66635]: Registered thumbnailer /usr/bin/gdk-pixbuf-thumbnailer -s %s %u %o
Jan 16 04:17:15 do org.freedesktop.thumbnails.Thumbnailer1[66635]: Registered thumbnailer /usr/bin/gdk-pixbuf-thumbnailer -s %s %u %o
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gnome.evolution.dataserver.AddressBook10'
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gtk.vfs.UDisks2VolumeMonitor' requested by ':1.29' (uid=1000 pid=66635 comm="/usr/lib/x86_64-linux-gnu/tumbler-1/tumblerd " label="unconfined")
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gtk.vfs.UDisks2VolumeMonitor'
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gtk.vfs.GoaVolumeMonitor' requested by ':1.29' (uid=1000 pid=66635 comm="/usr/lib/x86_64-linux-gnu/tumbler-1/tumblerd " label="unconfined")
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gtk.vfs.GoaVolumeMonitor'
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gtk.vfs.AfcVolumeMonitor' requested by ':1.29' (uid=1000 pid=66635 comm="/usr/lib/x86_64-linux-gnu/tumbler-1/tumblerd " label="unconfined")
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gtk.vfs.AfcVolumeMonitor'
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gtk.vfs.GPhoto2VolumeMonitor' requested by ':1.29' (uid=1000 pid=66635 comm="/usr/lib/x86_64-linux-gnu/tumbler-1/tumblerd " label="unconfined")
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gtk.vfs.GPhoto2VolumeMonitor'
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gtk.vfs.MTPVolumeMonitor' requested by ':1.29' (uid=1000 pid=66635 comm="/usr/lib/x86_64-linux-gnu/tumbler-1/tumblerd " label="unconfined")
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gtk.vfs.MTPVolumeMonitor'
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.freedesktop.thumbnails.Thumbnailer1'
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Activating service name='org.gtk.vfs.Metadata' requested by ':1.11' (uid=1000 pid=66604 comm="xfdesktop " label="unconfined")
Jan 16 04:17:15 do dbus-daemon[66552]: [session uid=1000 pid=66550] Successfully activated service 'org.gtk.vfs.Metadata'
My xstartup script looks like this currently:
#!/bin/bash
#/etc/X11/Xsession
#xrdb $HOME/.Xresources
#startxfce4 &
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
dbus-launch xfce4-session
Please let me know if you have any ideas.
I am working on a Bash script and I wanted to change the color of some of the output to emphasize some specific words.
For example, I want to be able to print the text in green when the script is successful and in red when the script fails.
Does anyone know an easy way of doing so?
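For context, a common approach uses ANSI escape sequences; here is a minimal sketch (the variable names are illustrative):
#!/bin/bash
# ANSI escape sequences for green, red, and reset (these codes are standard)
GREEN='\033[0;32m'
RED='\033[0;31m'
RESET='\033[0m'

# -e makes echo interpret the escape sequences
echo -e "${GREEN}The script succeeded${RESET}"
echo -e "${RED}The script failed${RESET}"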
I have created a user in Linux and I'm trying to restrict them from creating subfolders in their home directory, but they should still be able to upload or create files.
Hope someone can help me here.
My objective is to set up a simple OpenVPN account. What is the simplest way to achieve this?
dnf update.
So I followed an earlier suggestion found in this forum to persist the resolv.conf file. My /etc/resolv.conf now contains:
# Generated by NetworkManager
nameserver 8.8.8.8
nameserver 8.8.4.4
And /etc/NetworkManager/NetworkManager.conf contains:
[main]
plugins = ifcfg-rh,
dns=none
Then I restarted NetworkManager:
sudo systemctl restart NetworkManager.service
I tried ping after this. No go. Still not working.
So I rebooted this droplet.
Still the same, ping not working. AND the /etc/resolv.conf file got changed to the default again:
nameserver 127.0.0.53
options edns0 trust-ad
What can I do? Is this related to cloud.init? Something unique in Fedora? Never happened to me in CentOS. RedHat suggests to make resolv.conf a symlink instead, is that something we need to do in Fedora??
Hope that this helps!
The du command is essential!
You can use the du command, which estimates directory space usage.
For example, let's say that we want to check the size of the directories located in the /home directory. To do that, let's first cd into the /home folder:
- cd /home/
And then run the du command:
- du -h --max-depth=1
Output16K ./demo
5.0G ./bobby
28K ./sammy
1.8G ./dev
7.1G .
I like running the du command with the --max-depth=1 argument, which only shows the size of the folders located inside the current directory and not the subfolders.
As you can see, the output is not sorted by size, so here is how to do that!
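A common way to do it is to pipe the du output through sort with human-numeric ordering (-h matches du's human-readable units, -r puts the largest first):
- du -h --max-depth=1 | sort -hr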
We'd be running 2 machines, 1 being Kali Linux and the tools within, and the other would be Windows 10, where we'd be running vulnerability assessment tools like Nessus and Burp Suite. We're wondering if DigitalOcean is right for us.
Currently we’re looking for:
Here is how you could transfer a Docker image from one server to another without pushing the image to an external Docker repository!
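For context, a minimal sketch of the usual registry-free approach pipes docker save into docker load over SSH (the image name and host are placeholders):
- docker save my_image | ssh user@other_server 'docker load'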
If you wanted to see the SSL certificate information for a specific website, you could do that via your browser by clicking on the green padlock and then clicking on Certificate, which would open a modal with all of the information about the SSL certificate, like the Common Names, the Organization that issued the certificate, the expiry date, and so on.
Here’s how to do the same thing via your command line directly!
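For context, a minimal sketch using openssl (replace example.com with the site you want to inspect):
- echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -subject -issuer -dates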
Here is an example of how to generate a Unix timestamp using the date command:
date +%s
This will output a number like this:
1606319820
This number represents the seconds since 00:00:00 UTC on 1 January 1970.
So here's how to convert Unix timestamps to a human-readable date format in Bash!
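With GNU date, the -d flag accepts an @-prefixed epoch value, so converting the timestamp above back to a human-readable date is a one-liner:
- date -d @1606319820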
Here is an example script, where we expect the user to enter a value when asked to do so:
#!/bin/bash
echo "Enter your name: "
read name
echo "Hello ${name}"
If you run the script and just press Enter without entering a value, the output would be just Hello.
Here’s how to check if a variable is empty in a Bash script!
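For context, the standard test is -z, which is true when a string is empty; a minimal sketch extending the script above:
#!/bin/bash
echo "Enter your name: "
read name

# -z is true when the variable has zero length
if [ -z "${name}" ]; then
    echo "No name was entered"
else
    echo "Hello ${name}"
fi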
I need to commit an empty directory to my Git project, but when I create a new directory with:
- mkdir my_dir
And then check the status with:
- git status
Git says that there is nothing to commit, so running git add . does not do anything.
How can I add an empty directory/folder to my Git repository?
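For reference, a common convention is to commit a placeholder file, since Git tracks files rather than directories (the .gitkeep name is just a convention, not a Git feature):
- touch my_dir/.gitkeep
- git add my_dir/.gitkeep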
git checkout -b branch_name and I would realize that I've made a typo, or I would come up with a better name for the branch later on.
If I have just created the branch, it is OK as I can create a new one, but sometimes I would notice this after a couple of commits.
So here’s how you could rename a local Git branch via your command line!
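For context, the rename itself is typically done with the -m flag (the branch names are placeholders); if you are already on the branch, you can omit the old name:
- git branch -m old_branch_name new_branch_name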
The sar command allows you to capture the utilization of your resources like RAM, CPU, disk I/O, and so on.
In this post, I will show you how to install and configure sar!
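For context, sar ships in the sysstat package on most distributions; a minimal sketch for Ubuntu/Debian (the package and service names assume the distribution defaults):
- sudo apt install sysstat
- sudo systemctl enable --now sysstat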
cat and tail to check your server logs.
Here I will show you how to check the logs of your Kubernetes pods, for both running and crashed pods, using the kubectl command.
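For context, the basic commands look like this (the pod name is a placeholder); the --previous flag pulls the logs from the last terminated run of a crashed container:
- kubectl logs my_pod
- kubectl logs my_pod --previous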
The sudo dnf install mariadb-server command does not install the latest stable MariaDB version.
Here's how you could install the latest MariaDB available on CentOS 8!
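For context, a minimal sketch of the usual approach adds the upstream MariaDB repository before installing (the 10.5 version and URL layout follow MariaDB's documented repository format, so verify them against the official docs):
/etc/yum.repos.d/mariadb.repo
[mariadb]
name = MariaDB
baseurl = https://yum.mariadb.org/10.5/centos8-amd64
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1
Then install as usual:
- sudo dnf install mariadb-server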
Here are the steps that you need to follow in order to get the newest changes from the original repository pulled into your fork!
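For context, a minimal sketch of the standard flow (the upstream URL and branch name are placeholders):
- git remote add upstream https://github.com/original_owner/original_repo.git
- git fetch upstream
- git checkout master
- git merge upstream/master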
I am supporting a few projects on GitHub. I'm at a point where I have hundreds of branches across all projects, and deleting the branches manually is not really an option.
Does anyone have a script on hand to delete all merged branches which do not have any new commits since a specific date?
Any help will be appreciated!
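For context, a rough sketch of what such a script might look like (entirely illustrative — it dry-runs with echo; uncomment the delete only after verifying the output):
#!/bin/bash
# List local branches merged into master, then report those whose
# last commit is older than the cutoff date.
cutoff="2020-01-01"
for branch in $(git branch --merged master | grep -vE '^\*|master'); do
    # %cs prints the committer date as YYYY-MM-DD (Git 2.21+)
    last_commit=$(git log -1 --format=%cs "$branch")
    if [[ "$last_commit" < "$cutoff" ]]; then
        echo "Would delete: $branch (last commit $last_commit)"
        # git branch -d "$branch"
    fi
done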
PS: AWS doesn't have this annoyance.
When I try to run the mysql command through the bash window, I get the error below. Most people say it is because of missing configuration in SQL ports. But when I restart the droplet, everything returns to normal, at least for a while until another random crash occurs.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
I've also checked the SQL logs. There is a recurring pattern: the error below is logged every time I have this problem. Can anybody give me some pointers about what it means?
2020-08-10 4:13:38 0 [Note] /usr/sbin/mysqld (initiated by: unknown): Normal shutdown
2020-08-10 4:13:38 0 [Note] Event Scheduler: Purging the queue. 0 events
2020-08-10 4:13:38 0 [Note] InnoDB: FTS optimize thread exiting.
2020-08-10 4:13:38 0 [Note] InnoDB: Starting shutdown...
2020-08-10 4:13:38 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2020-08-10 4:13:38 0 [Note] InnoDB: Buffer pool(s) dump completed at 200810 4:13:38
2020-08-10 4:13:40 0 [Note] InnoDB: Shutdown completed; log sequence number 231485352; transaction id 514172
2020-08-10 4:13:40 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2020-08-10 4:13:40 0 [Note] /usr/sbin/mysqld: Shutdown complete
What I want is a total of 125GB of storage accessible under root (/). How can I do that?
grub rescue>
When I try to load common or linux.mod (or any consequential modules, the ones that give you the kernel command), I get the error:
error: symbol grub_calloc not found
I could fix this on a VM of my own, or anything I could attach USB media to, but IDK what to do on this droplet…
I'm trying to connect my computer to my droplet via SSH but am not able to.
OpenSSH is set up, I’m able to connect via SFTP (FileZilla) just fine, I’ve added my public key to the authorized key file directly on the DO console, and am now trying to connect via terminal log in.
The sshd_config files from both the terminal and the DO console have PermitRootLogin and PasswordAuthentication set to yes.
I tried restarting sshd from the terminal and the DO console using "sudo systemctl restart sshd" and got the following errors:
terminal “System has not been booted with systemd as init system (PID 1). Can’t operate. Failed to connect to bus: Host is down”
debug “sudo sshd -vvf /etc/ssh/sshd_config unknown option – v OpenSSH_8.2p1 Ubuntu-4, OpenSSL 1.1.1f 31 Mar 2020 usage: sshd [-46DdeiqTt] [-C connection_spec] [-c host_cert_file] [-E log_file] [-f config_file] [-g login_grace_time] [-h host_key_file] [-o option] [-p port] [-u len]”
DO console “Job for ssh.service failed because the control process exited with error code.”
systemctl status ssh.service Active: failed (Result: exit-code) Process: (code = exited, status = 255)
Failed to start OpenBSD Secure Shell serv
(sidenote: anyone have advice on how to copy from the DO console?)
What is better for my droplet, Ubuntu 18.04.3 LTS or Ubuntu 20.04 LTS? I have read something about Ubuntu 18 vs 20 and I found 20 better, but I don't know why DigitalOcean selects 18 by default.
Thank you in advance
I tried:
Going to plugins and uploading, with no success since there is no such plugin available with that name (the only ones are CONTENT MANAGER, CONTENT TYPE BUILDER, EMAIL, MEDIA LIBRARY, ROLES & PERMISSIONS).
Going to the console and trying to change Strapi configs, but I can't access with sudo since all the passwords I try are incorrect (I used all of them, none work).
I am now clueless about what I am supposed to do.