A port is a communication endpoint. Within an operating system, a port is opened or closed to data packets for specific processes or network services.
Typically, ports identify a specific network service assigned to them. This can be changed by manually configuring the service to use a different port, but in general, the defaults can be used.
The first 1024 ports (port numbers 0 to 1023) are referred to as well-known port numbers and are reserved for the most commonly used services. These include SSH (port 22), HTTP (port 80), and HTTPS (port 443).
Port numbers above 1023 are referred to as ephemeral ports. Ports 1024 to 49151 are called the registered/user ports, and ports 49152 to 65535 are called the dynamic/private ports. In this tutorial, you will open an ephemeral port on Linux, since the most common services use the well-known ports.
To complete this tutorial, you will need a Linux server with a sudo-enabled non-root user.
Before opening a port on Linux, first check the list of all open ports and choose an ephemeral port that is not on that list.
Use the netstat command to list all open ports, including TCP and UDP, which are the most common protocols for packet transmission in the transport layer.
- netstat -lntu
The flags modify the command to:
- display listening sockets (-l)
- show port numbers instead of service names (-n)
- list TCP ports (-t)
- list UDP ports (-u)
This will print:
Output
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 ::1:5432 :::* LISTEN
tcp6 0 0 ::1:6379 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
udp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN
Note: If your distribution doesn’t have netstat, you can use the ss command to display open ports by checking for listening sockets.
Verify that you are receiving consistent output using the ss command to list listening sockets with an open port:
- ss -lntu
This will print:
Output
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:*
tcp LISTEN 0 128 127.0.0.1:5432 0.0.0.0:*
tcp LISTEN 0 128 127.0.0.1:27017 0.0.0.0:*
tcp LISTEN 0 128 127.0.0.1:6379 0.0.0.0:*
tcp LISTEN 0 128 127.0.0.53%lo:53 0.0.0.0:*
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
tcp LISTEN 0 128 [::1]:5432 0.0.0.0:*
tcp LISTEN 0 128 [::1]:6379 0.0.0.0:*
tcp LISTEN 0 128 [::]:22 0.0.0.0:*
This gives more or less the same open ports as netstat.
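If you also want to see which process owns each listening socket, you can add the -p flag to ss; elevated privileges may be required to see sockets owned by other users:
- sudo ss -lntp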
Now, open a closed port and make it listen for TCP connections.
For the purposes of this tutorial, you will be opening port 4000. However, if that port is already in use on your system, feel free to choose another closed port. Just make sure that it’s greater than 1023.
Ensure that port 4000 is not in use, using the netstat command:
- netstat -na | grep :4000
Or the ss command:
- ss -na | grep :4000
The output should be blank, verifying that the port is not currently in use, so that you can add the rules for it to the system firewall manually.
ufw-based Systems
Use ufw, the command line client for the Uncomplicated Firewall.
Your commands will resemble:
- sudo ufw allow 4000
Refer to How To Set Up a ufw Firewall on Ubuntu for more details.
firewalld-based Systems
Use firewall-cmd, the command line client for the firewalld daemon.
Your commands will resemble:
- firewall-cmd --add-port=4000/tcp
Refer to How To Set Up firewalld on CentOS for more details.
iptables-based Systems
Use iptables to change the system’s IPv4 packet filter rules.
- iptables -A INPUT -p tcp --dport 4000 -j ACCEPT
Refer to How To Set Up a Firewall Using iptables for more details.
Now that you have successfully opened a new TCP port, it is time to test it.
First, start netcat (nc) and listen (-l) on port (-p) 4000, while sending the output of ls to any connected client:
- ls | nc -l -p 4000
Now, after a client has opened a TCP connection on port 4000, they will receive the output of ls. Leave this session alone for now.
Open another terminal session on the same machine.
Since you opened a TCP port, use telnet to check for TCP connectivity. If the command doesn’t exist, install it using your package manager.
Input your server IP and the port number (4000 in this example) and run this command:
- telnet localhost 4000
This command tries to open a TCP connection on localhost on port 4000.
You’ll get output similar to this, indicating that a connection has been established with the listening program (nc):
Output
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
while.sh
The output of ls (while.sh, in this example) has also been sent to the client, indicating a successful TCP connection.
Use nmap to check whether the port (-p) is open:
- nmap localhost -p 4000
This command will check the open port:
Output
Starting Nmap 7.60 ( https://nmap.org ) at 2020-01-18 21:51 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00010s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
4000/tcp open remoteanything
Nmap done: 1 IP address (1 host up) scanned in 0.25 seconds
The port has been opened. You have successfully opened a new port on your Linux system.
Note: nmap only lists open ports that have an application currently listening on them. If you don’t use a listening application, such as netcat, nmap will report port 4000 as closed, since there isn’t any application listening on that port. Similarly, telnet won’t work either, since it also needs a listening application to connect to. This is why nc is such a useful tool: it simulates such environments in a single, simple command.
However, this change is only temporary: the firewall rules added this way are reset every time the system shuts down or reboots, so you must repeat similar steps to open the same port again after a restart.
ufw Firewall
ufw rules do not reset on reboot. ufw is integrated into the boot process, and its rules are saved in configuration files that are reapplied automatically at startup.
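After a reboot, you can confirm that the rule survived by listing the active rules again:
- sudo ufw status
The rule for port 4000 should still appear in the output.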
firewalld
If you want to add the port to the firewall’s persistent configuration and apply the changes immediately, you can use the --permanent and --reload flags:
sudo firewall-cmd --permanent --add-port=4000/tcp
sudo firewall-cmd --reload
Refer to How To Set Up firewalld for more details.
iptables
iptables rules are not persistent by default. You will need to save the configuration rules, typically with the iptables-persistent package.
Refer to How To Set Up a Firewall Using iptables for more details.
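As a sketch of what that can look like on a Debian-based system (package names differ on other distributions), the iptables-persistent package saves the current rules and reloads them at boot:
- sudo apt install iptables-persistent
- sudo netfilter-persistent save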
In this tutorial, you learned how to open a new port on Linux and set it up for incoming connections. You also used netstat, ss, telnet, nc, and nmap.
Continue your learning with How the Iptables Firewall Works, A Deep Dive into Iptables and Netfilter Architecture, Understanding Sockets, and How To Use Top, Netstat, Du, & Other Tools to Monitor Server Resources.
144 updates can be installed immediately. 2 of these updates are security updates. To see these additional updates run: apt list --upgradable
New release ‘22.04.3 LTS’ available. Run ‘do-release-upgrade’ to upgrade to it.
1 updates could not be installed automatically. For more details, see /var/log/unattended-upgrades/unattended-upgrades.log
*** System restart required ***
Could someone advise me on how to do this?
If I want to learn about writing code on Linux, is this a good starting point?
I’ve had a LAMP-Ubuntu droplet with DigitalOcean for a few years now, but so far it’s only been used for web hosting. Now I’d like to explore the possibilities.
uptime just gives the load average as load average: 0.0, 0.0, 0.0, whereas both my local dev environment (WSL) and my Docker container show the proper load average, which I’m later using in my Node app via os.loadavg().
I assume there are permission errors, but isn’t there any way to get system load info (i.e. CPU, memory, etc.) in a web app environment, or will I need to recreate this as a Droplet? In the tutorial they also run uptime with no issues.
Thanks
After web and SSH are up, I turn on the firewall (# systemctl start firewalld), but the disconnection happens again after a few hours, and I have to disable the firewall for it to work again. This just started one month ago. Please help with this.
I get to the point where I can’t install packages because I get an error stating the mirror can’t be found.
Earlier today I tried installing zsh and get the same response.
The droplet is running Ubuntu 22.04. Failed to fetch http://mirrors.digitalocean.com/ubuntu/pool/main/u/ubuntu-advantage-tools/ubuntu-advantage-tools_28.1~22.10_amd64.deb 404 Not Found [IP: 104.21.29.13 80]
Even when I run apt update I get: The repository ‘http://security.ubuntu.com/ubuntu kinetic-security Release’ no longer has a Release file.
How do I start getting packages available again?
I have a WordPress blog/small ecommerce site that I run on a DigitalOcean droplet. A friend of mine initially set it up a few years ago because I needed SSL and I wasn’t familiar with how to do that. Anyway, he got it set up and working just fine. This past week I wanted to add another site to the server. I created a new vhost file in sites-available, enabled it, reloaded/restarted Apache and tried to hit the URL. Whenever I go to the URL of the new site, it just redirects to the original site.
Another person and I spent a couple of hours last night trying to figure out what was going on and, on a whim, I just decided to combine the two vhost files and it started working. That was late last night and I went to bed. I started working on it again this morning and it’s again redirecting to the original site. The only odd thing I noticed about the original setup is that there are two vhosts and both are enabled. I’ll put the contents of each vhost below. Any help is greatly appreciated. Thank you.
Original Site: Vhost 1
<VirtualHost *:80>
ServerName originalsite.com
DocumentRoot /var/www/originalsite
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(.*)$ [NC]
RewriteRule ^(.*)$ http://www\.originalsite\.com$1 [R=permanent,L]
RewriteCond %{SERVER_NAME} =originalsite.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
<VirtualHost *:80>
ServerName www.originalsite.com
DocumentRoot /var/www/originalsite
<Directory /var/www/originalsite>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
</Directory>
CustomLog /http-logs/orig.log combined
ErrorLog /http-logs/orig.err.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn
RewriteEngine on
RewriteCond %{SERVER_NAME} =www.originalsite.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>
Original Site: Vhost 2
<IfModule mod_ssl.c>
<VirtualHost *:443>
ServerName originalsite.com
DocumentRoot /var/www/originalsite
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(.*)$ [NC]
RewriteRule ^(.*)$ http://www\.originalsite\.com$1 [R=permanent,L]
SSLCertificateFile /etc/letsencrypt/live/originalsite.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/originalsite.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>
<IfModule mod_ssl.c>
<VirtualHost *:443>
ServerName www.originalsite.com
DocumentRoot /var/www/originalsite
<Directory /var/www/originalsite>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
</Directory>
CustomLog /http-logs/orig.log combined
ErrorLog /http-logs/orig.err.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg. LogLevel warn
SSLCertificateFile /etc/letsencrypt/live/originalsite.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/originalsite.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>
Second Site (redirecting to original)
<VirtualHost *:80>
ServerName www.second.com
DocumentRoot /var/www/second
<Directory /var/www/second>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
</Directory>
CustomLog /http-logs/second.log combined
ErrorLog /http-logs/second.err.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg. LogLevel warn
</VirtualHost>
I have users set up, but if they use my root SSH login and password they get permission denied. Also, none of this is going to be a production site. I’m just trying out some things and sharing it with another user to help me get it right. What can I do to get them permission?
LABEL=cloudimg-rootfs / ext4 discard,errors=remount-ro 0 1
LABEL=UEFI /boot/efi vfat umask=0077 0 1
/dev/sda /home/newfolder ext4 defaults 0 0
After a reboot, /home/newfolder is replaced by
/dev/sda 50G 16K 47G 1% /mnt/volume_lon1_01
until I umount /dev/sda; then df -h shows this:
/dev/sda 50G 16K 47G 1% /home/newfolder
How do I get /dev/sda to mount on /home/newfolder on reboot?
After successfully renewing the cert, my Roundcube login page will no longer display, and I’m getting the following error when trying to access the website via Google Chrome:
This page isn’t working mail.cnic-n9portal.net redirected you too many times. Try clearing your cookies. ERR_TOO_MANY_REDIRECTS
Sorry as I’m rather new to this. Is something pointing in the wrong direction in a config file perhaps?
Please let me know if providing screenshots will help. Thank you in advance!
│ Error: Error creating droplet: POST https://api.digitalocean.com/v2/droplets: 422 (request "b17f42e6-326a-43be-a2a0-539c344ee746") You specified an invalid image for Droplet creation.
│
│ with digitalocean_droplet.njogubless,
│ on main.tf line 7, in resource "digitalocean_droplet" "njogubless":
│ 7: resource "digitalocean_droplet" "njogubless" {
However, up till now I was using a root user. For security purposes, I’m trying to disable root access: https://www.digitalocean.com/community/tutorials/how-to-disable-root-login-on-ubuntu-20-04
But when I ssh with a non-root user, I’m only able to view files with VS Code. If I want to edit/save my changes, I have to use sudo, like so: sudo code filename, and this throws me an error: sudo: code: command not found
I found a couple of threads discussing this issue, but no resolutions (see below). If anyone has any pointers, I’d appreciate it. I’d really hate having to go back to nano and Vim :)
https://github.com/microsoft/vscode-remote-release/issues/1688 https://github.com/microsoft/vscode/issues/48659
Step 1. Get device major:minor numbers:
$ stat /path/to/some/file/on/device
...
Device: fd00h/64768d
...
Step 2. Use the device numbers to find the device under /sys
:
$ ls -l /sys/dev/block/7:26
lrwxrwxrwx 1 root root 0 Apr 15 07:25 /sys/dev/block/7:26 -> ../../devices/virtual/block/loop26
where 7 is the major device number and 26 is the minor. From the above numbers, you can compute these using:
$ expr 64768 / 256 # major
253
$ expr 64768 % 256 # minor
0
Step 3. Find the rotational file
Now, if I look at the ../../devices/virtual/block/loop26 folder, I can search for and find a rotational file. But first I rebuild the path properly, since I want a full path rather than a relative path. As we can see, there are two .. segments, which means we go up two directories. The result is:
/sys/devices/virtual/block/loop26
Let’s look inside that folder:
$ ls /sys/devices/virtual/block/loop26
...
queue
...
We see a queue sub-directory, let’s look inside:
$ ls /sys/devices/virtual/block/loop26/queue
...
rotational
...
The rotational file exists; let’s print its content:
$ cat /sys/devices/virtual/block/loop26/queue/rotational
1
The loop device is considered to be a rotational (HDD) device!
IMPORTANT NOTE: In some cases, the queue sub-directory is one or two directories up, so the first ls command may not show it. Try again after removing one sub-directory from your path. Repeat until you find a queue sub-directory or the path is only 3 segments (/sys/devices/virtual in my example).
So now I have a way to check the rotational file. When I test against the SSD drives on my server at home, I get 0 as expected: the disk is not rotational.
When I check on DigitalOcean, I get a 1, as if the drive in my VPS were a rotational (HDD) device. It should return 0. Just in case, I tested in VirtualBox on my server, and I get the same effect: if I create a disk in my VirtualBox server, which is on an SSD, it is not recognized as an SSD within VirtualBox. So I am thinking that both systems are using the same driver to simulate hard drives within the VPS…
Are there plans to fix this issue at DigitalOcean? I would need to know whether drive A is SSD or HDD and drive B is SSD or HDD and I would prefer not to have to indicate that information manually since this is prone to mistakes over time.
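(As an aside, on systems with util-linux installed, lsblk exposes the same rotational flag per device, which can save walking /sys by hand:
$ lsblk -d -o NAME,ROTA
A ROTA value of 1 means the kernel reports the device as rotational, and 0 means non-rotational.)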
server {
listen 80;
server_name IP_ADDRESS;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/user/pyapps/test_project;
}
location /media/ {
root /home/user/pyapps/test_project;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
}
Gunicorn.service
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=user
Group=www-data
WorkingDirectory=/home/user/pyapps/test_project
ExecStart=/home/user/pyapps/venv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
test.wsgi:application
[Install]
WantedBy=multi-user.target
sudo systemctl status gunicorn.socket gives:
● gunicorn.socket - gunicorn socket
Loaded: loaded (/etc/systemd/system/gunicorn.socket; enabled; vendor preset: enabled)
Active: failed (Result: service-start-limit-hit) since Thu 2023-03-30 06:50:20 UTC; 18min ago
Triggers: ● gunicorn.service
Listen: /run/gunicorn.sock (Stream)
Mar 30 06:46:29 ... systemd[1]: Listening on gunicorn socket.
Mar 30 06:50:20 ... systemd[1]: gunicorn.socket
sudo systemctl status gunicorn.service gives:
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service_; disabled; vendor preset: enabled)_
Active: failed (Result: exit-code) since Thu 2023-03-30 06:50:20 UTC_; 21min ago_
TriggeredBy: ● gunicorn.socket
Process: 46876 ExecStart=/home/.../pyapps/venv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/run/gunicorn.sock ....wsgi:application (code=exited, status=203/EXEC)
Main PID: 46876 (code=exited, status=203/EXEC)
Mar 30 06:50:20 ... systemd[1]: Started gunicorn daemon.
Mar 30 06:50:20 ... systemd[46876]: gunicorn.service: Failed to execute command: No such file or directory
Mar 30 06:50:20 ... systemd[46876]: gunicorn.service: Failed at step EXEC spawning /home/.../pyapps/venv/bin/gunicorn: No such file or directory
Mar 30 06:50:20 ... systemd[1]: gunicorn.service: Main process exited, code=exited, status=203/EXEC
Mar 30 06:50:20 ... systemd[1]: gunicorn.service: Failed with result 'exit-code'.
Mar 30 06:50:20 ... systemd[1]: gunicorn.service: Start request repeated too quickly.
Mar 30 06:50:20 ... systemd[1]: gunicorn.service: Failed with result 'exit-code'.
Mar 30 06:50:20 ... systemd[1]: Failed to start gunicorn daemon.
How can I fix the error “gunicorn.service: Failed to execute command: No such file or directory”?
Everything was working, but one day everything shut down and I saw server errors:
1. When I tried to access the page: 502 Bad Gateway nginx/1.18.0 (Ubuntu)
2. When I entered the command nginx:
2.1 nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2.2 nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
I tried entering the server address directly, specifying different ports, as well as killing existing processes and then running nginx, but it didn’t help…
3. The command journalctl -xe gives the following errors:
3.1 [UFW BLOCK] IN OUT MAC SRC DST LEN TOS PREC TTL ID PROTO SPT DPT WINDOW RES SYN URGP - I only listed the property names (keys) without their values.
3.2 machine sshd[19370]: Invalid user NAME from IP port PORT; machine sshd[19370]: Received disconnect from IP port PORT:VAL: Bye Bye [preauth]; machine sshd[19370]: Disconnected from invalid user NAME IP port PORT [preauth]
It seems like someone is trying to connect to me??? I haven’t given my IP address or any other means of connecting to me to anyone; I use it only for development purposes.
I tried to create a completely new machine (server). I set everything up from scratch; it worked for 5 minutes and then stopped.
What could be the problem? Thanks for any help
If you are interested and you need any other information, let me know what I can provide you!
The below commands were issued:
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php8.1
But the system keeps responding with:
E: Unable to locate package php8.0
E: Couldn’t find any package by glob ‘php8.0’
E: Couldn’t find any package by regex ‘php8.0’
Below are my OS details (VPS):
NAME="Ubuntu"
VERSION="16.04.7 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.7 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
If I am required to upgrade my VPS, can someone guide me on how to do so? I am not used to development stuff, and I have WordPress on my VPS. So how can I upgrade (if required) without losing all that stuff?
Thank you in advance.
Regards, Jasveer
My client’s server has recently started having performance issues. The single website hosted on the server loads very slowly. It is built with WordPress using a custom theme. There are a minimal number of plugins installed, and none were added in the last few weeks when the performance issues began.
The server has 2 CPUs and 2GB of memory. The server stack is LAMP with Ubuntu 20.04 and PHP FPM. All of the installed software is kept up to date.
Both of the CPUs constantly use more than 80% while running the WordPress site.
The free -m command returns the following:
              total        used        free      shared  buff/cache   available
Mem:           1983        1734          64         132         184          12
Rebooting the server temporarily resolves these issues.
Redis Server is installed for object caching. No other caching or optimization plugins are currently installed on the server or in WordPress.
Any suggestions on how to address these issues?
I am a novice server administrator, but keen to learn.
Cheers,
The Rust programming language, also known as rust-lang, is a powerful general-purpose programming language. Rust is syntactically similar to C++ and is used for a wide range of software development projects, including browser components, game engines, and operating systems.
In this tutorial, you’ll install the latest version of Rust on Ubuntu 20.04, and then create, compile, and run a test program. The examples in this tutorial show the installation of Rust version 1.66.
Note: This tutorial also works for Ubuntu 22.04; however, you might be presented with interactive dialogs for various questions when you run apt upgrade. For example, you might be asked if you want to automatically restart services when required, or if you want to replace a configuration file that you’ve modified. The answers to these questions depend on your software and preferences and are outside the scope of this tutorial.
To complete this tutorial, you’ll need an Ubuntu 20.04 server with a sudo-enabled non-root user and a firewall. You can set this up by following our Initial Server Setup with Ubuntu 20.04 tutorial.
Installing Rust Using the rustup Tool
Although there are several different ways to install Rust on Linux, the recommended method is to use the rustup command line tool.
Run the following command to download the rustup tool and install the latest stable version of Rust:
- curl --proto '=https' --tlsv1.3 https://sh.rustup.rs -sSf | sh
You’re prompted to choose the type of installation:
Output
sammy@ubuntu:~$ curl --proto '=https' --tlsv1.3 https://sh.rustup.rs -sSf | sh
info: downloading installer
Welcome to Rust!
This will download and install the official compiler for the Rust
programming language, and its package manager, Cargo.
Rustup metadata and toolchains will be installed into the Rustup
home directory, located at:
/home/sammy/.rustup
This can be modified with the RUSTUP_HOME environment variable.
The Cargo home directory is located at:
/home/sammy/.cargo
This can be modified with the CARGO_HOME environment variable.
The cargo, rustc, rustup and other commands will be added to
Cargo's bin directory, located at:
/home/sammy/.cargo/bin
This path will then be added to your PATH environment variable by
modifying the profile files located at:
/home/sammy/.profile
/home/sammy/.bashrc
You can uninstall at any time with rustup self uninstall and
these changes will be reverted.
Current installation options:
default host triple: x86_64-unknown-linux-gnu
default toolchain: stable (default)
profile: default
modify PATH variable: yes
1) Proceed with installation (default)
2) Customize installation
3) Cancel installation
>
This tutorial uses the default option 1. However, if you’re familiar with the rustup installer and want to customize your installation, you can choose option 2. Type your selection and press Enter.
The output for option 1 is:
Output
info: profile set to 'default'
info: default host triple is x86_64-unknown-linux-gnu
info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu'
info: latest update on 2023-01-10, rust version 1.66.1 (90743e729 2023-01-10)
info: downloading component 'cargo'
info: downloading component 'clippy'
info: downloading component 'rust-docs'
info: downloading component 'rust-std'
info: downloading component 'rustc'
67.4 MiB / 67.4 MiB (100 %) 40.9 MiB/s in 1s ETA: 0s
info: downloading component 'rustfmt'
info: installing component 'cargo'
6.6 MiB / 6.6 MiB (100 %) 5.5 MiB/s in 1s ETA: 0s
info: installing component 'clippy'
info: installing component 'rust-docs'
19.1 MiB / 19.1 MiB (100 %) 2.4 MiB/s in 7s ETA: 0s
info: installing component 'rust-std'
30.0 MiB / 30.0 MiB (100 %) 5.6 MiB/s in 5s ETA: 0s
info: installing component 'rustc'
67.4 MiB / 67.4 MiB (100 %) 5.9 MiB/s in 11s ETA: 0s
info: installing component 'rustfmt'
info: default toolchain set to 'stable-x86_64-unknown-linux-gnu'
stable-x86_64-unknown-linux-gnu installed - rustc 1.66.1 (90743e729 2023-01-10)
Rust is installed now. Great!
To get started you may need to restart your current shell.
This would reload your PATH environment variable to include
Cargo's bin directory ($HOME/.cargo/bin).
To configure your current shell, run:
source "$HOME/.cargo/env"
sammy@ubuntu:~$
Next, run the following command to add the Rust toolchain directory to the PATH environment variable:
- source $HOME/.cargo/env
Verify the Rust installation by requesting the version:
- rustc --version
The rustc --version command returns the version of the Rust programming language installed on your system. For example:
Output
sammy@ubuntu:~$ rustc --version
rustc 1.66.1 (90743e729 2023-01-10)
sammy@ubuntu:~$
Rust requires a linker program to join compiled outputs into one file. The GNU Compiler Collection (gcc) in the build-essential package includes a linker. If you don’t install gcc, then you might get the following error when you try to compile:
error: linker `cc` not found
|
= note: No such file or directory (os error 2)
error: aborting due to previous error
You’ll use apt to install the build-essential package.
First, update the Apt package index:
- sudo apt update
Enter your password to continue if prompted. The apt update command outputs a list of packages that can be upgraded. For example:
Output
sammy@ubuntu:~$ sudo apt update
[sudo] password for sammy:
Hit:1 http://mirrors.digitalocean.com/ubuntu focal InRelease
Get:2 http://mirrors.digitalocean.com/ubuntu focal-updates InRelease [114 kB]
Hit:3 https://repos-droplet.digitalocean.com/apt/droplet-agent main InRelease
Get:4 http://mirrors.digitalocean.com/ubuntu focal-backports InRelease [108 kB]
Get:5 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:6 http://mirrors.digitalocean.com/ubuntu focal-updates/main amd64 Packages [2336 kB]
Get:7 http://mirrors.digitalocean.com/ubuntu focal-updates/main Translation-en [403 kB]
Get:8 http://mirrors.digitalocean.com/ubuntu focal-updates/main amd64 c-n-f Metadata [16.2 kB]
Get:9 http://mirrors.digitalocean.com/ubuntu focal-updates/restricted amd64 Packages [1560 kB]
Get:10 http://mirrors.digitalocean.com/ubuntu focal-updates/restricted Translation-en [220 kB]
Get:11 http://mirrors.digitalocean.com/ubuntu focal-updates/restricted amd64 c-n-f Metadata [620 B]
Get:12 http://mirrors.digitalocean.com/ubuntu focal-updates/universe amd64 Packages [1017 kB]
Get:13 http://mirrors.digitalocean.com/ubuntu focal-updates/universe Translation-en [236 kB]
Get:14 http://mirrors.digitalocean.com/ubuntu focal-updates/universe amd64 c-n-f Metadata [23.2 kB]
Get:15 http://mirrors.digitalocean.com/ubuntu focal-updates/multiverse amd64 Packages [25.2 kB]
Get:16 http://mirrors.digitalocean.com/ubuntu focal-updates/multiverse Translation-en [7408 B]
Get:17 http://mirrors.digitalocean.com/ubuntu focal-updates/multiverse amd64 c-n-f Metadata [604 B]
Get:18 http://mirrors.digitalocean.com/ubuntu focal-backports/main amd64 Packages [45.7 kB]
Get:19 http://mirrors.digitalocean.com/ubuntu focal-backports/main Translation-en [16.3 kB]
Get:20 http://mirrors.digitalocean.com/ubuntu focal-backports/main amd64 c-n-f Metadata [1420 B]
Get:21 http://mirrors.digitalocean.com/ubuntu focal-backports/universe amd64 Packages [24.9 kB]
Get:22 http://mirrors.digitalocean.com/ubuntu focal-backports/universe Translation-en [16.3 kB]
Get:23 http://mirrors.digitalocean.com/ubuntu focal-backports/universe amd64 c-n-f Metadata [880 B]
Get:24 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [1960 kB]
Get:25 http://security.ubuntu.com/ubuntu focal-security/main Translation-en [320 kB]
Get:26 http://security.ubuntu.com/ubuntu focal-security/main amd64 c-n-f Metadata [11.7 kB]
Get:27 http://security.ubuntu.com/ubuntu focal-security/restricted amd64 Packages [1463 kB]
Get:28 http://security.ubuntu.com/ubuntu focal-security/restricted Translation-en [207 kB]
Get:29 http://security.ubuntu.com/ubuntu focal-security/restricted amd64 c-n-f Metadata [624 B]
Get:30 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [786 kB]
Get:31 http://security.ubuntu.com/ubuntu focal-security/universe Translation-en [152 kB]
Get:32 http://security.ubuntu.com/ubuntu focal-security/universe amd64 c-n-f Metadata [16.9 kB]
Get:33 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [22.2 kB]
Get:34 http://security.ubuntu.com/ubuntu focal-security/multiverse Translation-en [5464 B]
Get:35 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 c-n-f Metadata [516 B]
Fetched 11.2 MB in 5s (2131 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
103 packages can be upgraded. Run 'apt list --upgradable' to see them.
sammy@ubuntu:~$
Next, upgrade any out-of-date packages:
- sudo apt upgrade
Enter Y if prompted to continue the upgrades.
When the upgrades are complete, install the build-essential package:
- sudo apt install build-essential
Enter Y when prompted to continue the installation. The installation is complete when your terminal returns to the command prompt with no error messages.
In this step, you’ll create a test program to try out Rust and verify that it’s working properly.
Start by creating some directories to store the test script:
- mkdir ~/rustprojects
- cd ~/rustprojects
- mkdir testdir
- cd testdir
Use nano, or your favorite text editor, to create a file in testdir to store your Rust code:
- nano test.rs
You need to use the .rs extension for all your Rust programs.
Copy the following code into test.rs and save the file:
fn main() {
println!("Congratulations! Your Rust program works.");
}
Compile the code using the rustc command:
- rustc test.rs
Run the resulting executable:
- ./test
The program prints to the terminal:
Output
sammy@ubuntu:~/rustprojects/testdir$ ./test
Congratulations! Your Rust program works.
sammy@ubuntu:~/rustprojects/testdir$
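rustup also installed Cargo, Rust’s build tool and package manager, alongside rustc. Though not required for this tutorial, a minimal sketch of creating and running a project with Cargo (the project name hello_cargo is just an example) looks like this:
- cargo new hello_cargo
- cd hello_cargo
- cargo run
cargo new scaffolds a project with a Cargo.toml manifest and a src/main.rs file, and cargo run compiles and executes it in one step.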
It’s a good idea to update your installation of Rust on Ubuntu regularly.
Enter the following command to update Rust:
- rustup update
You can also remove Rust from your system, along with its associated repositories.
Enter the following command to uninstall Rust:
- rustup self uninstall
You’re prompted to enter Y to continue the uninstall process:
Output
sammy@ubuntu:~/rustprojects/testdir$ rustup self uninstall
Thanks for hacking in Rust!
This will uninstall all Rust toolchains and data, and remove
$HOME/.cargo/bin from your PATH environment variable.
Continue? (y/N)
Enter Y to continue:
Output
Continue? (y/N) y
info: removing rustup home
info: removing cargo home
info: removing rustup binaries
info: rustup is uninstalled
sammy@ubuntu:~/rustprojects/testdir$
Rust is removed from your system.
Now that you’ve installed and tested out Rust on Ubuntu, continue your learning with more Ubuntu tutorials.
Status: active
Logging: on (low)
Default: allow (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp ALLOW IN Anywhere
80/tcp (Apache) ALLOW IN Anywhere
80 ALLOW IN Anywhere
443 ALLOW IN Anywhere
22/tcp (v6) ALLOW IN Anywhere (v6)
80/tcp (Apache (v6)) ALLOW IN Anywhere (v6)
80 (v6) ALLOW IN Anywhere (v6)
443 (v6) ALLOW IN Anywhere (v6)
If I do ufw insert 1 deny from IP, it does not work; the IP is still allowed. I assumed it is because default incoming is allow! But should it be? Shouldn’t this rule override the default rule?
And whenever I run ufw default deny incoming, which is the default configuration, I cannot access my server anymore, regardless of all the custom rules I added.
I ran ufw reset and also iptables -F, and did the following:
ufw allow apache
ufw allow ssh
And I could not connect unless I changed the ufw default incoming policy to allow.
Note: I think maybe this is because I ran iptables -F. I had to, because I added some custom rules to iptables directly (not through ufw) and wanted to start over.
Please advise.
I’m trying to connect to a droplet with SSH using the DigitalOcean CLI, and I’m following this tutorial now: How to Connect to Droplets with SSH :: DigitalOcean Documentation
In step 4, after I run this command:
doctl compute ssh <My Droplet Name>
I got this warning
Warning: Identity file /root/.ssh/id_rsa not accessible: No such file or directory. root@<My Server IP address>: Permission denied (publickey).
However, I can access my public key using cat /home/xichen/.ssh/id_rsa.pub, and I created and uploaded my SSH key successfully on DigitalOcean.
Can anybody help me with this, please?
Kind regards, Xi
I’m trying to upload my SSH key to the DigitalOcean server using the command
doctl compute ssh-key create <key-name> [flags]
I have a question about this [flags]; apparently, I need to pass my public SSH key into this [flags], but what’s the format of it? I mean, do I just input
doctl compute ssh-key create my-key-name <SSH KEY content>
or should I input
doctl compute ssh-key create my-key-name --public-key <SSH KEY CONTENT>
and for the SSH KEY content, mine is something like this:
ssh-rsa <a long string> username@hostname
My question is: which part is the SSH key that I should upload, the <a long string> part or the whole string? I ask because I noticed that there are spaces between these three parts.
Really looking for your help! Thank you so much!
Regards, Xi
A shell is a command-line interpreter that allows the user to interact with the system. It is responsible for taking input from the user and displaying the output.
Shell scripts are a series of commands written in order of execution. These scripts can contain functions, loops, commands, and variables. Scripts are useful for simplifying a complex series of commands and repetitive tasks.
In this article, you will learn how to create and execute shell scripts for the command line in Linux.
To complete this tutorial, you will need:
Familiarity with basic commands such as chmod, mkdir, and cd.
A shell script is conventionally saved with the extension .sh.
The file needs to begin with the shebang line (#!) to let the Linux system know which interpreter to use for the shell script.
For environments that support bash, use:
#!/bin/bash
For environments that support sh, use:
#!/bin/sh
This tutorial assumes that your environment supports bash.
Shell scripts can also have comments to increase readability. A good script always contains comments that help a reader understand exactly what the script is doing and the reasoning behind a design choice.
You can create a shell script using the vi editor, the cat command, or a text editor.
For this tutorial, you will learn about creating a shell script with vi:
- vi basic_script.sh
This starts the vi editor and creates a basic_script.sh file.
Then, press i on the keyboard to start INSERT MODE. Add the following lines:
#!/bin/bash
whoami
date
This script runs the commands whoami and date. whoami displays the active username. date displays the current system timestamp.
To save and exit the vi editor:
- Press ESC
- Type : (colon character)
- Type wq
- Press ENTER
Finally, you can run the script with the following command:
- bash basic_script.sh
You may get output that resembles the following:
Output
root
Fri Jun 19 16:59:48 UTC 2020
The first line of output corresponds to the whoami command. The second line of output corresponds to the date command.
You can also run a script without specifying bash:
- ./basic_script.sh
Running the file this way might require the user to give permission first; running it with bash doesn’t require this permission. Without execute permission, you’ll get:
Output
-bash: ./basic_script.sh: Permission denied
The command bash filename only requires read permission on the file, whereas the command ./filename runs the file as an executable and requires execute permission.
To execute the script, you will need to update the permissions.
- chmod +x basic_script.sh
This command uses chmod to give x (execute) permission to the current user.
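You can verify that the execute bit was added by listing the file’s permissions with ls -l; the x entries should now appear in the permission string:
- ls -l basic_script.sh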
Scripts can include user-defined variables. In fact, as scripts get larger in size, it is essential to have variables that are clearly defined and that have self-descriptive names.
Add the following lines to the script:
#!/bin/bash
# This is a comment
# defining a variable
GREETINGS="Hello! How are you"
echo $GREETINGS
GREETINGS is the variable being defined and later accessed using the $ (dollar sign) symbol. There must be no spaces on the line where a variable is assigned a value.
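As a quick illustration of why this matters, bash treats a spaced assignment as a command invocation rather than an assignment:
GREETINGS = "Hello! How are you"   # error: bash looks for a command named GREETINGS
GREETINGS="Hello! How are you"     # correct: assigns the string to the variable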
Run the script:
- bash basic_script.sh
This prints out the value assigned to the variable:
Output
Hello! How are you
When the script is run, GREETINGS is defined and accessed.
Shell scripts can be made interactive with the ability to accept input from the command line. You can use the read command to store the command line input in a variable.
Add the following lines to the script:
#!/bin/bash
# This is a comment
# defining a variable
echo "What is your name?"
# reading input
read NAME
# defining a variable
GREETINGS="Hello! How are you"
echo $NAME $GREETINGS
A variable NAME has been used to accept input from the command line. This script waits for the user to provide input for NAME. Then it prints NAME and GREETINGS.
Output
What is your name?
Sammy
Sammy Hello! How are you
In this example, the user has provided the prompt with the name: Sammy.
Users can define their own functions in a script. These functions can take multiple arguments.
Add the following lines to the script:
#!/bin/bash
#This is a comment
# defining a variable
echo "What is the name of the directory you want to create?"
# reading input
read NAME
echo "Creating $NAME ..."
mkcd ()
{
mkdir "$NAME"
cd "$NAME"
}
mkcd
echo "You are now in $NAME"
This script asks the user for a directory name. Then, it uses mkdir to create the directory and cd to change into it.
Output
What is the name of the directory you want to create?
test_dir
Creating test_dir ...
You are now in test_dir
In this example, the user has provided the prompt with the input: test_dir. Next, the script creates a new directory with that name. Finally, the script changes the user’s current working directory to test_dir.
In this article, you learned how to create and execute shell scripts for the command line in Linux.
Consider some repetitive or time-consuming tasks that you frequently perform that could benefit from a script.
Continue your learning with if-else, arrays, and arguments in the command line.
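As a small taste of those topics, here is a minimal sketch combining a command line argument with an if-else check (the script name greet.sh is just an example):
#!/bin/bash
# Use the first command-line argument as the name, or fall back to a default
if [ -z "$1" ]; then
  NAME="friend"
else
  NAME="$1"
fi
echo "Hello, $NAME!"
Running bash greet.sh Sammy prints Hello, Sammy!, while bash greet.sh with no argument prints Hello, friend!.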
What’s the best way to manage Moodle sites hosted on a DO server? We recently took over a client with two Moodle sites. We have never managed this type of site before and moved the two sites from a shared hosting environment (with cPanel) to a new DO server. We’ve since worked with a freelance sysdev who claims that it would have been better to keep the sites on the old, shared server, as cPanel is better when it comes to things such as updating the Moodle app. Is he correct, and what can we do now, seeing as we don’t really want to set up a new shared hosting server just for these two Moodle sites? The sysdev has told me that he spent up to 4 hours doing backups, updating the two Moodle sites (from 4.0.2 to 4.1) and also setting cron jobs, where he said it would only have taken 15 minutes with cPanel. Is he right about cPanel being better/easier for Moodle, and what can we do (within DO) to make future updates and service easier and faster?
Thanks!
I want to connect and sync all my files from Dropbox to my Droplet. I really don’t know how to do that, because I’m new to Linux and Ubuntu. I have used some of the articles and managed to install Dropbox, but I’m not sure about this. Can you please help me out? I want to show my Dropbox app’s files at the Droplet’s IP address.
Thank you
sudo ufw allow 6333
sudo ufw allow 6333/tcp
Can anyone tell me what I am doing wrong?
I then followed this guide to add a GUI - https://www.digitalocean.com/community/questions/how-to-install-graphical-interface
All the steps worked, until the step to establish a secure connection (Step 3 — Connecting to the VNC Desktop Securely).
I ran the code that was provided (changing the user parameter accordingly) -
> ssh -L 59000:localhost:5901 -C -N -l sammy your_server_ip
But I got the following error - Permission denied (publickey).
I do have SSH keys as the authentication method on root, and my non root user can login successfully. So I assume the key was copied successfully to that user.
I was required to set a password for the non root user, and I am not sure if that could be the cause of the error.
How can I fix this error and establish the SSH tunnel?
My systemd unit:
[Unit]
Description=goapp
[Service]
Type=simple
Restart=always
RestartSec=5s
ExecStart=/home/.../goapp/main
[Install]
WantedBy=multi-user.target
I got this error
goapp.service - rediate
Loaded: loaded (/lib/systemd/system/goapp.service; disabled;
vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Thu
2022-09-29 08:14:10 UTC; 66ms ago
Process: 21628
ExecStart=/home/.../go/goapp/main (code=exited, status=2)
Main PID: 21628 (code=exited, status=2)
CPU: 9ms
Main domain: example.com
example.com => droplet_1
foo.example.com => droplet_2
bar.example.com => droplet_3
Any suggestion/idea would be helpful. Thanks in advance.
Do you love the dock on macOS but are also very loyal to Linux? Fret not, because Docky solves this for you. This is the best Linux dock for giving you a macOS-like feel while working on Linux.
Who should use Docky? - People who want an out-of-the-box, easy-to-use experience (and also those who love the macOS Dock)
Website: Docky Official Download
Don’t underestimate the Tint2 dock. This is the best Linux dock for customization and can be configured to look and feel exactly the way you want. You’re only limited by your imagination and command-line skills when configuring this Linux dock for your system.
You can download Tint2 directly with the use of your package manager. But if you’re adventurous and would like to compile from source, you can use the link below.
Who should use tint2? - Customization nerds who love the look of beautiful text and font-icons (font-awesome anyone?) with the ability to customize the entire dock from scratch.
Website: Tint2 Official Gitlab
If you want something that has all the bells and whistles and configuration options, while still being easy to use for a beginner, this is the dock you need. All of these options are configurable with the use of the dock preferences.
The default set up is good too, but with the features and configuration options provided by this dock, you might not want to look at anything else.
Who should use Cairo Dock? - People who want a highly configurable dock, without having to tinker with code or command line.
Website: Cairo Dock Official Website
So this dock is not limited to XFCE users but it certainly is the most popular among them. One thing to note is that if all you want is a default dock with icons that launch applications, the XFCE Panels are more than enough for the job.
The DockBarX offers additional features that the XFCE Panels don’t; like identifying which app windows are open with some icon-hinting or showing app-specific menus wherever available.
Who should use DockBarX? - If you’re an XFCE fan, but aren’t satisfied with the XFCE Panel’s functionalities, DockBarX is the dock you should choose.
Website: DockBarX Github
Based on the plasma frameworks, Latte dock offers an intuitive experience for your open tasks and widgets. The content is animated with the use of parabolic zoom and the dock also features auto-hide to save screen space.
This dock was made with the K Desktop Environment in mind so most of the features work right out of the box on KDE. As it is with any Linux package, this dock can run on any other desktop environment, provided you configure it and provide it with the dependencies to run.
Who should use Latte Dock? People who are KDE users and love the beauty and features that the desktop environment offers and would love their dock to have similar features out of the box
Website: Latte Dock KDE Github
Did we say easiest? Yes, we did. This is actually the easiest Linux dock to set up and get started with. Not everyone is interested in configuring a lot of options just to get a dock show up correctly on their systems.
Though most of the Linux docks that we talked about above also work out of the box, the sheer number of features and options makes them feel really advanced for someone who just wants something that works.
With just the right amount of configuration options, this dock makes it easy for its users to build beautiful Linux docks.
Who should use Plank Dock? People who want easy-to-use, and out-of-the-box good looking dock which simply fits right into the current system theme.
Website: Plank Dock Launchpad
Just because it’s the oldest doesn’t make it ugly or outdated. In fact, it’s a very feature-rich, stable, and tried-and-tested Linux dock. If you are someone who loves stability over everything else, then you won’t go wrong with the AWN Linux dock.
The years it took to mature have made the code robust, and you can use all of its features without breaking the dock.
Who should use Avant Window Navigator? People who love tried and tested technology.
Website: AWN Launchpad
i3status is like an empty slate. You decide exactly what goes on it from start to finish: which side of the screen it sticks to, the size of the bar, the colors it uses, the fonts that are used, everything.
Initially built for the i3 window manager, this status bar was liked and accepted by many users of other window managers. In a typical configuration, there’s a section that you can define for each element that’s visible on the bar.
Who should use i3status? - Command-line fans who want to display nerdy outputs of different commands on their docks.
Website: i3status Official Page
This is yet another bar that allows the configuration of each of the elements while being really lightweight on the system resources.
Who should use lemonbar? People who want to keep their system lightweight and functional at the same time.
Website: Lemonbar Github
This KDE-specific dock can be extended with the use of KDE Plasma widgets. This Linux dock has a lot of UI animations and transitions with which you can make it look and feel attractive.
Not everyone would love this dock as it’s really heavy on the system due to the number of dependencies that it has. But for those who are not concerned with the resource usage, and just want a lot of animations, this is your go-to Linux dock.
Who should use KSmoothDock? For people who want to use and try a lot of animations and themes to make their dock look beautiful. NOTE: This dock depends on a lot of KDE packages and if you try to install this on a different desktop environment, you’ll end up installing all those KDE dependencies too.
Website: KSmoothDock Github
Come to think of it, a dock is a really beautiful addition to your existing Linux desktop setup and with the right dock, you can make your entire setup stand out.
We’ve tried to make sure we add the most popular Linux docks to this article. But let us know which Linux dock you use and like in the comments!
People choose Linux for three main reasons, and the low-resource usage part is what we’re focusing on today.
In a previous article, we talked about some of the best Linux distros overall. So if resource usage is not a concern for you, feel free to look through that article!
Today, I’ve chosen for you the 10 best Linux distributions that offer superb stability, easy to use interfaces, and some even offering an out-of-the-box beautiful look. All of this while being extremely lightweight!
Linux users love the simplicity that Linux brings along with it. That’s one of the major reasons why most Linux-“fans” don’t go for other operating systems that restrict control.
So even if you are on a powerful computer system, using a lightweight Linux distribution gives you the empty space to fill in with what matters.
Compared to a lot of other operating systems that take up 40-60% of system resources, you are left with far more of your system’s resources for the tasks that are important to you.
As beginner Linux users turn into power users with continued usage, they tend to understand that most of the utilities offered by the beginner-friendly distributions are unnecessary for daily use.
That’s when distributions focused on power users give them the flexibility to pick and install only what they need.
The ability to have only what you need in the system, with no unnecessary resource usage, is what makes a system lightweight.
In a typical lightweight Linux system, you can expect the memory, the disk space, and the CPU time to be used only for what’s truly necessary to you, not what’s pre-added.
Apart from that, the distros that do come pre-built, are built with lightweight desktop environments like XFCE, LXDE, Openbox, and the likes.
Now let’s get right into the list of the best lightweight Linux distros here.
Crunchbang++ is a minimal Debian-based distro with the Openbox window manager. Its configuration can be set by the user during installation.
After installation, a basic system can be set up in a few minutes. Most of the essential packages for the Openbox desktop environment are also included. Other required packages can be installed on your system with the apt command.
Crunchbang Plus Plus is perfect for anyone who wants a close-to-barebones experience with GUI Debian Linux, without manually setting things up.
Manjaro offers cutting-edge software packages based on the Arch User Repository (AUR). Apart from using the Arch repositories, the Manjaro community also maintains its own repository for the latest software packages.
So you not only get great support for the top-of-the-line software, you also get enhanced stability because of the added layer of repository checks done by the community.
Though this distribution is available on multiple different desktop environments, the XFCE variant is their main one.
Manjaro Linux is perfect for those looking for an extremely flexible, fast, dependable, and cutting-edge Linux distribution. With its base support from Arch Linux, Manjaro brings new life to your low-end hardware without the added complexities of setting up a minimal Linux desktop
Sparky Linux is a very lightweight Debian-based Linux distribution. It offers a variety of pre-built desktop environments for ease of use. The default desktop environments are LXQt, MATE, and XFCE, but users can install other desktops via ‘Sparky APTus’.
Sparky is based on the stable and testing branches of Debian. It also offers a collection of scripts to handle the day-to-day system administration.
There are multiple versions that Sparky Linux offers to serve different purposes. These are:
Sparky is perfect for anyone who wants a fast, lightweight and fully customizable OS for a specific purpose. The Sparky variations are built to serve the needs of different categories of users.
Linux Mint rose to popularity when Ubuntu ditched GNOME for Unity. The Mint community offered MATE, a smoother, lightweight continuation of the GNOME 2 desktop environment, and that caught on really quickly within the community.
Mint is built to be extremely user-friendly: so much so that a user switching from Windows can start using Linux Mint without a learning curve.
The distro ships without bloatware and with an easy-to-use package manager.
Linux Mint is perfect for those who are migrating from a Windows or Mac OS and want something really simple and stable to use. It’s also great for anyone who just doesn’t want to touch the terminal and prefers the GUI!
Zorin OS Lite is a perfect example of how beautiful, the XFCE desktop environment can look!
Zorin OS was already designed to look extremely beautiful, and the Lite version makes the same snappy yet beautiful experience possible on ancient hardware.
Unlike many other Linux distributions, Zorin isn’t built for servers. Instead, it is built with desktop users in mind, and every aspect of its user interface reflects that.
Zorin OS Lite is the best lightweight Linux distribution if you want a fast and stable system without compromising on the looks.
Bodhi Linux is the most lightweight Ubuntu-based Linux distribution on the list if you want an out-of-the-box one. Its desktop environment is called “Moksha”. The Moksha desktop environment offers an extremely lightweight and fast UI with idle RAM usage of just 150-200 MB.
Since it’s based on Ubuntu, you will rarely have a shortage of pre-built binaries to install on your system.
The philosophy behind creating Bodhi was to offer users with a barebones Linux distribution that users can populate with their favorite software packages as required.
MX Linux is a Debian-based Linux distribution. Out of all the distros on this list, MX has the least good-looking setup by default. This distribution is aimed at power users, as it offers so much more control over what you can do with your OS.
MX is a pure performance-driven system which comes with the Debian stability. There is a set of applications here that fall under the MX tools category.
These tools streamline some advanced actions that are otherwise not easy to do. You get simple graphical interfaces here to sort out things like fixing GPG key issues, among other things.
If you don’t like playing with the terminal, MX provides very fine control over package management. You get to choose different versions of the same package from Debian, stable or testing repositories.
Talking about the performance of the OS on old hardware, MX Linux is a distribution that does not offer animations or transitions in the UI. This lowers resource usage, so even if you want to revive an old laptop or desktop with as little as 512 MB of RAM, this distribution can work really well.
The overall balance of being lightweight with Debian’s stability, superior control over your system, and delivering a performance-oriented experience is perfect for power-users.
Linux Lite is based on Ubuntu LTS, but built to be lightweight yet easy to use for Windows users. It is a ‘gateway operating system’.
This means, if you are a first-time user, you are highly likely to enjoy and be able to easily transition from Windows to Linux with this distro. It also does not collect the user data that Ubuntu does in the background.
Linux Lite is perfect for people who are new to Linux and want a lightweight environment that is also fully functional.
Peppermint is a Lubuntu-based distro built on a long-term support (LTS) codebase. As well as being customizable to your heart's content, the distribution will be ready to go right from the installation.
Peppermint OS comes pre-installed with a few native applications and a traditional desktop interface. What originally made Peppermint unique is its approach to creating a hybrid desktop that integrates both cloud and local applications.
This is made possible with their use of ICE applications. It's quite similar to Chrome apps or the "add site to desktop" option on Android. The added benefit is that each application is an isolated browser, completely independent of other apps or browsers.
Peppermint OS is perfect for anyone who has tried a lot of lightweight distros before, but finds something lacking.
Xubuntu was developed by Ubuntu lovers who preferred its core and repository support but did not want the heavy UI that accompanies the distro by default.
So, Xubuntu carries all the features of Ubuntu while shedding the heavy UI elements.
Xubuntu is an elegant and easy to use distribution. It comes with Xfce, which is a stable, light, and configurable desktop environment.
Xubuntu is perfect for those who want the most out of their desktops, laptops, and netbooks with a modern look and enough features for efficient, daily usage. It works really well on older hardware too.
Alright, now this list may seem overwhelming if you’ve never used Linux before. And that’s perfectly fine.
When you are starting out, we suggest going with the best-looking distribution that you find. If you have more powerful hardware, you have the freedom to use user-friendly distributions like Ubuntu.
But since you're planning to revive a low-powered device, the choices will be limited.
In this case, as a first-timer, go with:
If neither of these catches your fancy, feel free to look through the other eight on the list. I'm sure you'll find something that you like. And if nothing else, I really enjoy using Peppermint; you might end up liking it too!
We hope you enjoyed this quick list of the most lightweight Linux distros out there.
These were the top 10 that we loved and enjoyed using on our older hardware devices. Which one is your favorite? Is it something from this list or something else altogether?
There are four standard repositories in Ubuntu: main, restricted, universe, and multiverse.
First, we need to back up the corrupted source file by moving it to another location. Open a terminal by pressing Ctrl+Alt+T and enter the following command to change the directory to where the source file is located:
cd /etc/apt
Now, move the corrupted file to another location:
sudo mv sources.list <location>
For example:
sudo mv sources.list /home/sid/Desktop
Create a new file using the touch command:
sudo touch /etc/apt/sources.list
Now, open the Software & Updates application using the search bar or the app drawer. Change the server to the main server and enable the restricted repository. You can also enable the universe and multiverse repositories if needed.
To enable updates, go to the Updates tab, select 'All updates' or at least 'Security updates' in the 'Subscribed to' drop-down menu, and click Close.
Click on Reload. The software repositories will be updated.
To verify, open a terminal by pressing Ctrl+Alt+T. Open the /etc/apt/sources.list file by running the following command:
sudo vi /etc/apt/sources.list
If there are entries without the # prefix, as shown below, the repositories have been restored.
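For reference, restored entries look something like the following lines; the release codename and mirror URL depend on your Ubuntu version, and these sample lines assume Ubuntu 20.04 'focal':
deb http://archive.ubuntu.com/ubuntu focal main restricted
deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu focal-security main restricted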
Finally, update the repositories by executing the following command:
sudo apt update
So, we learned how to restore the default repositories in Ubuntu. Follow Journaldev.com for even more tutorials on Linux, Python, and more!
Let us quickly delve into the process of reducing image file size.
Before we move on to the application of this command, let us make sure it is present in the system.
The convert
command comes under the ImageMagick
package. Debian/Ubuntu users can install ImageMagick
by running:
sudo apt install imagemagick
Once the package is installed we can run man convert
to take a look at the variety of operations supported by the command.
The simplest way of reducing the size of the image is by degrading the quality of the image.
convert <INPUT_FILE> -quality 10 <OUTPUT_FILE>
There is a significant reduction in the quality of the image using the convert
command. In case we want to examine the new file size, we can do so by:
du -h jd_logo*
The du command reports the amount of disk space used by files in Linux. In the above command, we display the amount of space occupied by all the versions of "jd_logo".
The file size of an image can be reduced if we reduce the number of pixels it holds. For this purpose, we need to provide the new width and height.
convert <INPUT_FILE> -resize 200x200 <OUTPUT_FILE>
The reduction in the quality of the reduced image can be observed when we stretch its dimensions.
The aspect ratio of the image is preserved even if the dimensions provided in the command violate the original aspect ratio. The idea behind the conversion is that the reduced image must fit inside the specified dimensions.
In order to reduce the image to exact dimensions, ignoring the aspect ratio, '!' must be appended to the dimensions passed to the resize parameter.
convert <INPUT_FILE> -resize 200x200! <OUTPUT_FILE>
Some websites only support specific file extensions, therefore the convert
command provides the facility to convert the image format.
convert <INPUT_FILE> <OUTPUT_FILE>
The quality defaults to 92 if no -quality parameter is provided. In the above snippet, we converted a '.png' image file into a '.jpg' file.
The convert
command has hundreds of applications, like rotating an image, applying effects, or drawing on an image. We can refer to the manual pages via man convert
to master the image formatting tool.
In order to convert multiple files, we need a bash script that runs a loop for all images. There is an alternative for processing multiple image files, which is mogrify
that comes within the ImageMagick
package.
mogrify [OPTIONS] [FILE_LIST]
The main difference between convert
and mogrify
command is that mogrify
command applies the operations on the original image file, whereas convert
does not.
Moreover, mogrify
command supports expressions to queue in multiple files. For instance:
mogrify -quality 10 *.jpg
The applications for convert
and mogrify
are identical as they are derived from the same package.
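As another illustration, the following one-liner shrinks every PNG in the current directory to half its dimensions. This is just a sketch; remember that mogrify overwrites the original files, so work on copies:
mogrify -resize 50% *.png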
pngcrush
is a PNG (Portable Network Graphics) file optimizer. It reduces the file size of the image by passing it through various compression methods and filters.
Debian/Ubuntu users can run the following command for installation.
sudo apt-get install pngcrush
Users of other Linux distributions can install it using their standard installation commands followed by pngcrush
.
After the installation is done, we can reduce the size of PNG file by running:
pngcrush -brute <INPUT_FILE> <OUTPUT_FILE>
The '-brute'
option runs the file through 114 filter/compression methods. This extended process consumes a few seconds. Instead of applying the brute-force approach, users can select filters, levels, and strategies for optimization.
The types of filters and other properties can be learnt through the manual pages - man pngcrush
.
jpegoptim
is a JPEG (Joint Photographic Experts Group) file compressor. This command supports a percentage or a target file size as a parameter to reduce the image size.
The installation is pretty simple.
sudo apt install jpegoptim
Once the installation is finished, we can run:
jpegoptim --size=<TARGET_SIZE> <INPUT_FILE>
The jpegoptim
utility overwrites the original image, so it is advised to keep a backup of the image file. The best feature of this tool is that it accepts a target file size, which can be a life-saver for uploading images of specific sizes.
In the above figure, we compressed a 260 KB file into a 20 KB image.
The quality of the image is intact, even though there is a massive 90% reduction in size. The command supports compression on the basis of percentages too.
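For instance, to aim for roughly half the original size using the percentage form (the file name here is hypothetical):
jpegoptim --size=50% photo.jpg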
We can learn more about the command from the manual pages through - man jpegoptim
.
The trimage
GUI tool is basic drag-and-drop software. Added files are automatically compressed to the smallest possible lossless file size.
The installation is similar to the previous methods.
sudo apt install trimage
After the installation is complete, we can access it by searching “trimage” on the system. The trimage window looks like the following image:
The supported columns are:
The tool overwrites the original image. The compression is minimal because it is lossless.
GIMP (GNU Image Manipulation Program) is a good alternative for GUI-based image size reduction, but it is definitely overkill for this task.
The simplest and the most effective way to reduce file size of images in Linux is using the commands provided by the ImageMagick
package.
We hope the article was interesting as well as informative. Thank you for reading.
Shell scripts can receive input from the user in the form of arguments.
When we pass arguments to a shell script, we can use them inside the script. We can use these arguments to obtain outputs and even modify outputs, just like variables in shell scripts.
Command-line arguments are parameters that are passed to a script while executing them in the bash shell.
They are also known as positional parameters in Linux.
Command-line arguments are called positional parameters because each one is identified by its position on the command line. Understanding command-line arguments is essential for people who are learning shell scripting.
In this article, we will go over the concept of command-line arguments along with their use in a shell script.
Command-line arguments help make shell scripts interactive for the users. They help a script identify the data it needs to operate on. Hence, command-line arguments are an essential part of any practical shell scripting uses.
The bash shell has special variables reserved to point to the arguments which we pass through a shell script. Bash saves these variables numerically ($1, $2, $3, … $n)
Here, the first command-line argument in our shell script is $1, the second $2 and the third is $3. This goes on till the 9th argument. The variable $0 stores the name of the script or the command itself.
We also have some special characters which are positional parameters, but their function is closely tied to our command-line arguments.
The special character $# stores the total number of arguments. We also have $@ and $* as wildcard characters which are used to denote all the arguments. We use $$ to find the process ID of the current shell script, while $? can be used to print the exit status of the last executed command.
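As a quick illustration of $@ (a minimal sketch; the script name is hypothetical), the following loop prints every argument passed to a script on its own line:
#!/bin/sh
# args_demo.sh: print each argument on its own line
for arg in "$@"; do
    echo "Argument: $arg"
done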
Now that we have developed an understanding of command-line arguments in Linux, it's time to put this knowledge to practical use.
For this tutorial, we will go over an example to learn how to use the command-line arguments in your shell script.
First, we will create a shell script to demonstrate the working of all the reserved variables which we discussed in the previous section. Use nano or any preferred editor of your choice and copy the following.
This is the shell script which we plan to use for this purpose.
#!/bin/sh
echo "Script Name: $0"
echo "First Parameter of the script is $1"
echo "The second Parameter is $2"
echo "The complete list of arguments is $@"
echo "Total Number of Parameters: $#"
echo "The process ID is $$"
echo "Exit code for the script: $?"
Once we are done, we will save the script as PositionalParameters.sh and exit our text editor.
Now, we will open the command line on our system and run the shell script with the following arguments.
./PositionalParameters.sh learning command line arguments
The script will run with our specified arguments and use positional parameters to generate an output. If you followed the steps correctly, you should see the following screen.
The script produces the correct output by substituting the reserved variables with the appropriate arguments when we called it.
The process was run under the process ID 14974 and quit with the exit code 0.
Being able to read command-line arguments in shell scripts is an essential skill as it allows you to create scripts that can take input from the user and generate output based on a logical pathway.
With the help of command-line arguments, your scripts can vastly simplify the repetitive tasks you deal with on a daily basis and let you create your own commands, saving you both time and effort.
We hope this article was able to help you understand how to read the command line arguments in a shell script. If you have any comments, queries or suggestions, feel free to reach out to us in the comments below.
PHP-FPM (FastCGI Process Manager) is an alternative to mod_php for running PHP scripts. The main advantage of using PHP-FPM is that it uses considerably less memory and CPU compared with other methods of running PHP. The primary reason is that it daemonizes PHP, transforming it into a background process, while providing a CLI script for managing PHP requests.
Nginx doesn't know how to run a PHP script on its own. It needs a PHP module like PHP-FPM to efficiently manage PHP scripts. PHP-FPM, on the other hand, runs outside the NGINX environment by creating its own process. Therefore, when a user requests a PHP page, the NGINX server passes the request to the PHP-FPM service using FastCGI. The installation of php-fpm in Ubuntu 18.04 depends on PHP and its version. Check the documentation of the installed PHP before proceeding with installing FPM on your server. Assuming you have already installed the latest PHP 7.3, you can install FPM using the following apt-get command.
# apt-get install php7.3-fpm
The FPM service will start automatically, once the installation is over. You can verify that using the following systemd command:
# systemctl status php7.3-fpm
● php7.3-fpm.service - The PHP 7.3 FastCGI Process Manager
Loaded: loaded (/lib/systemd/system/php7.3-fpm.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2019-02-17 06:29:31 UTC; 30s ago
Docs: man:php-fpm7.3(8)
Main PID: 32210 (php-fpm7.3)
Status: "Processes active: 0, idle: 2, Requests: 0, slow: 0, Traffic: 0req/sec"
Tasks: 3 (limit: 1152)
CGroup: /system.slice/php7.3-fpm.service
├─32210 php-fpm: master process (/etc/php/7.3/fpm/php-fpm.conf)
├─32235 php-fpm: pool www
└─32236 php-fpm: pool www
The php-fpm service creates a default pool, the configuration (www.conf) for which can be found in /etc/php/7.3/fpm/pool.d
folder. You can customize the default pool as per your requirements. But it is standard practice to create separate pools to have better control over resource allocation to each FPM process. Furthermore, segregating FPM pools enables them to run independently, each with its own master process. That means each PHP application can be configured with its own cache settings using PHP-FPM. A change in one pool's configuration does not require you to restart the rest of the FPM pools. Let us create an FPM pool for running a PHP application effectively through a separate user. To start with, create a new user who will have exclusive rights over this pool:
# groupadd wordpress_user
# useradd -g wordpress_user wordpress_user
Now navigate to the FPM configuration directory and create a configuration file using your favorite text editor like vi:
# cd /etc/php/7.3/fpm/pool.d
# vi wordpress_pool.conf
[wordpress_site]
user = wordpress_user
group = wordpress_user
listen = /var/run/php7.3-fpm-wordpress-site.sock
listen.owner = www-data
listen.group = www-data
php_admin_value[disable_functions] = exec,passthru,shell_exec,system
php_admin_flag[allow_url_fopen] = off
; Choose how the process manager will control the number of child processes.
pm = dynamic
pm.max_children = 75
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 20
pm.process_idle_timeout = 10s
The above FPM configuration options and their values are described below.
Environment variables can be passed to the pool using the syntax env[PHP_FOO] = $bar. For example, adding the following options in the above configuration file will set the hostname and temporary folder location in the PHP environment:
...
env[HOSTNAME] = $HOSTNAME
env[TMP] = /tmp
...
...
Also, the process manager (pm) setting in the above pool configuration file is set to dynamic. Choose a setting that best suits your requirements. The other configuration options for the process manager are static, where a fixed number of PHP child processes is maintained, and ondemand, where child processes are spawned only when new requests arrive.
Once you are done with creating the above configuration file, restart fpm service to apply new settings:
# systemctl restart php7.3-fpm
The FPM pool will be created immediately to serve PHP pages. Remember, you can create a separate systemd service by specifying the above FPM configuration file, thereby enabling you to start/stop this pool without affecting other pools.
Now create an NGINX server block that will make use of the above FPM pool. To do that, edit your NGINX configuration file and pass the path of pool’s socket file using the option fastcgi_pass
inside the location block for PHP.
server {
listen 80;
server_name example.journaldev.com;
root /var/www/html/wordpress;
access_log /var/log/nginx/example.journaldev.com-access.log;
error_log /var/log/nginx/example.journaldev.com-error.log error;
index index.html index.htm index.php;
location / {
try_files $uri $uri/ /index.php$is_args$args;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php7.3-fpm-wordpress-site.sock;
fastcgi_index index.php;
include fastcgi.conf;
}
}
Make sure the above configuration setting is syntactically correct and restart NGINX.
# nginx -t
# systemctl restart nginx
To test if the above NGINX configuration file is indeed using the newly created FPM pool, create a php info file inside the web root. I have used /var/www/html/wordpress
as a web root in the above NGINX configuration file. Adjust this value according to your environment.
# cd /var/www/html/wordpress
# echo "<?php echo phpinfo();?>" > info.php
Once you are done with creating the PHP info page, point your favorite web browser to it. You will notice that the value of $_SERVER['USER']
and $_SERVER['HOME']
variable are pointing to wordpress_user
and /home/wordpress_user
respectively that we set in the FPM configuration file previously and thus confirms that the NGINX is serving the php pages using our desired FPM pool.
In this article, we learned how to install php-fpm and configure separate pools for different users and applications. We also learned how to configure an NGINX server block to connect to a PHP-FPM service. PHP-FPM provides reliability, security, scalability, and speed along with a lot of performance tuning options. You can now split the default PHP-FPM pool into multiple resource pools to serve different applications. This will not only enhance your server security but also enable you to allocate server resources optimally!
In this tutorial, we will explore how NGINX can be used as a reverse proxy server for a Node or an Angular application. The diagram below gives you an overview of how a reverse proxy server works, processing client requests and sending back the responses.
This tutorial uses the placeholders SUBDOMAIN.DOMAIN.TLD and PRIVATE_IP. Replace them with your own values at the appropriate places. Assuming you have already installed NGINX in your environment, let us create an example NodeJS application that will be accessed through the NGINX reverse proxy. To start with, set up a node environment on a system residing in your private network.
Before proceeding with installing NodeJS and the latest version of npm (node package manager), check whether they are already installed:
# node --version
# npm --version
If the above commands return the version of NodeJS and NPM then skip the following installation step and proceed with creating the example NodeJS application. To install NodeJS and NPM, use the following commands:
# apt-get install nodejs npm
Once installed, check the version of NodeJS and NPM again.
# node --version
# npm --version
Once the NodeJS environment is ready, create an example application using ExpressJS. To do so, create a folder for the node application and install ExpressJS.
# mkdir node_app
# cd node_app
# npm install express
Now using your favorite text editor, create app.js
and add the following content into it.
# vi app.js
const express = require('express')
const app = express()
app.get('/', (req, res) => res.send('Hello World !'))
app.listen(3000, () => console.log('Node.js app listening on port 3000.'))
Run the node application using following command:
# node app.js
Make a curl query to the port number 3000 to confirm that the application is running on localhost.
# curl localhost:3000
Hello World !
At this point, NodeJS application will be running in the upstream server. In the last step, we will configure NGINX to act as a reverse proxy for the above node application. For the time being, let us proceed with creating an angular application, the steps for which are given below:
Angular is another JavaScript framework for developing web applications using TypeScript. In general, an Angular application is accessed through the standalone server that is shipped along with it. But due to a few disadvantages of using this standalone server in a production environment, a reverse proxy is placed in front of an Angular application to serve it better.
Since Angular is a JavaScript framework, it requires Node.js version 8.9 or higher installed on the system. Therefore, before proceeding with installing the Angular CLI, quickly set up the node environment by issuing the following commands in the terminal.
# curl -sL https://deb.nodesource.com/setup_10.x | sudo bash -
# apt-get install nodejs npm
Now proceed with installing Angular CLI that helps us to create projects, generate application and library code for any angular application.
# npm install -g @angular/cli
The setup needed for Angular environment is now complete. In the next step, we will create an angular application.
Create an Angular application using following angular CLI command:
# ng new angular-app
Change to the newly created angular directory and run the web application by specifying the host name and port number:
# cd angular-app
# ng serve --host PRIVATE_IP --port 3000
Make a curl query to the port number 3000 to confirm that the angular application is running on localhost.
# curl PRIVATE_IP:3000
At this point, the angular application will be running in your upstream server. In the next step, we will configure NGINX to act as a reverse proxy for the above angular application.
Navigate to the NGINX virtual host configuration directory and create a server block that will act as a reverse proxy. Remember, the system where you installed NGINX earlier can be reached via the Internet, i.e., a public IP is attached to the system.
# cd /etc/nginx/sites-available
# vi node_or_angular_app.conf
server {
listen 80;
server_name SUBDOMAIN.DOMAIN.TLD;
location / {
proxy_pass http://PRIVATE_IP:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
The proxy_pass directive in the above configuration makes the server block a reverse proxy. All traffic destined for the domain SUBDOMAIN.DOMAIN.TLD that matches the root location block (/) will be forwarded to http://PRIVATE_IP:3000, where the node or angular application is running.
The above server block will act as a reverse proxy for either the node or the angular application. To serve both node and angular applications at the same time using the NGINX reverse proxy, just run them on two different port numbers if you intend to use the same system for both of them. It is also very much possible to use two separate upstream servers for running the node and angular applications. Further, you also need to create another NGINX server block with matching values for the server_name
and proxy_pass
directive. Recommended Read: Understanding NGINX Configuration File. Check for any syntactical error in the above server block and enable the same. Finally, reload NGINX to apply new settings.
# nginx -t
# cd /etc/nginx/sites-enabled
# ln -s ../sites-available/node_or_angular_app.conf .
# systemctl reload nginx
Now point your favorite web browser to http://SUBDOMAIN.DOMAIN.TLD
, you will be greeted with a welcome message from the Node or Angular application.
That’s all for configuring an NGINX reverse proxy for NodeJS or Angular application. You can now proceed with adding a free SSL certificate like Let’s Encrypt to secure your application!
The ls command without any options lists files and directories in a plain format, without displaying much information such as file types, permissions, or modified date and time. Syntax:
$ ls
To list files in reverse order, append the -r flag as shown below. Syntax:
$ ls -r
As you can see above, the order of the listing has changed from the last to the first in comparison to the previous image.
Using the -l flag, you can list the permissions of files and directories as well as other attributes such as folder names, file and directory sizes, and modified date and time. Syntax:
$ ls -l
As you may have noticed, the file and folder sizes displayed are not easy to decipher and make sense of at first glance. To easily identify file sizes as kilobytes (kB), megabytes (MB), or gigabytes (GB), append the -lh flag as shown below. Syntax:
$ ls -lh
You can view hidden files by appending the -a flag. Hidden files are usually system files whose names begin with a period. Syntax:
$ ls -a
To display the directory tree of files and folders use the ls -R
command as shown below. Syntax:
$ ls -R
If you wish to go ahead and further distinguish files from folders, use the -F flag, so that folders appear with a forward slash character '/' at the end. Syntax:
$ ls -F
To display the inode number of files and directories, append the -i
flag at the end of the ls command as shown below. Syntax:
$ ls -i
If you want to display the UID as well as the GID of files and directories, append the -n parameter as shown below. Syntax:
$ ls -n
Aliases are customized or modified commands in the Linux shell that are used in place of the original commands. We can create an alias for the ls command this way. Syntax:
$ alias ls="ls -l"
What this does is that it tells the system to execute the ls -l
command instead of the ls
command. Be sure to observe that the output you get when running the ls command thereafter will be as though you ran the ls -l
command. To remove the added alias, run
unalias ls
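Note that an alias created this way lasts only for the current shell session. To make it persistent, you could append it to your shell's startup file (assuming bash):
echo "alias ls='ls -l'" >> ~/.bashrc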
To add some flair to the output display based on the types of files, you may want to colorize your output to easily distinguish files, folders, and other attributes such as file and directory permissions. To achieve this, run the command below. Syntax:
ls --color
If you are a bit curious as to what version of ls you are running, execute the command below
# ls --v
ls (GNU coreutils) 8.22
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Written by Richard M. Stallman and David MacKenzie.
#
You can also execute the command ls --version
to print the ls command version.
To view more options and what you can do with ls, simply run
ls --help
Alternatively, you can view the manpages to find out more about its usage by running
man ls
That's all we had for you today. We hope that at this point, you will be more comfortable using the ls command in your day-to-day operations. Feel free to weigh in with your feedback. Thanks!
While Load Average is one of the most fundamental metrics of resource usage, the metric is pointless unless you understand what it is telling you. In this tutorial, we will help you understand what Load Average in Linux means.
Further, we will discuss some easy methods to monitor the load average for your system.
To understand the Load Average in Linux, we need to know what we define as load. In a Linux system, the load is a measure of CPU utilization at any given moment.
It refers to the number of processes which are either currently being executed by the CPU or are waiting for execution.
An idle system has a load of 0. With each process that is being executed or is on the waitlist, the load increases by 1.
On its own, the load doesn't give any useful information to the user. The load can change within a split second, because the number of processes using or waiting for CPU time doesn't remain constant. This is why we use Load Average in Linux to monitor resource usage.
Load average, as the name suggests, depicts the average load on a CPU for a set time interval. These values are the number of processes waiting for the CPU or using it in the given period.
While most people are used to the load percentages shown in Windows systems, Load Average in Linux is depicted as three different decimal values.
Have a look at the image above where it says "load average: 0.03, 0.03, 0.01".
Going left to right:
The first value is the average load over the last 1 minute.
The second value is the average load over the last 5 minutes.
The third value is the average load over the last 15 minutes.
This helps a user get an idea of how the CPU is being utilized by the processes on a system over time.
While a load of 1 can mean approximately 100% resource usage on a single-processor system, such systems are practically non-existent today. Unless your system is over a decade old, it should be running on a multi-core processor.
For a dual-core processor, a load of 1 means that the equivalent of one core was fully utilized while the other stayed idle. This translates to approximately 50% CPU usage. Similarly, it would represent 25% CPU usage for a quad-core processor.
Load Average in Linux takes into account the waiting threads and tasks along with processes being executed. Also, it is an average value instead of being an instantaneous value.
However, an approximate idea of resource usage can be determined by the ratio of Load Average over the number of cores of your processor. While it is not an exact value for the CPU utilization at any given time, it can be helpful for resource monitoring.
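As a rough sketch of this ratio, you can compare the 1-minute load average against the number of cores straight from the shell; both values come from standard Linux interfaces (nproc and /proc/loadavg):
cores=$(nproc)
load=$(cut -d ' ' -f1 /proc/loadavg)
awk -v l="$load" -v c="$cores" 'BEGIN { printf "1-min load per core: %.2f\n", l / c }'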
Now that we know what Load Average represents, we will discuss a few ways to check the Load Average in Linux. Load Average can be looked up in three common ways.
The uptime command is one of the most common methods for checking the Load Average for your system. To use the uptime command, we simply open the command line and type the following.
uptime
This displays the amount of time that our system has been up for, along with the number of active users and the Load Average for our system. The following screenshot shows what should you see when you use the uptime command on your system.
As you can see, the load average for the last minute is 0.03. For the last five minutes and fifteen minutes, the Load Average values are 0.03 and 0.01 respectively.
Another way to monitor the Load Average on your system is to utilise the top command in Linux. To do so, simply open the terminal and type this.
top
This will open the top interface in your terminal. Unlike the uptime command, this gives an in-depth view of the resource usage for your system.
The following screenshot shows what should you see when you use the top command on your system.
As you can see in the top-most line, the load average for the last minute is 0.34. For the last five minutes and fifteen minutes, the Load Average values are 0.14 and 0.405 respectively.
The glances tool is a system monitoring tool that works similarly to the top command. It gives a detailed overview of system resource usage. To use the glances tool on your system, you need to install its package using this command.
sudo apt-get install glances
Once you are done with the installation, type the following in your terminal.
glances
This will open the glances interface in your terminal. Unlike the top command, this gives the number of processor cores available along with the Load Average for your system.
The following screenshot shows what should you see when you use the glances command on your system.
As you can see in the highlighted region, the load average for the last minute is 0.14. For the last five minutes and fifteen minutes, the Load Average values are 0.12 and 0.05 respectively.
The Load Average in Linux is an essential metric to monitor the usage of system resources easily. Keeping the load average in check helps ensure that your system does not experience a crash or sluggish sessions.
We hope this tutorial was able to help you to get familiar with the concept of Load Average in Linux.
You can follow this tutorial even if you're on a different distribution. To do so, make sure you use the package manager appropriate to the distribution that you're using.
Tomcat is a Java application server designed to deploy Java Servlets and JSPs on your system. Developed by the Apache Software Foundation, it is one of the most widely used Java applications and web servers.
Tomcat was created in an effort towards making an HTTP server which was purely built on Java and allowed Java code operations.
Its open-source nature has greatly contributed to Tomcat’s popularity. In this tutorial, we attempt to guide you to install Tomcat on Linux.
To properly install Tomcat on Linux, we need Java to be installed on our system. If it isn’t already on your system, we install the OpenJDK which is the default Java development package.
For this, we need to first update our default repositories using the apt package management service. To do this, you need to open the terminal on your Ubuntu system and type the following.
sudo apt update
This command updates the Ubuntu repositories to the latest available repositories. Now, this ensures that we will get the latest version of the OpenJDK package when we install Java on our system.
Now we use the following command to install Java. For the complete steps to install Java click here.
sudo apt install default-jdk
This is what you will see on the terminal screen. Enter ‘Y’ in the command line to proceed with the operation. Once the installation is complete, we verify it by checking the version of java installed on our system using this command.
java -version
Now that we understand what Tomcat does, and have covered the prerequisites, it is time to install Tomcat on our system. To do so, you need to follow the following steps.
It is not advisable to run Tomcat under a root account. Hence, we need to create a new user under which we will run the Tomcat server on our system. We will use the following command to create our new user.
sudo useradd -r -m -U -d /opt/tomcat -s /bin/false tomcat
As you can see, we created a system user whose home directory is /opt/tomcat. This directory will be used to run the Tomcat service on our system.
Now that we have created a new user for our Tomcat server, we need to download the Tomcat package to install Tomcat on Linux.
Let’s use the wget command to download the Tomcat package from their official website.
wget -c https://downloads.apache.org/tomcat/tomcat-9/v9.0.34/bin/apache-tomcat-9.0.34.tar.gz
Once the tar archive is downloaded on our system, we need to untar the archive on our system. This can be done as follows using the tar command as shown below.
sudo tar xf apache-tomcat-9.0.34.tar.gz -C /opt/tomcat
Using this command, we have extracted the contents of the tar package into /opt/tomcat. To make updating Tomcat easy, we create a symbolic link that will point to the installation directory of Tomcat.
sudo ln -s /opt/tomcat/apache-tomcat-9.0.34 /opt/tomcat/updated
Now, if you wish to install Tomcat on Linux with a newer version in future, simply unpack the new archive and change the symbolic link so that it points to the new version.
Now we need to provide the tomcat user with access to the Tomcat installation directory. We will use the chown command to change the directory ownership.
sudo chown -R tomcat: /opt/tomcat/*
Finally, we will use the chmod command to provide all executable flags to all scripts within the bin directory.
sudo sh -c 'chmod +x /opt/tomcat/updated/bin/*.sh'
Don’t forget to make sure that the “tomcat” user and group has read and write access to all the files and folders within the /opt/tomcat/updated folder like below.
See how both the user and the group for the directories are tomcat.
Once you install Tomcat on Linux, you need to configure it before you can start using it. First, we need to create a systemd unit file to be able to run Tomcat as a service. We need to create a new unit file for this. We will open a new file named tomcat.service in the directory /etc/systemd/system using nano or your preferred editor.
sudo nano /etc/systemd/system/tomcat.service
Now enter the following in your file and save it. Note that you need to update the value of JAVA_HOME if your Java installation directory is not the same as given below.
[Unit]
Description=Apache Tomcat Web Application Container
After=network.target
[Service]
Type=forking
Environment="JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64"
Environment="CATALINA_PID=/opt/tomcat/updated/temp/tomcat.pid"
Environment="CATALINA_HOME=/opt/tomcat/updated/"
Environment="CATALINA_BASE=/opt/tomcat/updated/"
Environment="CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC"
Environment="JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom"
ExecStart=/opt/tomcat/updated/bin/startup.sh
ExecStop=/opt/tomcat/updated/bin/shutdown.sh
User=tomcat
Group=tomcat
UMask=0007
RestartSec=10
Restart=always
[Install]
WantedBy=multi-user.target
Now we reload the daemon to update the system about the new file.
sudo systemctl daemon-reload
We use the following command to start the Tomcat service on our system.
sudo systemctl start tomcat
We will use the systemctl command, shown below, to check the status of our Tomcat service. If the output looks like this, you have successfully installed Tomcat on Linux.
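The status check itself uses the unit name from the service file we created above:
sudo systemctl status tomcat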
Now we can enable the Tomcat service to run on startup using this command.
sudo systemctl enable tomcat
After you install Tomcat on Linux, you need to allow it to use the 8080 port through the firewall to be able to communicate outside your local network.
sudo ufw allow 8080/tcp
Once we install Tomcat on Linux, we need to verify our installation. To do so, simply enter the following in your browser.
http://<YourIPAddress>:8080
If your installation and configuration were successful, you should see this page.
Tomcat is a powerful tool for deploying Java Servlets and JSPs. It allows you to run Java code in a web server built purely using Java. We hope this tutorial was able to help you install Tomcat on Linux and make some basic configurations.
You can further make custom configurations to your Tomcat server to meet your preferences. If you have any feedback, queries or suggestions, feel free to reach out to us in the comments below.
Apart from this, it also has a built-in terminal and integration with major version control systems (Git, SVN, etc.) and virtualization tools like Docker and Vagrant.
This feature-rich environment is the reason, PyCharm has quickly become one of the most popular IDE among developers. Since a lot of developers use Linux, we will take a look at how to install PyCharm on Linux.
There are two major ways to install Pycharm on Linux. The first one is using the official tar package released by JetBrains and the other is through a Snap package.
Snaps are app packages developed by Canonical. They are marketed as universal packages and are supported by all major distributions including Ubuntu, Linux Mint, Debian, Arch, Fedora, and Manjaro. For a full list of supported distributions, refer here.
To install packages through snap, we first need to have snapd on your system. If you don’t have snap installed on your system, refer here to learn how to download snap.
Now to install PyCharm, run the following command:
sudo snap install pycharm-community --classic     # for the free version
sudo snap install pycharm-professional --classic   # for the paid version
This method simply requires downloading and unpacking a tarball and can be used on ANY Linux distribution.
Step 1: Download Tarball
Go to the Pycharm Download page and download the package of your choice. In this tutorial, I will be installing the Community (Free) package.
Step 2: Extract the Tarball
After you have downloaded the tar package, go ahead and unpack it using the tar command.
tar -xvzf /path/to/pycharm/tarball
Flags explained:
-x extracts the archive.
-v lists the files verbosely as they are processed.
-z filters the archive through gzip.
-f specifies the archive file to operate on.
Step 3: Make PyCharm executable
With tarballs, we don't have to install anything. Instead, we just have to extract the tarball and make the shell script executable by giving it execute permissions.
To do this, go to the extracted folder (it must be in the same folder as the tarball), enter its bin folder and use chmod on the PyCharm shell file.
cd pycharm-community-2021.3.2/bin/
chmod u+x pycharm.sh
./pycharm.sh
If you have installed PyCharm through snap, you can launch it from the start menu or by typing pycharm-community (or pycharm-professional) in the terminal. If you have installed it through the tarball, simply go to the bin folder of the extracted pycharm folder and execute the pycharm.sh file by typing ./pycharm.sh
in the terminal.
When you start up PyCharm for the first time, you will be prompted with terms and conditions. After accepting the Terms and Conditions, PyCharm will ask you whether you want to send “Anonymous Statistics” or not.
After accepting the necessary terms, you can start your first project in Pycharm or you can open an already existing project. Other than this you can also open a project from VCS such as Git.
In this article, we learned how to install PyCharm, which is an integrated development environment widely used by programmers around the world. It can be installed either using the tar file or the snap package manager. To learn more about PyCharm, check out the documentation here.
Anbox is short for Android in a box and it is exactly what it sounds like! Anbox is a free and open-source environment that enables you to run Android applications on your Linux distribution.
It follows a container-based approach to run the android operating system on Linux.
Here's a quick summary of the steps to install Anbox on your Linux Mint:
You can install Anbox on your system from the Snap Store. As of now, Snap is the only way to get Anbox. The organization does not officially support any other distribution method of Anbox at the moment.
In case you haven’t heard about Snaps, don’t worry. Snaps are just software packages that are simple to create and install.
Snap is available for the following releases of Mint :
To install Snap on Linux Mint 20, you first need to remove /etc/apt/preferences.d/nosnap.pref, because this file blocks the installation of snap.
This is done with the commands:
sudo rm /etc/apt/preferences.d/nosnap.pref
sudo apt update
To install snapd on your system use the apt command as shown below:
$ sudo apt install snapd
Alternatively, you can download it from the Software Manager application. Search for snapd and click Install.
Before installing Anbox, you need to install two kernel modules. This is necessary to support the mandatory kernel subsystems ashmem and binder for the Android container.
You can do this with the following commands:
sudo add-apt-repository ppa:morphis/anbox-support
sudo apt update
sudo apt install linux-headers-generic anbox-modules-dkms
This will install anbox-modules-dkms package on your system.
After this, you need to manually load the kernel modules. This loading is a one-time thing. You can do this with the following commands:
sudo modprobe ashmem_linux
sudo modprobe binder_linux
This will add two new nodes in your system.
/dev/ashmem
/dev/binder
Now after installing Snaps and the necessary modules on your system, you can install Anbox on your system using :
sudo snap install --devmode --beta anbox
To update to a newer version use the command:
sudo snap refresh --beta --devmode anbox
To get information about the anbox snap use the command:
snap info anbox
If you need to uninstall Anbox, use the command :
$ snap remove anbox
After uninstalling anbox, you can also uninstall the kernel modules using :
$ sudo apt install ppa-purge
$ sudo ppa-purge ppa:morphis/anbox-support
Running these commands will successfully uninstall Anbox from your system.
In this tutorial, you saw how to install Anbox on your Linux Mint system. Do let us know if you have any questions in the comments below.
Packages are used when you want to install a new program or service on your system. All the packages on a system are stored in a local 'repository'.
This repository can be accessed by a package management service whenever required. Let’s talk about one of those package management utilities, the dpkg command in Linux today.
Essentially, the man page describes it like this: “dpkg is a tool to install, build, remove and manage Debian packages.”
We use the dpkg command to interact with packages on our system. It is controlled fully with the help of command-line parameters and the first parameter is referred to as the action parameter that is used to direct what to do. This parameter may or may not be followed by any other parameter.
Later, a new tool named aptitude was designed to provide a more user-friendly, interactive front-end for the users to manage packages without the complexity of the dpkg command. It interacts with the dpkg interface on behalf of the user. Now, let’s try and understand the dpkg command in Linux.
Here’s what the basic syntax of the dpkg command looks like:
dpkg [options] [.deb package name]
The dpkg command provides a long list of options to customize how we manage packages. Here is a list of some of the most popular dpkg options.
Option | Function |
-i OR --install | Install a package using the dpkg command. The command will extract all control files for the specified package, remove any previously installed older instance of the package, and install the new package on our system. |
-r OR --remove | Remove an installed package from our system. It removes every file belonging to the specific package except the configuration files. This can be seen as the uninstallation option. |
-P OR --purge | An alternative way to remove an installed package from our system. It completely removes every file belonging to the specific package, including the configuration files. This can be seen as the 'complete uninstallation' option. |
--update-avail | Update the information dpkg keeps about available packages. The existing records are replaced with the information from a new Packages file. |
--merge-avail | Merge the information of the dpkg command about available packages in its repositories with previously available information. It is usually run right after the previous command. |
--help | Display the help page for the dpkg command and exit. |
These are some of the most commonly used options for the dpkg command and you can explore more by displaying the help options in your terminal.
Let us explore the common uses of the dpkg command. As the command works the same for both Debian and Ubuntu systems, we will only mention Ubuntu in this tutorial from now on.
The most basic use of the dpkg command in Ubuntu is a package installation. We can install a deb package in Ubuntu or Debian using the dpkg -i
command option.
Here’s how you’d install a package.
sudo dpkg -i [package name]
We’re installing the VLC player on our Ubuntu system. Have a look at the below screenshot for what the installation looks like on screen.
You can also install multiple packages at the same time by specifying the package names separated by spaces.
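For example (the file names here are hypothetical):
sudo dpkg -i package1.deb package2.deb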
When you no longer need a program or service on your system, there is no use keeping it.
The dpkg command has got us covered here as well.
We can uninstall a program or service from our system using the dpkg -r
option.
Let’s remove the VLC player that we installed for this demonstration.
sudo dpkg -r [package name]
Look at the below screenshot to see how dpkg triggers changes for all the dependent menus, desktop icons, etc similar to the apt command.
The dpkg repository stores all the packages available for installation on your Ubuntu or Debian Linux distribution.
However, as these packages are stored locally you can often end up having old versions of packages for a program when newer versions have already been released. This causes a need for a method to update your repositories.
Guess what? The dpkg --update-avail
option has got you covered.
It updates dpkg's record of which packages are available so that your local database reflects the latest package index.
Let’s update our local repositories to the latest version:
sudo dpkg --update-avail
That brings us to the end of our topic for the day. This is all you’d need for the most part when using the dpkg command in Linux. Most regular users would not need more than these three options for the command. However, if you’re a power user, you can run man dpkg
and get complete details of everything the command can do.
In this tutorial, we will utilize the fdisk command to create a disk partition. The fdisk utility is a text-based command-line utility for viewing and managing disk partitions on a Linux system.
Before we create a partition on our system, we need to list all the partitions on our system. This is essential as we need to choose a disk before we partition it.
To view all the partitions currently on your system, we use the following command.
sudo fdisk -l
You might be prompted to enter your password again to verify your sudo privileges. Here we called the fdisk command with the -l flag to list the partitions. You should get an output similar to the following.
Now, we choose one disk from this list to partition. For this tutorial, we will choose one of the disks shown in the list. To create partitions, we use the 'command mode' of the fdisk command. To enter the command mode, we use this command in our terminal.
sudo fdisk [disk path]
If you see an output similar to this, you have successfully entered the command mode.
Once we enter the command mode, many beginners might get confused due to the unfamiliar interface. The command mode of fdisk uses single character commands to specify the desired action for the system. You can get a list of available commands by pressing ‘m’, as shown below.
Our main objective here is to create a partition. To create a new partition, we use the command ‘n’. This will prompt you to specify the type of partition which you wish to create.
If you wish to create a logical partition, choose ‘l’. Alternatively, you can choose ‘p’ for a primary partition. For this tutorial, we will create a primary partition.
Now, we will be asked to specify the starting sector for our new partition. Press ENTER to choose the first available free sector on your system. Next, you’ll be prompted to select the last sector for the partition.
Either press ENTER to use up all the available space after your first sector or specify the size for your partition.
As shown in the screenshot above, we chose to create a 10 MB partition for this demonstration. Here ‘M’ specifies the unit as megabytes. You can use ‘G’ for gigabytes.
If you don’t specify a unit, the unit will be assumed to be sectors. Hence +1024 will mean 1024 sectors from the starting sector.
Once we create a partition, Linux sets the default partition type as 'Linux'. However, suppose we wish our partition type to be 'Linux LVM'. To change the ID for our partition, we will use the command 't'.
Now, we get prompted to enter the HEX code for our desired partition ID. We don't remember the HEX codes for the partition types off the top of our heads.
So we will take the help of the ‘L’ command to list all the HEX codes for the available partition types. This list should look as shown below.
We see that the HEX code 8e is the partition ID for the ‘Linux LVM’ partition type. Hence, we will enter the required HEX code. The following output gives us the confirmation that our partition ID has been changed successfully.
Now that we have created a new partition and given it our desired partition ID, we need to confirm our changes. All the changes made until this point are saved in the memory, waiting to be written on our disk.
We use the command ‘p’ to see the detailed list of partitions for our current disk as seen in the screenshot below.
This allows us to confirm all the changes we have done to the disk before making them permanent. Once you have verified the changes, press ‘w’ to write the new partition on your disk.
If you don’t wish to permanently write your new partition to the disk, you can enter the command ‘q’. This will exit the fdisk command mode without saving any changes.
Once you create a new partition, it is advisable to format your new partition using the appropriate mkfs command.
This is because using a new partition without formatting it may cause issues in the future. To see the list of all available mkfs commands, we enter the following in our command line.
sudo mkfs
This gives us a list of available mkfs commands. If we wish to format a partition on our current disk with the ext4 file system, we use this command.
sudo mkfs.ext4 [partition path]
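For instance, if the new partition showed up as /dev/sdb1 (a hypothetical device name; double-check yours with sudo fdisk -l before formatting), you would run:
sudo mkfs.ext4 /dev/sdb1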
That's it! You now know how to create a partition in Linux using the fdisk command. Partitioning lets you reserve space for specific tasks, and in case one partition gets corrupted, you don't need to worry about the data on other partitions.
As each partition is treated as a separate disk, data on other partitions remains safe. The fdisk utility is a powerful tool for the task of managing disk partitions, but it can often be confusing for new users.
We hope this tutorial was able to help you understand how to create a new disk partition in Linux using the fdisk utility. If you have any feedback, queries or suggestions, feel free to reach out to us in the comments below.
First things first, we have to install the atop command on the system. Debian/Ubuntu users can do so by:
sudo apt install atop
Other Linux users can use their standard package manager, followed by the 'atop'
keyword.
This command can display plenty of sensitive, system-level information. To make sure none of it is hidden from us, we can get elevated access using 'sudo su'
or 'sudo -s'
. We have complete documentation on sudo.
To display all the process-level use of the system’s resources, we can simply run 'atop'
in the terminal.
atop
As we can see, the whole layout is divided into two panels. The upper panel provides the cumulative use of the system's resources, whereas the bottom one displays per-process information. Let's look at each of these in detail.
Each entry in this view focuses on a particular system resource:
'sys' (system) and 'user': the CPU time consumed by kernel-mode and user-mode processes respectively.
'#proc': the total number of processes.
'#trun': the number of threads currently running.
'#tslpi': the number of threads that are currently sleeping and interruptible.
'#tslpu': the number of threads that are currently sleeping and uninterruptible.
'#exit': the number of processes that exited during the last interval.
'irq': the CPU time spent handling interrupt requests.
'guest': the guest percentage, which is the CPU time spent on other virtual machines.
On multi-core machines, 'atop' displays the above statistics for each core independently.
'csw': the number of context switches.
'intr': the number of interrupts.
MEM - Memory Utilization
The total physical memory supported.
The memory currently free.
The current cache memory.
'buff', as in "buffer", is the amount of memory consumed by filesystem metadata.
The sum of memory for the kernel's memory allocations, shown as 'slab'.
The amount of shared memory.
SWP - Swap Memory.
'transport' signifies the Transport layer in networking, which deals with the data protocols:
'tcpi': the number of TCP segments received.
'tcpo': the number of TCP segments transmitted.
'udpi' (UDP in) and 'udpo' (UDP out): the equivalent counters for UDP datagrams.
'tcpao': the number of active TCP open connections.
'tcppo': the number of passive TCP connections, but still open.
'tcprs': the number of TCP retransmissions.
'udpie': the number of UDP input errors.
'network' signifies the Network layer, which deals with Internet Protocols, IPv4 and IPv6 combined:
'ipi': the number of IP packets received.
'ipo': the number of IP packets transmitted.
'ipfrw': the number of IP packets forwarded.
'deliv': the number of IP packets delivered to higher-level protocols.
The per-interface line (here, 'wlp19s0') shows:
'pcki' and 'pcko': the packets received and transmitted on the interface.
'sp': the speed of the interface.
'si' and 'so': the data rates in and out.
'erri' and 'erro': the number of receive and transmit errors.
'drpi' and 'drpo': the number of packets dropped on input and output.
This concludes the explanation of the top panel of the atop command.
It is worth noticing that the values in 'atop'
command keep updating after certain time intervals.
The generic output of the 'atop'
command displays the following details for each process entry:
In this generic output, the processes are sorted on the basis of percentage CPU utilization. As we can see, in this particular output, we get a small amount of information for every type of system resource.
Let us try to study process-level information for each type of system resource.
The 'atop'
command provides the opportunity to study the memory consumption for each process running in the system. We can do so by running:
atop -m
As we can see, the top panel remains constant even if we added the memory option, '-m'
. Let us now understand the columns for each process entry.
The processes are sorted with respect to the 'MEM'
column.
Since 'atop' is an interactive command-line utility, we can alter the columns from within it. All we have to do is type the specific option key while it is displaying information. For instance, after running 'atop' in the terminal, we can switch to the memory-specific output by just typing 'm'.
To extract information related to disk utilization, we can use the '-d' option along with the 'atop' command.
atop -d
There is not a lot of stuff to notice in the disk-specific output. Some of the key findings are:
It must be noted that the processes are sorted on the basis of the 'DSK' column.
This gives us the commands that are running in the background as processes in a command-line output format.
atop -c
If you copy-paste the lines under the command line column, you can re-run the same process. This output tells us exactly what command was run in the background to initiate the process.
Instead of just inspecting process information, the atop command provides the ability to check for thread-specific resource utilization. To access this output, we can either run:
atop -y
or just press the 'y' key when the command is already displaying system resource information.
It is clear that none of the system resource columns have changed. All that has been added is the thread count of their respective process.
There are numerous kinds of information that can be extracted using the 'atop' command. Some of the useful ones are:
Using the '-v'
option, we can get process characteristics.
atop -v
To view resource usage accumulated per user, we can combine the '-a' (all processes) and '-u' (per-user) options:
atop -au
This specific kind of information comes under the scheduling characteristics of a process. It can be accessed by using the '-s' option.
atop -s
There are certain 'atop'
command tricks that might be useful:
- Pause the 'atop' screen using the 'z' key.
- Change the refresh interval using the 'i' key, followed by the number of seconds we wish to change it to.
- Refresh the output immediately using the 't' key.
- Quit the command using the 'q' key.

We know that the 'atop' command can be a lot to take in for any Linux user. It takes patience and perseverance to learn this brilliant command. For any queries, feel free to ping us down in the comments section.
If you already have a basic understanding of any programming language, you know what arrays are. But for the uninitiated, let’s go over the basics of arrays and learn how to work with them.
Variables store single data elements. Arrays, on the other hand, can store a virtually unlimited number of data elements. When working with a large amount of data, variables can prove to be very inefficient and it’s very helpful to get hands-on with arrays.
Let’s learn how to create arrays in shell scripts.
There are two types of arrays that we can work with in shell scripts.
The default array that’s created is an indexed array. If you specify the index names, it becomes an associative array and the elements can be accessed using the index names instead of numbers.
Declaring Arrays:
root@ubuntu:~# declare -A assoc_array
root@ubuntu:~# assoc_array[key]=value
OR
root@ubuntu:~# declare -a indexed_array
root@ubuntu:~# indexed_array[0]=value
Notice the uppercase and lowercase letter a. Uppercase A
is used to declare an associative array while lowercase a
is used to declare an indexed array.
The declare keyword is used to explicitly declare arrays, but for indexed arrays you do not really need it: simply initializing the values creates the array. Note that associative arrays are the exception; in bash they must be declared with declare -A before use, as shown in the sketch below.
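Here is a minimal sketch (the variable names are illustrative):

# An indexed array springs into existence on first assignment
fruits=("apple" "banana" "cherry")
echo ${fruits[1]}        # prints: banana

# An associative array still needs an explicit declaration
declare -A capitals
capitals[France]="Paris"
echo ${capitals[France]} # prints: Paris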
Now that you know how to create arrays, let’s learn how to work with arrays. Since these are collections of data elements, we can work with loops and arrays at the same time to extract the required data points.
Since we know that each data point is being indexed individually, we can access all the array elements by specifying the array index as shown below:
assoc_array[element1]="Hello World"
echo ${assoc_array[element1]}
Similarly, let’s access some indexed array elements. We can specify all the elements of an indexed array by delimiting them with spaces, because the index is automatically generated for each of those elements.
index_array=(1 2 3 4 5 6)
echo ${index_array[0]}
As you can see, the first element is automatically printed based on index 0.
This is going to be an easy task if you know for loops already. If you don’t, we’ll cover them in a future tutorial. We’ll make use of while or for loops in shell scripts to work through the array elements. Copy the script below and save it as <filename>.sh
#!/bin/bash
index_array=(1 2 3 4 5 6 7 8 9 0)
for i in ${index_array[@]}
do
echo $i
done
The above script will output the following:
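1
2
3
4
5
6
7
8
9
0

Each element is printed on its own line, in the order it was inserted.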
Now you might have noticed the index_array[@] above; if you're wondering what the @ symbol is for, we're going to go over it right now.
Now that you learned how to access elements individually and using for loops, let’s learn the different operations that are available by default for arrays.
We learned how to access elements by providing the index or the key of the array. But if we want to print all the elements at the same time or work with all the elements, we can use another operator which is the [@]
symbol.
As you noticed in the example above, I used this symbol when I wanted to loop through all the array elements using the for loop.
echo ${assoc_array[@]}
The above will print all the elements that are stored within the assoc array.
Similar to the @
symbol above, we have the #
symbol which can be prefixed to an array name to provide us the count of the elements stored in the array. Let’s see how it works.
echo ${#index_array[@]}
If you want to count the number of characters used for a particular element, we can simply replace the @
symbol with the index.
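For example, to count the characters in the element at index 0:

echo ${#index_array[0]}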
We know how to add array elements and print them too. Let’s learn how to delete specific elements. For this purpose, we’ll use the unset
keyword.
unset index_array[1]
Replace the array name and the index in the above code example and you’ve removed the array element that you desire. Pretty simple, isn’t it?
Shell scripts are pretty vast: with the right person writing the script, they can replace just about any function you perform on the terminal. Some additional functionality of arrays in shell scripts includes working with regex (regular expressions). We can use various regular expressions to manipulate array elements within shell scripts.
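Here is a minimal sketch of that idea; it keeps only the elements of an indexed array that match a digits-only regex (the array contents are illustrative):

#!/bin/bash
mixed_array=(1 2 abc 4 xyz 6)
filtered=()
for el in "${mixed_array[@]}"
do
    # =~ matches the element against a regular expression
    [[ $el =~ ^[0-9]+$ ]] && filtered+=("$el")
done
echo "${filtered[@]}"    # prints: 1 2 4 6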
For now, we hope you have a good understanding of creating and working with arrays and will be able to use arrays in your scripting. Comment below to let us know what you think, and if you have any questions about this topic.
A vanilla installation leaves you with nothing more than a black screen that is yours to customize. In this module, we’ll be walking through the essential things to do after installing Arch Linux.
First things first, update the system with the pacman command:
$ sudo pacman -Syyu
Now we can go ahead and install packages and other application on our system!
To get a GUI environment, first we need to install a Display Server. The go-to option is to install xorg, which is one of the oldest and the most popular display servers out there.
$ sudo pacman -S xorg
Next up, we would need a Desktop Environment for our distro. Popular choices include Xfce, KDE Plasma, GNOME, Cinnamon, and MATE.
To install Xfce4:
$ sudo pacman -S xfce4 xfce4-goodies
To install KDE Plasma:
$ sudo pacman -S plasma
To install Gnome:
$ sudo pacman -S gnome gnome-extra
To install Cinnamon:
$ sudo pacman -S cinnamon nemo-fileroller
To install MATE:
$ sudo pacman -S mate mate-extra
Next up, we would need a Display Manager, which will enable us to log in to our Desktop Environments. Popular choices include LightDM, LXDM, and SDDM.
To install LightDM:
$ sudo pacman -S lightdm lightdm-gtk-greeter lightdm-gtk-greeter-settings
Enable lightdm with:
$ sudo systemctl enable lightdm
To install LXDM:
$ sudo pacman -S lxdm
Enable LXDM with:
$ sudo systemctl enable lxdm.service
To install SDDM:
$ sudo pacman -S sddm
Enable SDDM with:
$ sudo systemctl enable sddm
One of the main reasons to use Arch Linux is the Arch User Repository (AUR), which has a vast array of packages and applications. However, we cannot fetch these packages directly using pacman. To fetch packages from the AUR we need special programs called AUR helpers. There are many such helpers available, but the one we recommend is paru.
To install Paru:
$ sudo pacman -S base-devel git --needed
$ git clone https://aur.archlinux.org/paru.git
$ cd paru
$ makepkg -si
Now we can fetch packages from AUR with:
$ paru -S <PACKAGE-NAME>
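Since paru mirrors pacman's flags, you can also search the AUR before installing; this is a handy sanity check:

$ paru -Ss <KEYWORD>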
It is considered good practice to have multiple kernels at your disposal, just in case the main kernel runs into any issues.
The popular kernels apart from the mainline Linux kernel are the LTS, Hardened, and Zen kernels.
To install the LTS kernel:
$ sudo pacman -S linux-lts linux-lts-headers
To install the Hardened kernel:
$ sudo pacman -S linux-hardened linux-hardened-headers
To install the Zen kernel:
$ sudo pacman -S linux-zen linux-zen-headers
Processor manufacturers release stability and security updates to the processor microcode. These updates provide bug fixes that can be critical to the stability of your system; without them, you may experience spurious crashes or unexpected system halts that can be difficult to track down. Installing the microcode updates right after installing Arch is recommended for the sake of stability.
For Intel Processors:
$ sudo pacman -S intel-ucode
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
For AMD Processors:
$ sudo pacman -S linux-firmware
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
In order to have faster updates, you can rank your mirrors according to their speed. To do so, first back up your current mirrorlist.
# mv /etc/pacman.d/mirrorlist /etc/pacman.d/mirrorlist.bak
Next up, rank all mirrors based on their speed (the rankmirrors script is provided by the pacman-contrib package):
# rankmirrors /etc/pacman.d/mirrorlist.bak > /etc/pacman.d/mirrorlist
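Ranking every mirror can take a while. Assuming pacman-contrib's rankmirrors, you can limit the test to the N fastest mirrors with the -n option; for example, to keep the 6 fastest:

# rankmirrors -n 6 /etc/pacman.d/mirrorlist.bak > /etc/pacman.d/mirrorlist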
Thus, in this module we covered the essential things to do after an Arch install. There’s still a lot more, especially when it comes to installing the essential applications, but we’ll leave it to the reader to decide which applications they want to work with!
In Linux, processes can be of two types:
- Foreground processes: depend on the user for input; also referred to as interactive processes.
- Background processes: run independently of the user; referred to as non-interactive or automatic processes.
A process in Linux can go through different states after it’s created and before it’s terminated. These states are:
Running
Sleeping
Stopped
Zombie
A process in the running state is either running or ready to run.
The process is in a sleeping state when it is waiting for a resource to be available.
A process in interruptible sleep will wake up to handle signals, whereas a process in uninterruptible sleep will not.
A process enters a stopped state when it receives a stop signal.
The zombie state is when a process is dead but its entry is still present in the process table.
There are two commands available in Linux to track running processes: top and ps.
To track the running processes on your machine you can use the top command.
$ top
The top command displays a list of processes that are running in real time, along with their memory and CPU usage. Let’s understand the output a little better:
You can use the up/down arrow keys to navigate up and down through the list. To quit press q. To kill a process, highlight the process with the up/down arrow keys and press ‘k’.
Alternatively, you can also use the kill command, which we will see later.
The ps command is short for ‘Process Status’. It displays the currently running processes. However, unlike the top command, the output generated is not in real time.
$ ps
The terminology is as follows:
PID | process ID |
TTY | terminal type |
TIME | total time the process has been running |
CMD | name of the command that launches the process |
To get more information using the ps command, use:
$ ps -u
Here, ps prints additional user-oriented columns for each process, such as CPU and memory usage.
While the ps command by default only displays the processes running in the current terminal session, you can also use it to list all processes on the system.
$ ps -A
This command lists all processes on the system, including those not attached to the current terminal.
To stop a process in Linux, use the 'kill' command. The kill command sends a signal to the process.
There are different types of signals that you can send. However, the most common one is ‘kill -9’, which sends ‘SIGKILL’.
You can list all the signals using:
$ kill -L
The default signal is 15, which is SIGTERM. This means that if you just use the kill command without specifying a signal, it sends the SIGTERM signal.
The syntax for killing a process is:
$ kill [pid]
Alternatively, you can also use:
$ kill -9 [pid]
This command will send a ‘SIGKILL’ signal to the process. This should be used in case the process ignores a normal kill request.
In Linux, you can prioritize between processes. The priority value for a process is called the ‘niceness’ value, which can range from -20 (highest priority) to 19 (lowest priority), with 0 as the default.
The fourth column in the output of the top command is the column for the niceness value.
To start a process and give it a nice value other than the default one, use:
$ nice -n [value] [process name]
To change the nice value of a process that is already running, use:
renice [value] -p [PID]
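For example, assuming a running process with PID 1234, to lower its priority to a niceness of 5:

renice 5 -p 1234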
This tutorial covered process management in Linux. Mainly the practical aspects of process management were covered. In theory, process management is a vast topic and covering it in its entirety is out of scope for this tutorial.
TestDisk was created by CGSecurity to recover deleted partitions. PhotoRec, on the other hand, was created to recover media files that were deleted from SD cards and other removable media; hence the name “PhotoRec”, which is short for “Photo Recovery”. That’s not to say PhotoRec cannot be used for other file types; it certainly can.
Before we begin, we need to install PhotoRec on our Linux system. It comes packaged with the testdisk utility and not as a separate package.
To install PhotoRec, run the below command:
sudo apt -y install testdisk
Once the setup is complete, you can run the PhotoRec utility using the command below:
sudo photorec
For this demonstration, I’ve created a random image file and deleted it. Let’s go ahead and recover this file.
Let’s fire up PhotoRec in our terminal. To make things easy, navigate to the directory that you want to run the recovery on prior to running the command.
sudo photorec
When you’ve started PhotoRec, select the hard drive that you want to run the restore operation on and hit the enter key.
The next screen will ask you to select the partition that you want to run the recovery process on.
Before you proceed, make sure you select the file type from the file options menu which you can access on the partition selection screen.
Since we’re only looking for our JPG file, I’ve selected just that extension. Anything else is unnecessary and will just consume more time. Select the file type that you’re looking for and proceed.
Next, select the partition type, which in our case is ext4.
Now select if you want the utility to only look at free sectors or the entire drive.
You might have noticed, when I ran the command, I was in the ~/Desktop directory.
This is where the command will start looking by default, unless you navigate to a specific folder on the next screen.
Once you’ve finalized the folder you want to start looking into, press the letter C and the program will begin searching for files.
Great! So we’re all set to let PhotoRec restore deleted files for us. It may take some time depending on how many file types you’ve selected.
PhotoRec saves the recovered files in a folder named recup_dir as the restore progresses. You can access the files even while the recovery is in progress.
Great, we now have a list of all the files that we deleted previously. You can look for the file that you want here since the filenames aren’t restored by PhotoRec.
Noticed how saving a file on the hard disk takes time, but deleting is almost instantaneous? Let’s understand that first.
When you store data on your hard drive, the data is stored in blocks. Each block contains a piece of the data. The first block usually contains the metadata for the file in question. Each block of data is written one at a time at the speed of the hard drive.
But when we delete a file, only the first block, which contains the metadata, is deleted. The operating system can no longer detect the file because its metadata is lost, and it therefore considers the blocks free for writing new data.
This is where recovery tools come in. Since only the metadata is lost, the job of these tools is to make the metadata available to the operating system for reading again.
They read the hard drive sectors one by one, block by block, and find correlated blocks. Once all the correlated blocks are found, the recovery utilities remake the metadata.
And that’s how you are able to recover a deleted file.
Like other file recovery utilities, PhotoRec scans the data sectors on the hard drive to find the data size. Once it finds the data size, and provided the data is intact (not fragmented or overwritten), PhotoRec begins the data recovery process by looking for adjacent data blocks and recreating the metadata for them.
Since the utility can’t search for a specific file, it will return all the files that are found and save it in a folder. You can then sort through the files and restore the one required.
At the end of the process, all the files that were still lying around on your hard drive will be available for you to restore.
I hope that you’ve been able to use PhotoRec to recover deleted files on your Linux system. There are a lot of other utilities that you can also try if PhotoRec didn’t work for you.
Here’s a list of top 20 data recovery tools for Linux. I’m sure you’ll find one that suits your needs best!
Let’s go over the steps to install Google Chrome, which is Google’s version of the original open-source Chromium browser. Since Google Chrome isn’t natively available in the package repositories, we need to add their Linux repos and install the package from there.
Before we proceed, install Google’s Linux package signing key. This key will automatically configure the repository settings necessary to keep your Google Linux applications up to date.
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
For installing Chrome you need to add Chrome repository to your system source. You can do this with the command:
sudo add-apt-repository "deb http://dl.google.com/linux/chrome/deb/ stable main"
You may also add the repository manually by editing your /etc/apt/sources.list file.
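For reference, the repository entry that the add-apt-repository command above writes looks like this (the [arch=amd64] qualifier is optional but commonly added):

deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main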
After you add the Chrome repository in the last step you need to do an apt-update. The command for doing that is:
sudo apt update
After going through the commands above, you are finally ready to install Chrome. The command for doing that is:
sudo apt install google-chrome-stable
This command installs the stable version of Chrome.
While installing you will be prompted to grant permission to proceed with the installation. Press ‘y’ to continue.
That is it! Now you can run Google Chrome by typing in:
google-chrome
Or you can use the GUI to go to your applications and find Google Chrome there.
To uninstall Google Chrome, use the command:
sudo apt remove google-chrome-stable
This will successfully remove Google Chrome from your system.
Chromium is the open-source browser that Chrome is based on. It is present by default in the Linux repositories, so you won’t need to add any repository explicitly.
To install Chromium you just need to run the command :
sudo apt install chromium-browser
One command and Chromium is ready to go!
In this tutorial, we saw how Chrome can be installed on Linux systems. Apart from that, we also saw Chromium, the open-source counterpart of Chrome that is readily available in the Linux repositories.
In this tutorial, I’ll be using an Ubuntu server to work with, but even if you are on any other distribution, you can follow the same steps. The only thing that will be different is the package manager used for installation.
The testdisk package is available on all the major Linux distributions and can be easily downloaded with the use of the default package manager. Here, I’ve listed down the distro-specific commands to install testdisk on Linux.
Install TestDisk on Ubuntu/Debian
sudo apt update
sudo apt -y install testdisk
We’re using apt instead of apt-get, since apt is the newer package-management front end for Ubuntu/Debian.
Install TestDisk on Red Hat and CentOS 7
yum install epel-release
yum update
yum install testdisk
Install TestDisk on Red Hat and CentOS 8
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
yum update
yum install testdisk
Note that for Red Hat and CentOS, you also need to enable/install the EPEL repository first, as shown above. The EPEL repository is an additional package repository that provides easy access to install packages for commonly used software.
To know more about the EPEL repository, visit the official page.
Install TestDisk on Arch Linux
sudo pacman -S testdisk
Install TestDisk on Fedora
sudo dnf install testdisk
Now that you have the testdisk utility installed, it’s time to use it to recover our deleted files or partitions.
TestDisk works with almost all of the major partition types, so you usually won’t need to check yours first. But if you’re unsure, enter one of these commands:
stat -f <partition>
df -T
fdisk -l
Either of the commands above will give you the filesystem type information.
In your terminal, simply enter the command testdisk
to run the utility and you’ll be greeted with the below prompt. You can select the appropriate disk drive that you want to recover files on.
If it’s the first time you’re running this utility, it will give you an option to create a log file on the welcome screen. You can select Create and just move ahead with the defaults.
The next screen asks you to select the disk drive/partition:
Once you’ve selected the right partition, you will be asked to select the partition type.
It should auto-select the correct partition type, but if it doesn’t, make sure you select the correct type.
Once that’s done, you’ll be given a menu of options out of which we need to go ahead with “Analyse” to search for lost data.
You can go with “Quick Search” or “Deeper Search” as it fits your needs and let the search run until it has scanned all the inodes.
With the option selected, you’ll be asked which specific partition you want to scan.
Select the correct partition, and let the utility scan the entire drive. Within some time, you’ll get the list of files within the partition. When the scanning is in progress, you’ll see a screen similar to the one below.
Now, once the scan is complete, it provides you with the option to select the partition whose files you want to browse. All the files highlighted in red (the exact color or text style can also be influenced by your terminal configuration) are the deleted files that the TestDisk utility has recovered.
To restore those files, simply press the letter "C"
and it will allow you to copy that file and paste it in some other directory that you want to restore it to.
Well, there you have it. You’ve learned how to recover deleted files in Linux! Go ahead and explore this utility more on a virtual machine to get a hang of it before using it in real-life situations so you know exactly how to work with it on an advanced scale.
We hope you’ve understood the use of the testdisk utility in Linux and know how to use it now. If you have any questions, let us know in the comments below.
Grep, short for “global regular expression print”, is a command used for searching and matching text patterns in files using regular expressions. Furthermore, the command comes pre-installed in every Linux distribution. In this guide, we will look at the most common grep command usages, along with popular use cases.
The grep command can be used to find or search for a regular expression or a string in a text file. To demonstrate this, let’s create a text file welcome.txt and add some content as shown.
Welcome to Linux !
Linux is a free and opensource Operating system that is mostly used by
developers and in production servers for hosting crucial components such as web
and database servers. Linux has also made a name for itself in PCs.
Beginners looking to experiment with Linux can get started with friendlier linux
distributions such as Ubuntu, Mint, Fedora and Elementary OS.
Great! Now we are ready to perform a few grep commands and manipulate the output to get the desired results. To search for a string in a file, run the command below.
Syntax:
$ grep "string" filename
OR
$ cat filename | grep "string"
Example:
$ grep "Linux" welcome.txt
Output
As you can see, grep has not only searched and matched the string “Linux” but has also printed the lines in which the string appears. If the file is located in a different file path, be sure to specify the file path as shown below:
$ grep "string" /path/to/file
If you are working on a system that doesn’t display the search string or pattern in a different color from the rest of the text, use the --color
option to make your results stand out. Example:
$ grep --color "free and opensource" welcome.txt
Output
If you wish to search for a string in your current directory and all of its subdirectories, search using the -r flag as shown:
$ grep -r "string-name" *
For example
$ grep -r "linux" *
Output
In the above example, our search results gave us what we wanted because the string “Linux” was specified in Uppercase and also exists in the file in Uppercase. Now let’s try and search for the string in lowercase.
$ grep "linux" file name
Nothing from the output, right? This is because grepping could not find and match the string “linux” since the first letter is Lowercase. To ignore case sensitivity, use the -i
flag and execute the command below
$ grep -i "linux" welcome.txt
Output
Awesome, isn’t it? The -i flag is normally used to display strings regardless of their case.
To count the total number of lines where the string pattern appears or resides, execute the command below
$ grep -c "Linux" welcome.txt
Output
To invert the grep output, use the -v flag. This option instructs grep to print all of the lines that do not match the expression. Going back to our file, let us display the line numbers as shown: hit ESC in the Vim editor, then type a full colon followed by:
set nu
Next, press Enter.
Output
Now, to display the lines that don’t contain the string “Linux”, run:
$ grep -v "Linux" welcome.txt
Output
As you can see, grep has displayed the lines that do not contain the search pattern.
To number the lines where the string pattern is matched, use the -n
option as shown
$ grep -n "Linux" welcome.txt
Output
Passing the -w flag will search for lines containing the exact matching word, as shown:
$ grep -w "opensource" welcome.txt
Output
However, if you try:
$ grep -w "open" welcome.txt
No results will be returned because we are not searching for a pattern but for an exact word!
The grep command can be used together with pipes for getting distinct output. For example, if you want to know whether a certain package is installed on an Ubuntu system, execute:
$ dpkg -l | grep "package-name"
For example, to find out if OpenSSH has been installed on your system, pipe the dpkg -l command to grep as shown:
$ dpkg -l | grep -i "openssh"
Output
You can use the -A or -B flags to display a number of lines that either precede or follow the line matching the search string. The -A flag prints the lines that come after the match, and -B prints the lines that appear before it. For example:
$ ifconfig | grep -A 4 ens3
This command displays the line containing the string plus 4 lines of text after the ens string in the ifconfig
command.
Output
Conversely, in the example below, the -B flag will display the line containing the search string plus 4 lines of text before the ether string in the ifconfig
command. Output
$ ifconfig | grep -B 4 ether
The term REGEX is an acronym for REGular EXpression. A REGEX is a sequence of characters that is used to match a pattern. Below are a few examples:
^ Matches characters at the beginning of a line
$ Matches characters at the end of a line
"." Matches any character
[a-z] Matches any character between a and z
[^..] Matches anything apart from what is contained in the brackets
Example:
To print lines beginning with a certain character, the syntax is:
grep ^character file_name
For instance, to display the lines that begin with the letter “d” in our welcome.txt file, we would execute
$ grep ^d welcome.txt
Output
To display lines that end with the letter ‘x’, run:
$ grep x$ welcome.txt
Output
If you need to learn more on Grep command usage, run the command below to get a sneak preview of other flags or options that you may use together with the command.
$ grep --help
Sample Output
We appreciate you taking the time to go through this tutorial. Feel free to try out the commands and let us know how it went.
/dev/null is a command-line hack that acts as a vacuum, sucking up anything thrown at it.
Let’s take a look at understanding what it means, and what we can do with this file.
If you try to read it using the cat command, it will simply return an End of File (EOF).
cat /dev/null
This is a valid file, which can be verified using
stat /dev/null
This gives me an output of
File: /dev/null
Size: 0 Blocks: 0 IO Block: 4096 character special file
Device: 6h/6d Inode: 5 Links: 1 Device type: 1,3
Access: (0666/crw-rw-rw-) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2020-02-04 13:00:43.112464814 +0530
Modify: 2020-02-04 13:00:43.112464814 +0530
Change: 2020-02-04 13:00:43.112464814 +0530
This shows that the file has a size of 0 bytes and zero blocks allocated to it. The file permissions are also set so that anyone can read from or write to it, but no one can execute it.
Since /dev/null is a file and not a command, we cannot send data to it with the pipe (|) operator, which connects one command's output to another command's input. The way to do it is with file redirections (>, >>, or <, <<).
The below diagram shows that /dev/null
is indeed a valid file.
Let’s now take a look at some common use cases for /dev/null
.
We can discard any output of a script that we use by redirecting to /dev/null
.
For example, we can try discarding echo
messages using this trick.
echo 'Hello from JournalDev' > /dev/null
You will not get any output since it is discarded!
Let’s try running a command incorrectly and redirecting its output to /dev/null.
cat --INCORRECT_OPTION > /dev/null
We still get an output like this:
cat: unrecognized option '--INCORRECT_OPTION'
Try 'cat --help' for more information.
Why is this happening? This is because the error messages are coming from stderr
, but we are only discarding output from stdout
.
We need to take stderr
into account as well.
Let us redirect the stderr to /dev/null, along with stdout. We can use the file descriptor for stderr(=2) for this.
cat --INCORRECT_OPTION > /dev/null 2>/dev/null
This will give us what we need!
There is another way of doing the same: by redirecting stderr to stdout first, and then redirecting stdout to /dev/null.
The syntax for this will be:
command > /dev/null 2>&1
Notice the 2>&1
at the end. We redirect stderr(2) to stdout(1). We use &1
to mention to the shell that the destination file is a file descriptor and not a file name.
cat --INCORRECT_OPTION > /dev/null 2>&1
So if we use 2>1, we will only redirect stderr to a file called 1. This is not what we want!
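You can verify this yourself: after running the command below, the error message lands in a regular file named 1 in the current directory.

cat --INCORRECT_OPTION > /dev/null 2>1
cat 1    # the error message was written to this file
rm 1     # remove the stray file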
Hopefully, this clears things up a bit, so that you can now use /dev/null in Linux, knowing what it means! Feel free to ask questions in the comment section below.
So why is WordPress this popular? Let’s briefly look into some of the factors that have led to the immense success of the platform.
WordPress comes with a simple, intuitive and easy to use dashboard. The dashboard doesn’t require any knowledge in web programming languages like PHP, HTML5, and CSS3 and you can build a website with just a few clicks on a button. In addition, there are free templates, widgets, and plugins that come with the platform to help you get started with your blog or website.
WordPress drastically saves you the agony of having to pay a developer tonnes of cash to develop your website. All you have to do is get a free WordPress theme or purchase one and install it. Once installed, you have the freedom to deploy whatever features suit you and customize a myriad of options without writing much code. What’s more, it takes a much shorter time to design your site than coding it from scratch.
The WordPress platform is inherently responsive, so you do not have to stay awake worrying about whether your site will fit across multiple devices. This benefit also helps your site rank higher in Google’s SEO score!
WordPress is built using well-structured, clean and consistent code. This makes your blog/site easily indexable by Google and other search engines thereby making your site rank higher. In addition, you can decide which pages rank higher or alternatively use SEO plugins like the popular Yoast plugin which enhances your site’s ranking on Google.
It’s very easy to install WordPress on Ubuntu or any other operating system. There are so many open-source scripts to even automate this process. Many hosting companies provide a one-click install feature for WordPress to get you started in no time.
Before we begin, let’s update and upgrade the system. Login as the root user to your system and update the system to update the repositories.
apt update && apt upgrade
Output
Next, we are going to install the LAMP stack for WordPress to function. LAMP is short for Linux, Apache, MySQL, and PHP.
Let’s jump right in and install Apache first. To do this, execute the following command.
apt install apache2
Output
To confirm that Apache is installed on your system, execute the following command.
systemctl status apache2
Output
To verify further, open your browser and go to your server’s IP address.
http://ip-address
Output
Next, we are going to install the MariaDB database engine to hold our WordPress data. MariaDB is an open-source fork of MySQL, and most hosting companies use it instead of MySQL.
apt install mariadb-server mariadb-client
Output
Let’s now secure our MariaDB database engine and disallow remote root login.
$ mysql_secure_installation
The first step will prompt you to change the root password for the database. You can opt to change it, or skip by typing n if you are convinced that you already have a strong password. Then:
- For safety’s sake, you will be prompted to remove anonymous users. Type Y.
- Next, disallow remote root login to prevent hackers from accessing your database. (For testing purposes, you may want to allow remote login if you are configuring a virtual server.)
- Next, remove the test database.
- Finally, reload the privilege tables to apply the changes.
Lastly, we will install PHP as the last component of the LAMP stack.
apt install php php-mysql
Output
To confirm that PHP is installed, create an info.php file under the /var/www/html/ path:
vim /var/www/html/info.php
Append the following lines:
<?php
phpinfo();
?>
Save and Exit. Open your browser and append /info.php
to the server’s URL.
http://ip-address/info.php
Output
Now it’s time to log in to our MariaDB database as root and create a database for accommodating our WordPress data.
$ mysql -u root -p
Output
Create a database for our WordPress installation.
CREATE DATABASE wordpress_db;
Output
Next, create a database user for our WordPress setup.
CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'password';
Output
Next, grant the user full permissions to access the WordPress database:
GRANT ALL ON wordpress_db.* TO 'wp_user'@'localhost' IDENTIFIED BY 'password';
Output
Great! Now flush the privileges and exit the database:
FLUSH PRIVILEGES;
Exit;
Go to your temp directory and download the latest WordPress File
cd /tmp && wget https://wordpress.org/latest.tar.gz
Output
Next, uncompress the tarball, which will generate a folder called “wordpress”.
tar -xvf latest.tar.gz
Output Copy the wordpress folder to /var/www/html/
path.
cp -R wordpress /var/www/html/
Run the command below to change ownership of ‘wordpress’ directory.
chown -R www-data:www-data /var/www/html/wordpress/
Change the file permissions of the WordPress folder.
chmod -R 755 /var/www/html/wordpress/
Create ‘uploads’ directory.
$ mkdir /var/www/html/wordpress/wp-content/uploads
Finally, change permissions of ‘uploads’ directory.
chown -R www-data:www-data /var/www/html/wordpress/wp-content/uploads/
Open your browser and go to the server’s URL. In my case it’s
http://server-ip/wordpress
You’ll be presented with the WordPress wizard and a list of credentials required to successfully set it up. Fill out the form with the credentials specified when creating the WordPress database in MariaDB. Leave the database host and table prefix at their defaults and hit the ‘Submit’ button. If all the details are correct, you will be given the green light to proceed, so run the installation. Fill out the additional details required, such as the site title, username, and password, and save them somewhere safe lest you forget; ensure you use a strong password. Scroll down and hit ‘Install WordPress’. If all went well, you will get a ‘Success’ notification as shown.
Click on the ‘Login’ button to access the login page of your fresh WordPress installation. Provide your login credentials and hit ‘Login’. Voila! There goes the WordPress dashboard that you can use to create your first blog or website. Congratulations for having come this far! You can now proceed to discover the various features, plugins, and themes, and continue setting up your first blog or website.
This tutorial requires the use of domain names. Whenever you see either the SUBDOMAIN, DOMAIN, or TLD variables, replace them with your own domain name values.
Before you start installing NGINX, it is always recommended to upgrade your Ubuntu 18.04 to the latest. The following apt-get commands will do it for you.
# apt-get update
# apt-get upgrade
The first command will update the list of available packages and their versions and the second one will actually install the newer versions of the packages that you have. Once you are done with upgrading the system, check the release version of your Ubuntu system with the following command.
# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
Follow the steps below to install the WordPress with NGINX on Ubuntu server.
NGINX is available in the default repositories of Ubuntu and can be installed with a single line command as shown below.
# apt-get install nginx
Once NGINX has been installed, it will run automatically. You can verify that by the following systemctl command.
# systemctl status nginx
● nginx.service - A high-performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2019-02-12 09:12:08 UTC; 11s ago
Docs: man:nginx(8)
Process: 17726 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 17714 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 17729 (nginx)
Tasks: 2 (limit: 1152)
CGroup: /system.slice/nginx.service
├─17729 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─17732 nginx: worker process
The output of the above command verifies that NGINX is loaded and active with PID of 17729.
MariaDB is available in the default repository of Ubuntu. It is also possible to install it from the separate MariaDB repository, but we will stick to installing it from the default Ubuntu repository. Issue the following commands from the terminal to install it; optionally, you can run mysql_secure_installation afterwards to make it secure.
# apt-get install mariadb-server
# systemctl enable mariadb.service
# mysql_secure_installation
The default password for the MariaDB root user is blank. To update the password of the root user, get the MySQL prompt and update it by issuing the following commands from the MySQL shell.
$ mysql -u root -p
MariaDB [(none)]> use mysql;
MariaDB [mysql]> update user SET PASSWORD=PASSWORD("Passw0rd!") WHERE USER='root';
MariaDB [mysql]> FLUSH PRIVILEGES;
The installation of MariaDB is complete in your Ubuntu 18.04 system. Now proceed with installing PHP in the next step.
The latest version of PHP (7.2) is available in the repositories of Ubuntu 18.04 and is the default candidate for installation so simply run the following command in terminal to install it.
# apt-get install php7.2 php7.2-cli php7.2-fpm php7.2-mysql php7.2-json php7.2-opcache php7.2-mbstring php7.2-xml php7.2-gd php7.2-curl
Apart from installing php7.2, the above apt-get command also installs a few other packages, such as the MySQL, XML, Curl, and GD packages. These make sure that your WordPress site can interact with the database, support XML-RPC, and crop and resize images automatically. Further, the php-fpm (FastCGI Process Manager) package is needed by NGINX to process the PHP pages of your WordPress installation. Remember that the FPM service will run automatically once the installation of PHP is over.
Once the MariaDB is installed and configured in your server, create a user and a database especially for WordPress installation. To do that, log in to the MariaDB server using mysql -u root -p
command and complete the steps as described below.
$ mysql -u root -p
Enter password:
MariaDB [mysql]> CREATE DATABASE wordpress_db;
Query OK, 1 row affected (0.00 sec)
MariaDB [mysql]> GRANT ALL ON wordpress_db.* TO 'wpuser'@'localhost' IDENTIFIED BY 'Passw0rd!' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)
MariaDB [mysql]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
MariaDB [mysql]> exit
Don’t get alarmed that there is no command to create the ‘wpuser’ database user; it gets created automatically by the GRANT command above. I learned about this recently and thought I’d surprise anyone reading this tutorial. :)
Let us now proceed with configuring NGINX server blocks to serve your WordPress domain. To start with, create the root folder for your WordPress installation.
# mkdir -p /var/www/html/wordpress/public_html
To create NGINX server block for your WordPress domain, navigate to the /etc/nginx/sites-available
folder. This is the default location for NGINX server blocks. Use your favorite editor to create a configuration file for NGINX server block and edit it like below.
# cd /etc/nginx/sites-available
# cat wordpress.conf
server {
listen 80;
root /var/www/html/wordpress/public_html;
index index.php index.html;
server_name SUBDOMAIN.DOMAIN.TLD;
access_log /var/log/nginx/SUBDOMAIN.access.log;
error_log /var/log/nginx/SUBDOMAIN.error.log;
location / {
try_files $uri $uri/ =404;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/run/php/php7.2-fpm.sock;
}
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
expires max;
log_not_found off;
}
}
Check the correctness of above configuration file using:
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
To activate the server block create a symbolic link of the above configuration file inside /etc/nginx/sites-enabled
folder.
# cd /etc/nginx/sites-enabled
# ln -s ../sites-available/wordpress.conf .
Reload NGINX to apply the new WordPress domain settings.
# systemctl reload nginx
In this step, download the archived WordPress file using wget
and extract it to the root of the WordPress installation that we created in the previous step. To accomplish this, run the following commands from the terminal.
# cd /var/www/html/wordpress/public_html
# wget https://wordpress.org/latest.tar.gz
# tar -zxvf latest.tar.gz
# mv wordpress/* .
# rm -rf wordpress
Change the ownership and apply correct permissions to the extracted WordPress files and folders. To do that, use the following command from the terminal.
# cd /var/www/html/wordpress/public_html
# chown -R www-data:www-data *
# chmod -R 755 *
Now provide the database name, database user and the password in the WordPress config file so that it can connect to the MariaDB database that we had created earlier. By default, WordPress provides a sample configuration file and we will make use of it to create our own configuration file. To do that, first, rename the sample WordPress configuration file to wp-config.php and edit it with your own favorite editor.
# cd /var/www/html/wordpress/public_html
# mv wp-config-sample.php wp-config.php
# vi wp-config.php
...
...
define('DB_NAME', 'wordpress_db');
define('DB_USER', 'wpuser');
define('DB_PASSWORD', 'Passw0rd!');
...
...
To secure your WordPress site, add the security keys to the above WordPress config file just after the database configuration options; you can generate them through this link.
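For reference, the generated keys replace placeholder defines that look like the following sketch (the sample config ships with eight such entries, including the matching *_SALT lines):

define('AUTH_KEY',         'put your unique phrase here');
define('SECURE_AUTH_KEY',  'put your unique phrase here');
define('LOGGED_IN_KEY',    'put your unique phrase here');
define('NONCE_KEY',        'put your unique phrase here');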
You are now ready to install your WordPress site using your favorite browser.
To complete the installation of WordPress, point your favorite web browser to SUBDOMAIN.DOMAIN.TLD and follow the steps as described below.
The installer will prompt you to choose a language. Choose a language and click ‘Continue’.
Now provide the site information like site title, username, password, email and click on ‘Install WordPress’ button.
You are done with installing WordPress site. Click ‘Log In’ to login to Dashboard and proceed with configuring plugins and themes for your site.
Provide the user name and password that we have entered previously to login for the first time.
Congratulations! Your WordPress website is installed and ready for you to customize according to your requirements.
WordPress is the most popular CMS and we learned how to install it with NGINX on a Ubuntu server. You can now proceed further to create your website with it.
On Linux, 7Zip is provided through the p7zip command. There are two other packages which you can install according to your requirement; for instance, you can use the p7zip-rar package if you have to deal with RAR files. In this tutorial, we will demonstrate how you can install and use 7Zip on Ubuntu 18.04. We will also provide a small tutorial on using 7z on Ubuntu directly from the CLI.
7Zip is available as a package named p7zip in the Ubuntu repository. It can be installed with apt, or with the equivalent package manager on other Linux-based systems too. First of all, let’s update our Ubuntu system.
sudo apt update
To install 7zip on your Ubuntu server or Desktop, open terminal (Ctrl + T) and enter the following command.
sudo apt install p7zip-full p7zip-rar
After executing this in your terminal, p7zip will be installed as the CLI utility 7z. The syntax of 7z is given below.
7z <command> [<switch>...] <base_archive_name> [<arguments>...] [<@listfiles...>]
The most common commands and switches you can use with 7z are a (add files to an archive), l (list archive contents), and e (extract files); we will use these in the examples below.
Let us now take a look at how to use 7Zip on Ubuntu.
Now that you know the syntax of 7Zip on Ubuntu, you can move on to compressing and extracting files.
Here are the steps you need to follow in order to compress files using 7-Zip on your Ubuntu machine. First of all, select the file or folder you want to compress. To find it, just use the ls -la command to list all the files and folders in the current directory. For instance, we will be compressing the data.txt file, which is 50 KB at the moment.
$ ls -la
Now, to compress a file or folder (in this case, data.txt), you need to enter the following command:
$ 7z a data.7z data.txt
Here, the ‘a’ option stands for archive (compress). data.7z is the filename of the compressed archive, and data.txt is the file to be compressed. After compression, the size of the compressed file comes to around 3 KB; that’s more than 90% savings in disk space. Great, isn’t it? You can also get detailed information about the archive using the ‘l’ (list) option:
$ 7z l data.7z
If you have Ubuntu Desktop, you can use 7Zip from the file manager to compress and extract files:
1. Go to the File Explorer or File Manager on your Linux system.
2. Select the file or folder which you want to compress and right-click on it.
3. Select the Compress option from the context menu.
4. Select the extension for the compressed file and enter the filename.
That’s it! You have successfully compressed a file on your Ubuntu Desktop system using File Explorer. Easy, no?
Here are the steps to extract 7z files using 7-Zip on your Ubuntu machine. First of all, locate the archive whose contents you want to extract; use the ls -la command to list all the files and folders in the current directory. For instance, we will be extracting the contents of the data.7z file.
$ ls -la
Now, to extract any compressed file. Like, in this case, data.7z, you need to enter the following command:
$ 7z e data.7z
Just replace the data.7z with your filename in the above command and you should be good to go.
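Note that the ‘e’ option extracts everything into the current directory, flattening any folder structure inside the archive. To preserve the directory structure, use the ‘x’ option instead:

$ 7z x data.7z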
1. Go to the File Explorer or File Manager on your Linux system.
2. Select the compressed file which you want to extract and right-click on it.
3. Select the Extract option from the context menu.
4. Select the location for the extraction of the files and click on the Extract button.
That’s it! You have successfully extracted the contents of a compressed file on your Ubuntu system.
7Zip is very popular software for compressing files and saving disk space. We learned how to install 7Zip on an Ubuntu system using the command line. We also learned how to compress and extract files from both the command line and File Explorer.
$ awk options 'selection_criteria {action}' input-file > output-file
To demonstrate more about AWK usage, we are going to use a text file called file.txt with four columns:
1st column => Item, 2nd column => Model, 3rd column => Country, 4th column => Cost
To print the 2nd and 3rd columns, execute the command below.
$ awk '{print $2 "\t" $3}' file.txt
Output
If you wish to list all the lines and columns in a file, execute
$ awk ' {print $0}' file.txt
Output
If you want to print lines that match a certain pattern, the syntax is as shown:
$ awk '/variable_to_be_matched/ {print $0}' file.txt
For instance, to match all entries with the letter ‘o’, the syntax will be
$ awk '/o/ {print $0}' file.txt
Output
To match all entries with the letter ‘e’:
$ awk '/e/ {print $0}' file.txt
Output
When AWK locates a pattern match, it prints the whole record by default. You can change this default by issuing an instruction to display only certain fields. For example:
$ awk '/a/ {print $3 "\t" $4}' file.txt
The above command prints the 3rd and 4th columns of the lines in which the letter ‘a’ appears.
Output
You can use AWK to count and print the number of lines for every pattern match. For example, the command below counts the number of instances a matching pattern appears
$ awk '/a/{++cnt} END {print "Count = ", cnt}' file.txt
Output
AWK has a built-in length function that returns the length of a string. The $0 variable stores the entire line, and in the absence of a body block, the default action is taken, i.e., the print action. Therefore, for our text file, if a line has more than 20 characters, the comparison is true and the line is printed, as shown below.
$ awk 'length($0) > 20' file.txt
Output
If you wish to save the output of your results, use the > redirection operator. For example
$ awk '/a/ {print $3 "\t" $4}' file.txt > output.txt
You can verify the results using the cat command as shown below
$ cat output.txt
Output
AWK is a simple scripting language that you can use to manipulate text in documents or perform specific functions. The commands shared here are just a few of the many you have yet to come across.
We will learn about the following commands to send emails in Linux.
The Linux mail command is quite popular and is commonly used to send emails from the command line. mail is installed as part of the mailutils and mailx packages on Debian and Red Hat systems respectively. The two commands process messages on the command line. To install mailutils on Debian and Ubuntu systems, run:
$ sudo apt install mailutils -y
For CentOS and RedHat distributions, run:
$ yum install mailx
When you run the command, the following window will pop up. Press the TAB key and hit ‘OK’. In the next window, scroll to and select ‘Internet Site’. The system will thereafter finish up the installation process.
If the mail command is successfully installed, test the application by using the following format and press enter:
$ mail –s "Test Email" email_address
Replace email_address
with your email address. For example,
$ mail –s "Test Email" james@example.com
After pressing “Enter”, you’ll be prompted for a Carbon Copy (Cc:) address. If you don’t wish to include a copied address, just hit ENTER to proceed. Next, type the message or body of the email and hit ENTER. Finally, press Ctrl + D to send the email.
Output
Alternatively, you can use the echo command to pipe the message you want to send to the mail command, as shown below.
$ echo "sample message" | mail -s "sample mail subject" email_address
For example,
$ echo "Hello world" | mail -s "Test" james@example.com
Output
Let’s assume you have a file that you want to attach; let’s call the file message.txt.
How do you go about it? Use the command below.
$ mail -s "subject" -A message.txt email_address
The -A
flag attaches the file. For example:
$ mail -s "Important Notice" -A message.txt james@example.com
Output
To send an email to many recipients, run:
$ mail –s "test header" email_address email_address2
Mailx is the newer version of the mail command, formerly referred to as nail in other implementations. Mailx has been around since 1986 and was incorporated into POSIX in 1992. It is part of Debian’s mail compound package and is used in various scenarios by users, system administrators, and developers alike. Mailx takes the same command-line syntax as mail. To install mailx on Debian/Ubuntu systems, run:
$ sudo apt install mailx
To install mailx in RedHat & CentOS run:
$ yum install mailx
You may use the echo command to direct the output to the mail command without being prompted for CC and the message body as shown here:
$ echo "message body" | mail -s "subject" email_address
For example,
$ echo "Make the most out of Linux!" | mail -s "Welcome to Linux" james@example.com
Mutt is a lightweight Linux command-line email client. Unlike the mail command, which can only do basic things, mutt can send file attachments. Mutt can also read emails from POP/IMAP servers and connect local users via the terminal. To install mutt on Debian/Ubuntu systems, run:
$ sudo apt install mutt
To install mutt in Redhat / CentOS Systems run:
$ sudo yum install mutt
You can send a blank message using mutt by adding < /dev/null right after the email address.
$ mutt -s "Test Email" email_address < /dev/null
For example,
$ mutt -s "Greetings" james@example.com < /dev/null
Output
The mutt command can also be used to attach a file, as follows.
$ echo "Message body" | mutt -a "/path/to/file.to.attach" -s "subject of message" -- email_address
For example,
$ echo "Hey guys! How's it going ?" | mutt -a report.doc -s "Notice !" -- james@jaykiarie.com
Output
The mpack command encodes a file into MIME messages and sends them to one or several recipients; it can even be used to post to different newsgroups. To install mpack on Debian/Ubuntu systems, run:
$ sudo apt install mpack
To install mpack in Redhat / CentOS Systems run:
$ sudo yum install mpack
Using mpack to send email or attachment via command line is as simple as:
$ mpack -s "Subject here" -a file email_address
For example,
$ mpack -s "Sales Report 2019" -a report.doc james@jaykiarie.com
Output
Sendmail is another popular SMTP server used in many distributions. To install sendmail on Debian/Ubuntu systems, run:
$ sudo apt install sendmail
To install sendmail in RedHat / CentOS Systems run:
$ sudo yum install sendmail
You can use the following instructions to send email using the sendmail command:
$ sendmail email_address < file
For example, I have created a file report.doc
with the following text:
Hello there !
The command for sending the message will be,
$ sendmail < report.doc james@example.com
Output
You can use the -s option to specify the email subject.
While the command-line email clients are a lot simpler and less computationally intensive, you can only use them to send email to personal email domains, not to Gmail or Yahoo domains, because of the extra authentication required. Also, you cannot receive emails from external SMTP servers. Generally, it’s a lot easier to use GUI email clients like Thunderbird or Evolution to avoid the problem of undelivered emails.
Let’s find out some command line and GUI methods to deal with this problem.
We can use the ghostscript command line utility in Linux to compress PDFs. If the command is not available on your machine, you can install it using your package manager. For example, on Ubuntu, you can use apt:
sudo apt install ghostscript
You can use this magic command to compress PDFs to a readable quality.
gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen -dNOPAUSE -dQUIET -dBATCH -sOutputFile=output.pdf input.pdf
Here, replace output.pdf and input.pdf accordingly.
The various tweaks to the -dPDFSETTINGS option are provided in the table below. Use them according to your need.
| -dPDFSETTINGS option | Description |
| --- | --- |
| -dPDFSETTINGS=/screen | Lower quality and smaller size (72 dpi) |
| -dPDFSETTINGS=/ebook | Better quality, but a slightly larger size (150 dpi) |
| -dPDFSETTINGS=/prepress | Higher size and quality (300 dpi) |
| -dPDFSETTINGS=/printer | Printer-type quality (300 dpi) |
| -dPDFSETTINGS=/default | A useful general-purpose output; can produce large PDFs |
I have used the above command to achieve a compression from 73MB to 14MB!
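If you have a whole directory of PDFs to shrink, a small shell loop can apply the same Ghostscript command to each file. This is a minimal sketch; the compressed- prefix is an assumption, and the -dPDFSETTINGS level can be changed to taste:
for f in *.pdf; do
  gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
     -dNOPAUSE -dQUIET -dBATCH -sOutputFile="compressed-$f" "$f"
done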
The ps2pdf command is a wrapper around Ghostscript; despite its name, it accepts PDF input directly and re-renders it, often compressing it efficiently as a result. It may not always help, but it can give very good results.
Format:
ps2pdf input.pdf output.pdf
It is recommended that you use the -dPDFSETTINGS=/ebook setting for the best trade-off, as the ebook preset keeps the output readable while staying small in size.
ps2pdf -dPDFSETTINGS=/ebook input.pdf output.pdf
I have tried this on a 73MB PDF and it gave the same result as the ghostscript command: a compressed PDF of only 14MB!
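You can verify the savings yourself by comparing the file sizes before and after, for example with du:
du -h input.pdf output.pdf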
If you are uncomfortable with using command line tools, there is a GUI alternative as well.
This is a GUI front end to ghostscript, which can be installed on any Linux distribution, since it uses Python3 and its GTK modules.
This package is called Densify, and is available on GitHub at https://github.com/hkdb/Densify.
I have created a simple bash script to do all the necessary work. Run this bash script to download and link the necessary files; it invokes sudo where root privileges are required.
#!/bin/bash
#- HELPER SCRIPT FOR DENSIFY
#- original package https://github.com/hkdb/Densify
#- script author Vijay Ramachandran
#- site https://journaldev.com
#-
# Go to your home directory (preferred)
cd "$HOME"
# Download the package
git clone https://github.com/hkdb/Densify
cd Densify
# "Queue" must be changed to "queue" in the densify script.
# It will not work otherwise.
sed -i 's/Queue/queue/g' "$PWD/densify"
# Create the symlink in /opt
sudo ln -s "$PWD" /opt/Densify
# Perform the install
cd /opt/Densify
sudo chmod 755 install.sh
sudo ./install.sh
# Add /opt/Densify to the PATH of the default shell
if [ "$SHELL" = "/bin/zsh" ]; then
    if test -f "$HOME/.zshrc"; then
        echo 'export PATH=/opt/Densify:$PATH' >> "$HOME/.zshrc"
        source "$HOME/.zshrc"
    else
        echo "No zshrc found! Please create a zsh config file and try again"
    fi
elif [ "$SHELL" = "/bin/bash" ]; then
    if test -f "$HOME/.bashrc"; then
        echo 'export PATH=/opt/Densify:$PATH' >> "$HOME/.bashrc"
        source "$HOME/.bashrc"
    elif test -f "$HOME/.bash_profile"; then
        echo 'export PATH=/opt/Densify:$PATH' >> "$HOME/.bash_profile"
        source "$HOME/.bash_profile"
    else
        echo "No bashrc found! Please create a bash config file and try again"
    fi
else
    echo "Default shell is not zsh or bash. Please add /opt/Densify to your PATH"
fi
If there are no errors, you are good to go! Simply type the below command from /opt/Densify to invoke the GUI, or open it from your dashboard.
densify
You can now compress as many PDF files as you need, using a GUI!
To start off, add the NodeJS PPA to your system using the following commands.
sudo apt-get install software-properties-common
Next, add the NodeJS PPA.
curl -sL https://deb.nodesource.com/setup_11.x | sudo -E bash -
Great! After successfully adding the NodeJS PPA, it’s time to install NodeJS using the command below.
sudo apt-get install nodejs
This command not only installs NodeJS but also NPM (the NodeJS Package Manager) and other dependencies.
After successful installation of NodeJS, you can test the version of NodeJS using the simple command below.
node -v
For NPM, run
npm -v
This is an optional step that you can use to test if NodeJS is working as intended. We are going to create a web server that displays the text “Congratulations! node.JS has successfully been installed!” Let’s create a NodeJS file and call it nodeapp.js:
vim nodeapp.js
Add the following content
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Congratulations! node.JS has successfully been installed!\n');
}).listen(3000, 'server-ip');
console.log('Server running at http://server-ip:3000/');
Save and exit the text editor. Start the application using the command below:
node nodeapp.js
This serves the application on port 3000. Ensure the port is allowed through your system’s firewall.
ufw allow 3000/tcp
ufw reload
Now open your browser and browse to the server’s address as shown:
http://server-ip:3000
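If you are working on a headless server, you can verify the response from the terminal instead; assuming curl is installed, the same address should return the greeting text:
curl http://server-ip:3000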
If you wish to uninstall NodeJS from your Ubuntu system, run the command below.
sudo apt-get remove nodejs
The command will remove the package but retain the configuration files. To remove both the package and the configuration files run:
sudo apt-get purge nodejs
As a final step, you can run the command below to remove any unused packages and free up disk space:
sudo apt-get autoremove
Great! We have successfully installed and tested NodeJS. We also learned how to uninstall NodeJS from Ubuntu and clean up disk space.
Before we begin, ensure you have the following minimum prerequisites
With the minimum requirements satisfied, we can now proceed.
To start off, log into your Ubuntu 18.04 system using the SSH protocol and update and upgrade the system packages using the following command:
sudo apt update -y && sudo apt upgrade -y
Next, reboot the system using the command:
sudo reboot
OR
init 6
Best practice demands that devstack be run as a regular user with sudo privileges. With that in mind, we are going to add a new user called “stack” and assign it sudo privileges. To create the stack user, execute:
sudo useradd -s /bin/bash -d /opt/stack -m stack
Next, run the command below to assign sudo privileges to the user
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
Once you have successfully created the user ‘stack’ and assigned sudo privileges, switch to the user using the command.
su - stack
In most Ubuntu 18.04 systems, git comes already installed. If by any chance git is missing, install it by running the following command.
sudo apt install git -y
Using git, clone devstack’s git repository as shown.
git clone https://git.openstack.org/openstack-dev/devstack
In this step, navigate to the devstack directory.
cd devstack
Then create a local.conf
configuration file.
vim local.conf
Paste the following content
[[local|localrc]]
# Password for KeyStone, Database, RabbitMQ and Service
ADMIN_PASSWORD=StrongAdminSecret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# Host IP - get your Server/VM IP address from ip addr command
HOST_IP=10.208.0.10
Save and exit the text editor. NOTE: ADMIN_PASSWORD is the password you will use to log in to the OpenStack login page; the default username is admin. HOST_IP is your system’s IP address, obtained by running the ifconfig or ip addr commands.
To commence the installation of OpenStack on Ubuntu 18.04, run the script below from inside the devstack directory.
./stack.sh
The following features will be installed:
The deployment takes about 10 to 15 minutes depending on the speed of your system and internet connection. In our case, it took roughly 12 minutes. At the very end, you should see output similar to what we have below. This confirms that all went well and that we can proceed to access OpenStack via a web browser.
To access OpenStack via a web browser, browse to your Ubuntu system’s IP address as shown: http://server-ip/dashboard
This directs you to a login page as shown. Enter the credentials and hit “Sign In”. You should then see the management console dashboard. For more on Devstack’s customization, check out their system configuration guide. Additionally, check out the OpenStack documentation for the administration guide.
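Once the dashboard is up, you can also drive OpenStack from the command line. In a standard DevStack checkout, stack.sh leaves an openrc helper script in the devstack directory that loads credentials into your shell; the admin/admin arguments below assume the default setup:
source ~/devstack/openrc admin admin
openstack service list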
Without any arguments, the export command will display all exported variables. Below is an example of the expected output.
$ export
If you wish to view all exported variables on the current shell, use the -p
flag as shown in the example
$ export -p
Suppose you have a function and you wish to export it. In this case, the -f flag is used. In this example, we are exporting the function name(). First, define the function:
$ name () { echo "Hello world"; }
Then export it using the -f
flag
$ export -f name
Next, start a new bash shell:
$ bash
Finally, call the function
$ name
Output:
Hello world
You can also assign a value to a variable as you export it. The general form is:
$ export name[=value]
For example, you can define a variable before exporting it as shown
$ student=Divya
In the above example, the variable ‘student’ has been assigned the value ‘Divya’. To export the variable, run:
$ export student
You can use the printenv
command to verify the contents of the variable as shown
$ printenv student
Check the output of the commands we have just executed below. The two steps above can also be achieved in a single line, by declaring and exporting the variable together, as shown:
$ export student=Divya
To display the variable run
$ printenv student
This concludes our tutorial about the export command. Go ahead and give it a try and see the magic! Your feedback is most welcome.
Telnet is an old network protocol that is used to connect to remote systems over a TCP/IP network. It connects to servers and network equipment over port 23. Let’s take a look at Telnet command usage.
Let’s see how you can install and use the telnet protocol.
In this section, we will walk you through the process of installing telnet in RPM and DEB systems.
To begin the installation process on the server, run the command
# yum install telnet telnet-server -y
Next, start and enable the telnet service by issuing the commands below:
# systemctl start telnet.socket
# systemctl enable telnet.socket
Next, allow port 23, the default port that telnet uses, through the firewall:
# firewall-cmd --permanent --add-port=23/tcp
Finally, reload the firewall for the rule to take effect.
# firewall-cmd --reload
To verify the status of telnet, run:
# systemctl status telnet.socket
Telnet protocol is now ready for use. Next, we are going to create a login user.
In this example, we will create a login user for logging in using the telnet protocol.
# adduser telnetuser
Create a password for the user.
# passwd telnetuser
Specify the password and confirm. To use the telnet command to log in to a server, use the syntax below.
$ telnet server-IP-address
For example
$ telnet 38.76.11.19
In the console, specify the username and password. To log in using PuTTY, enter the server’s IP address and select the ‘Telnet’ radio button as shown. Finally, click on the ‘Open’ button. On the console screen, provide the username and password of the user.
To install telnet protocol in Ubuntu 18.04 execute:
$ sudo apt install telnetd -y
To check whether the telnet service is running, execute the command:
$ systemctl status inetd
Next, we need to open port 23 in the ufw firewall:
$ sudo ufw allow 23/tcp
Finally, reload the firewall for the changes to take effect:
$ sudo ufw reload
Telnet has been successfully installed and ready for use. Like in the previous example in CentOS 7, you need to create a login user and log in using the same syntax.
Telnet can also be used to check if a specific port is open on a server. To do so, use the syntax below.
$ telnet server-IP port
For example, to check if port 22 is open on a server, run
$ telnet 38.76.11.19 22
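Because telnet gives you a raw TCP connection, you can also speak simple text protocols by hand. As a sketch, the following issues a minimal HTTP request against port 80 of a placeholder host (press Enter twice after the Host line):
$ telnet example.com 80
GET / HTTP/1.1
Host: example.com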
This tutorial is an educational guide that shows you how to use the telnet protocol. We HIGHLY DISCOURAGE the use of telnet because of the security risks posed by its lack of encryption. SSH is the recommended protocol when connecting to remote systems: data sent over SSH is encrypted and kept safe from attackers.
Nohup, short for no hang up, is a command in Linux systems that keeps processes running even after you exit the shell or terminal. Nohup prevents the processes or jobs from receiving the SIGHUP (Signal Hang UP) signal, which is sent to a process upon closing or exiting the terminal. In this guide, we take a look at the nohup command and demonstrate how it can be used.
Nohup command syntax is as follows;
nohup command arguments
OR
nohup options
Let’s see how the command comes into play
You can begin by checking the version of Nohup using the syntax below
nohup --version
If you want to keep your processes/jobs running, precede the command with nohup
as shown below. The jobs will still continue running in the shell and will not get killed upon exiting the shell or terminal.
nohup ./hello.sh
By default, the output of the command is saved to the file nohup.out. To verify this, run:
cat nohup.out
Additionally, you can opt to redirect the output to a different file, as shown:
nohup ./hello.sh > output.txt
Once again, to view the file run
cat output.txt
To redirect both standard output and standard error to a file, use the > filename 2>&1 form, as shown:
nohup ./hello.sh > myoutput.txt 2>&1
To start a process in the background use the &
symbol at the end of the command. In this example, we are pinging google.com and sending it to the background.
nohup ping google.com &
To check the process when you return to the shell, use the pgrep command, as shown:
pgrep -a ping
If you want to stop or kill the running process, use the kill command followed by the process ID, as shown:
kill 2565
nohup.out is used as the default file for stdout and stderr.
The ps command, short for Process Status, is a command line utility used to display information about the processes running on a Linux system. Linux is a multitasking, multiprocessing system, so multiple processes can run concurrently without affecting each other. The ps command lists the currently running processes alongside their PIDs and other attributes, retrieving the information from the virtual files in the /proc file system. In this guide, we are going to focus on ps command usage.
The ps command without arguments lists the running processes in the current shell
ps
The output consists of four columns:
PID - the unique process ID
TTY - the type of terminal the user is logged in to
TIME - the time, in minutes and seconds, that the process has been running
CMD - the command that launched the process
To have a glance at all the running processes, execute the command below:
ps -A
or:
ps -e
To view processes associated with the terminal, run:
ps -T
To view all processes associated with a terminal, except session leaders, execute:
ps -a
A session leader is a process that starts other processes.
To view all current processes execute
ps -ax
The -a flag selects all processes, and -x also displays processes not associated with the current tty.
If you wish to display processes in BSD format, execute
ps au
OR
ps aux
To view a full format listing run
ps -ef
OR
ps -eF
If you wish to list processes associated with a specific user, use the -u
flag as shown
ps -u user
For example
ps -u jamie
If you wish to see the threads of a particular process, make use of the -L flag followed by the PID. For example:
ps -L 4264
Sometimes, you may want to reveal all processes run by the root user. To achieve this run
ps -U root -u root
If you wish to list all processes associated with a certain group, run:
ps -fG group_name
Or
ps -fG groupID
For example
ps -fG root
Chances are that you usually don’t know the PID of a process. You can search for the PID of a process by running:
ps -C process_name
For example
ps -C bash
You can display processes by their PID as shown
ps -fp PID
For example
ps -fp 1294
Usually, most processes are forked from parent processes. Getting to know this parent-child relationship can come in handy. The command below searches for processes going by the name apache2:
ps -f --forest -C apache2
For example, if you wish to display all forked processes belonging to apache, execute:
ps -o pid,uname,comm -C apache2
The first process, owned by root, is the main apache2 process, and the rest of the processes have been forked from it. To display all the child apache2 processes using the PID of the main apache2 process, execute:
ps --ppid PID
For example
ps --ppid 1294
The ps command can be used to view threads along with the processes. The command below displays all the threads owned by the process with PID pid_no
ps -p pid_no -L
For example
ps -p 1294 -L
You can use the ps command to display only the columns you need. For example,
ps -e -o pid,uname,pcpu,pmem,comm
The command above will only display the PID, username, CPU, memory, and command columns.
To rename column labels execute the command below
ps -e -o pid=PID,uname=USERNAME,pcpu=CPU_USAGE,pmem=%MEM,comm=COMMAND
Elapsed time refers to how long the process has been running:
ps -e -o pid,comm,etime
The etime keyword in the -o list enables the elapsed-time column.
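Since -o only controls which columns are shown, you can combine it with the --sort option to order the output. As a sketch, the command below lists the five most memory-hungry processes:
ps -eo pid,comm,pmem --sort=-pmem | head -5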
The ps command can be used with the grep command to search for a particular process. For example:
ps -ef | grep systemd
Now let’s dive a little deeper into each of these commands and understand them in more detail. We already have a lot of existing articles for each of those individual commands. For your convenience, we’ll add links to all the existing articles, and continue to update the article as new topics are covered.
The ls command is used to list files and directories in the current working directory. This is going to be one of the most frequently used Linux commands you must know of.
As you can see in the above image, using the command by itself without any arguments will give us an output with all the files and directories in the directory. The command offers a lot of flexibility in terms of displaying the data in the output.
Learn more about the ls command (link to full article)
The pwd command allows you to print the current working directory on your terminal. It’s a very basic command and solves its purpose very well.
Now, your terminal prompt should usually have the complete directory anyway. But in case it doesn’t, this can be a quick command to see the directory that you’re in. Another application of this command is when creating scripts where this command can allow us to find the directory where the script has been saved.
While working within the terminal, moving around within directories is pretty much a necessity. The cd command is one of the important Linux commands you must know and it will help you to navigate through directories. Just type cd followed by directory as shown below.
root@ubuntu:~# cd <directory path>
As you can see in the above command, I simply typed cd /etc/ to get into the /etc directory. We used the pwd command to print the current working directory.
The mkdir command allows you to create directories from within the terminal. The default syntax is mkdir followed by the directory name.
root@ubuntu:~# mkdir <folder name>
As you can see in the above screenshot, we created the JournalDev directory with just this simple command.
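The mkdir command can also create a whole tree of nested directories in one go with the -p flag, which creates any missing parent directories. The directory names here are just an illustration:
root@ubuntu:~# mkdir -p JournalDev/src/utils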
Learn more about the mkdir command (Link to article)
The cp and mv commands are equivalent to the copy-paste and cut-paste in Windows. But since Linux doesn’t really have a command for renaming files, we also make use of the mv command to rename files and folders.
root@ubuntu:~# cp <source> <destination>
In the above command, we created a copy of the file named Sample. Let’s see what happens if we use the mv command in the same manner. For this demonstration, I’ll delete the Sample-Copy file.
root@ubuntu:~# mv <source> <destination>
In the above case, since we were moving the file within the same directory, it acted as rename. The file name is now changed.
Learn more about the cp command (Link to article) and mv command (Link to article).
In the previous section, we deleted the Sample-Copy file. The rm command is used to delete files and folders and is one of the important Linux commands you must know.
root@ubuntu:~# rm <file name>
root@ubuntu:~# rm -r <folder/directory name>
To delete a directory, you have to add the -r argument to it. Without the -r argument, rm command won’t delete directories.
To create a new file, the touch command will be used. The touch keyword followed by the file name will create a file in the current directory.
root@ubuntu:~# touch <file name>
To create a link to another file, we use the ln command. This is one of the important Linux commands that you should know if you’re planning to work as a Linux administrator.
root@ubuntu:~# ln -s <source path> <link name>
The basic syntax involves using the -s parameter so we can create a symbolic link or soft link.
When you want to output the contents of a file, or print anything to the terminal output, we make use of the cat or echo commands. Let’s see their basic usage. I’ve added some text to our New-File that we created earlier.
root@ubuntu:~# cat <file name>
root@ubuntu:~# echo <Text to print on terminal>
As you can see in the above example, the cat command when used on our “New-File”, prints the contents of the file. At the same time, when we use echo command, it simply prints whatever follows after the command.
The less command is used when the output printed by any command is larger than the screen space and needs scrolling. The less command allows us to break down the output and scroll through it with the use of the enter or space keys.
The simple way to do this is with the use of the pipe operator (|).
root@ubuntu:~# cat /boot/grub/grub.cfg | less
Learn more about the echo command(Link to article) and cat command(Link to article).
The man command is a very useful Linux command you must know. When working with Linux, the packages that we download can have a lot of functionality. Knowing it all is impossible.
The man pages offer a really efficient way to know the functionality of pretty much all the packages that you can download using the package managers in your Linux distro.
root@ubuntu:~# man <command>
The uname and whoami commands allow you to know some basic information which comes really handy when you work on multiple systems. In general, if you’re working with a single computer, you won’t really need it as often as someone who is a network administrator.
Let’s see the output of both the commands and the way we can use these.
root@ubuntu:~# uname -a
The parameter -a which I’ve supplied to uname, stands for “all”. This prints out the complete information. If the parameter is not added, all you will get as the output is “Linux”.
The tar command in Linux is used to create and extract archived files in Linux. We can extract multiple different archive files using the tar command.
To create an archive, we use the -c parameter and to extract an archive, we use the -x parameter. Let’s see it working.
#Compress
root@ubuntu:~# tar -cvf <archive name> <files separated by space>
#Extract
root@ubuntu:~# tar -xvf <archive name>
In the first line, we created an archive named Compress.tar with the New-File and New-File-Link. In the next command, we have extracted those files from the archive.
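tar can also compress the archive with gzip by adding the -z flag; the same flag is needed when extracting. Reusing the files from above:
#Compress with gzip
root@ubuntu:~# tar -czvf Compress.tar.gz New-File New-File-Link
#Extract
root@ubuntu:~# tar -xzvf Compress.tar.gz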
Now coming to the zip and unzip commands. Both these commands are very straightforward. You can use them without any parameters and they’ll work as intended. Let’s see an example below.
root@ubuntu:~# zip <archive name> <file names separated by space>
root@ubuntu:~# unzip <archive name>
Since we already have those files in the same directory, the unzip command prompts us before overwriting those files.
Learn more about the tar command (Link to article) and zip and unzip commands (Link to article)
If you wish to search for a specific string within an output, the grep command comes into the picture. We can pipe (|) the output to the grep command and extract the required string.
root@ubuntu:~# <Any command with output> | grep "<string to find>"
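For instance, to pick out the ssh-related entries from the full process list (a typical everyday use):
root@ubuntu:~# ps -ef | grep "ssh"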
This was a simple demonstration of the command. Learn more about the grep command (Link to article)
When outputting large files, the head and the tail commands come in handy. I’ve created a file named “Words” with a lot of words arranged alphabetically in it. The head command will output the first 10 lines from the file, while the tail command will output the last 10. This also includes any blank lines and not just lines with text.
root@ubuntu:~# head <file name>
root@ubuntu:~# tail <file name>
As you can see, the head command showed 10 lines from the top of the file.
The tail command outputted the bottom 10 lines from the file.
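Both commands accept the -n flag if you want a different number of lines:
root@ubuntu:~# head -n 5 Words
root@ubuntu:~# tail -n 5 Words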
Learn more about the tail command(Link to article)
Linux offers multiple commands to compare files. The diff, comm, and cmp commands compare differences and are some of the most useful Linux commands you must know. Let’s see the default outputs for all the three commands.
root@ubuntu:~# diff <file 1> <file 2>
As you can see above, I’ve added a small piece of text saying “This line is edited” to the New-File-Edited file.
root@ubuntu:~# cmp <file 1> <file 2>
The cmp command only tells us the byte and line at which the two files first differ, not the actual text. Let’s see what the comm command does.
root@ubuntu:~# comm <file 1> <file 2>
The text that’s aligned to the left is the text that’s only present in file 1. The center-aligned text is present only in file 2. And the right-aligned text is present in both the files.
By the looks of it, comm command makes the most sense when we’re trying to compare larger files and would like to see everything arranged together.
The sort command will provide a sorted output of the contents of a file. Let’s use the sort command without any parameters and see the output.
The basic syntax of the sort command is:
root@ubuntu:~# sort <filename>
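Two flags worth remembering: -r reverses the sort order, and -n sorts numerically instead of lexicographically:
root@ubuntu:~# sort -r <filename>
root@ubuntu:~# sort -n <filename>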
The export command is specially used when exporting environment variables in runtime. For example, if I wanted to update the bash prompt, I’ll update the PS1 environment variable. The bash prompt will be updated with immediate effect.
root@ubuntu:~# export <variable name>=<value>
If for some reason, your bash prompt doesn’t update, just type in bash and you should see the updated terminal prompt.
Learn more about the export command(Link to article)
The ssh command allows us to connect to an external machine on the network with the use of the ssh protocol. The basic syntax of the ssh command is:
root@ubuntu:~# ssh username@hostname
Learn more about ssh command(Link to article)
The service command in Linux is used for starting and stopping different services within the operating system. The basic syntax of the command is as below.
root@ubuntu:~# service ssh status
root@ubuntu:~# service ssh stop
root@ubuntu:~# service ssh start
As you can see in the image, the ssh server is running on our system.
While we’re on the topic of processes, let’s see how we can find active processes and kill them. To find the running processes, we can simply type ps in the terminal prompt and get the list of running processes.
root@ubuntu:~# ps
root@ubuntu:~# kill <process ID>
root@ubuntu:~# killall <process name>
For demonstration purposes, I’m creating a shell script with an infinite loop and will run it in the background.
With the use of the & symbol, I can pass a process into the background. As you can see, a new bash process with PID 14490 is created.
Now, to kill a process with the kill command, you can type kill followed by the PID of the process.
But if you do not know the process ID and just want to kill the process with the name, you can make use of the killall command.
You will notice that PID 14490 stayed active. That is because both times, I killed the sleep process.
Learn more about ps command (Link to article).
When working with Linux, the df and mount commands are very efficient utilities to mount filesystems and get details of the file system.
When I say mount, it means that we’ll connect the device to a folder so we can access the files from our filesystem. The default syntax to mount a filesystem is below:
root@ubuntu:~# mount /dev/cdrom /mnt
root@ubuntu:~# df -h
In the above case, /dev/cdrom is the device that needs to be mounted. Usually, a mountable device is found inside the /dev folder. /mnt is the destination folder to mount the device to. You can change it to any folder you want but I’ve used /mnt as it’s pretty much a system default folder for mounting devices.
To see the mounted devices and get more information about them, we make use of the df command. Just typing df will give us the data in bytes which is not readable. So we’ll use the -h parameter to make the data human-readable.
Learn more about the df command(Link to article)
The chmod and chown commands give us the ability to change file permissions and file ownership, and are among the most important Linux commands you should know.
The main difference between the functions of the two commands is that the chmod command allows changing file permissions, while chown allows us to change the file owners.
The default syntax for both the commands is chmod <parameter> filename and chown user:group filename
root@ubuntu:~# chmod +x loop.sh
root@ubuntu:~# chown root:root loop.sh
In the above example, we’re adding executable permissions to the loop.sh file with the chmod command. Apart from that, with the chown command, we’ve made the file owned by the root user and the root group.
If you later change the owner to www-data, the root root part changes to www-data, the new user with full file ownership.
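chmod also accepts numeric (octal) modes, which are often quicker than the symbolic form. For example, 644 gives the owner read-write access and everyone else read-only:
root@ubuntu:~# chmod 644 loop.sh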
Learn more about the chmod command(Link to article) and chown command (Link to article)
Moving on to the networking section in Linux, we come across the ifconfig and traceroute commands which will be frequently used if you manage a network.
The ifconfig command will give you the list of all the network interfaces along with the IP addresses, MAC addresses and other information about the interface.
root@ubuntu:~# ifconfig
There are multiple parameters that can be used but we’ll work with the basic command here.
When working with traceroute, you can simply specify the IP address, the hostname or the domain name of the endpoint.
root@ubuntu:~# traceroute <destination address>
Now obviously, localhost is just one hop (which is the network interface itself). You can try this same command with any other domain name or IP address to see all the routers that your data packets pass through to reach the destination.
Learn more about the ifconfig command(Link to article)
If you want to download a file from within the terminal, the wget command is one of the handiest command-line utilities available. This will be one of the important Linux commands you should know when working with source files.
When you specify the link for download, it has to directly be a link to the file. If the file cannot be accessed by the wget command, it will simply download the webpage in HTML format instead of the actual file that you wanted.
Let’s try an example. The basic syntax of the wget command is :
root@ubuntu:~# wget <link to file>
OR
root@ubuntu:~# wget -c <link to file>
The -c argument allows us to resume an interrupted download.
UFW and IPTables are firewall interfaces for the Linux Kernel’s netfilter firewall. IPTables directly passes firewall rules to netfilter while UFW configures the rules in IPTables which then sends those rules to netfilter.
Why do we need UFW when we have IPTables? Because IPTables is pretty difficult for a newbie. UFW makes things extremely easy. See the below example where we are trying to allow the port 80 for our webserver.
root@ubuntu:~# iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
root@ubuntu:~# ufw allow 80
I’m sure you now know why UFW was created! Look at how easy the syntax becomes. Both these firewalls are very comprehensive and can allow you to create any kind of configuration required for your network. Learn at least the basics of UFW or IPTables firewall as these are the Linux commands you must know.
Learn more opening ports on Linux(Link to article)
Different distros of Linux make use of different package managers. Since we’re working on a Ubuntu server, we have the apt package manager. But for someone working on a Fedora, Red Hat, Arch, or Centos machine, the package manager will be different.
Getting yourself well versed with the package manager of your distribution will make things much easier for you in the long run. So even if you have a GUI based package management tool installed, try and make use of the CLI based tool before you move on to the GUI utility. Add these to your list of Linux commands you must know.
“With great power, comes great responsibility”
This is the quote that’s displayed when a sudo enabled user(sudoer) first makes use of the sudo command to escalate privileges. This command is equivalent to having logged in as root (based on what permissions you have as a sudoer).
non-root-user@ubuntu:~$ sudo <command you want to run>
Password:
Just add the word sudo before any command that you need to run with escalated privileges and that’s it. It’s very simple to use, but can also be an added security risk if a malicious user gains access to a sudoer.
Learn more about the sudo command (Link to article)
Ever wanted to view the calendar in the terminal? Me neither! But there apparently are people who wanted it to happen and well here it is.
The cal command displays a well-presented calendar on the terminal. Just enter the word cal on your terminal prompt.
root@ubuntu:~# cal
root@ubuntu:~# cal May 2019
Even though I don’t need it, it’s a really cool addition! I’m sure there are people who are terminal fans and this is a really amazing option for them.
Do you have some commands that you run very frequently while using the terminal? It could be rm -r or ls -l, or it could be something longer like tar -xvzf. This is one of the productivity-boosting Linux commands you must know.
If you know a command that you run very often, it’s time to create an alias. What’s an alias? In simple terms, it’s another name for a command that you’ve defined.
root@ubuntu:~# alias lsl="ls -l"
OR
root@ubuntu:~# alias rmd="rm -r"
Now every time you enter lsl or rmd in the terminal, you’ll receive the output that you’d have received if you had used the full commands.
The examples here are for really small commands that you can still type by hand every time. But in some situations where a command has too many arguments that you need to type, it’s best to create a shorthand version of the same.
Learn more about alias command (LInk to article)
This command was created to convert and copy files across multiple file system formats. These days, the command is mostly used to create bootable USB drives for Linux, but there are still some important things you can do with it.
For example, if I wanted to back up the entire hard drive as is to another drive, I’ll make use of the dd command.
root@ubuntu:~# dd if=/dev/sdb of=/dev/sda
The if and of arguments stand for input file and output file.
The names of the commands make it very clear as to their functionality. But let’s demonstrate their functionality to make things more clear.
The whereis command will output the exact location of any command that you type in after the whereis command.
root@ubuntu:~# whereis sudo
sudo: /usr/bin/sudo /usr/lib/sudo /usr/share/man/man8/sudo.8.gz
The whatis command gives us an explanation of what a command actually is. Similar to the whereis command, you’ll receive the information for any command that you type after the whatis command.
root@ubuntu:~# whatis sudo
sudo (8) - execute a command as another user
A few sections earlier, we talked about the ps command. You observed that the ps command will output the active processes and end itself.
The top command is like a CLI version of the task manager in Windows. You get a live view of the processes and all the information accompanying those processes like memory usage, CPU usage, etc.
To get the top command, all you need to do is type the word top in your terminal.
The useradd and adduser commands serve the same purpose; on some distributions adduser is simply a symbolic link to useradd, while on others it is a wrapper script around it. This command allows us to create a new user in Linux.
root@ubuntu:~# useradd JournalDev -d /home/JD
The above command will create a new user named JournalDev with the home directory as /home/JD.
The usermod command, on the other hand, is used to modify existing users. You can modify any value of the user including the groups, the permissions, etc.
For example, if you want to add more groups to the user, you can type in:
root@ubuntu:~# usermod -a -G sudo,audio,mysql JournalDev
Learn more on how to create and manage users on Linux (Link to article)
Now that you know how to create new users, let’s also set the password for them. The passwd command lets you set the password for your own account, or if you have the permissions, set the password for other accounts.
The command usage is pretty simple:
root@ubuntu:~# passwd
New password:
If you add the username after passwd, you can set passwords for other users. Enter the new password twice and you’re done. That’s it! You will have a new password set for the user!
This happened to be a very long article but I’m sure will be something you can refer to whenever required. As we add more articles to JournalDev, we will continue to add links to those articles here.
We hope this article was useful to you. If you have any questions, feel free to comment down below.
Conditional programming is an important part of any programming language because executing every single statement in our program is, more often than not, undesirable.
And we need a way to conditionally execute statements. The if-else in shell scripts serves this exact situation.
One of the most important parts of conditional programming is the if-else statement. An if-else statement allows you to execute statements conditionally in your code.
We use if-else in shell scripts when we wish to evaluate a condition, then decide to execute one set between two or more sets of statements using the result.
This essentially allows us to choose a response to the result which our conditional expression evaluates to.
Now we know what is an if-else function and why is it important for any programmer, regardless of their domain. To understand if-else in shell scripts, we need to break down the working of the conditional function.
Let us have a look at the syntax of the if-else condition block.
if [condition]
then
statement1
else
statement2
fi
Here we have four keywords, namely if, then, else and fi.
An important thing to keep in mind is that, like C programming, shell scripting is case sensitive. Hence, you need to be careful while using the keywords in your code.
It is easy to see the syntax of a function and believe you know how to use it. But it is always a better choice to understand a function through examples because they help you understand the role that different aspects of a function play.
Here are some useful examples of if-else in shell scripts to give you a better idea of how to use this tool.
| Command | Description |
| --- | --- |
| && | Logical AND |
| $0 | Argument 0, i.e. the command used to run the script |
| $1 | First argument (change the number to access further arguments) |
| -eq | Equality check |
| -ne | Inequality check |
| -lt | Less than |
| -le | Less than or equal |
| -gt | Greater than |
| -ge | Greater than or equal |
When trying to understand the working of a function like if-else in a shell script, it is good to start things simple. Here, we initialize two variables m and n, then use the if-else statement to check if the two variables are equal. The bash script should look as follows for this task.
#!/bin/bash
m=1
n=2
if [ $n -eq $m ]
then
echo "Both variables are the same"
else
echo "Both variables are different"
fi
Output:
Both variables are different
The more common use of if-else in shell scripts is for comparing two values. Comparing a variable against another variable or a fixed value helps is used in a variety of cases by all sorts of programmers.
For the sake of this example, we will be initializing two variables and using the if-else function to find the variable which is greater than the other.
#!/bin/bash
a=2
b=7
if [ $a -ge $b ]
then
echo "The variable 'a' is greater than the variable 'b'."
else
echo "The variable 'b' is greater than the variable 'a'."
fi
Output:
The variable 'b' is greater than the variable 'a'.
Sometimes we come across situations where we need to deal with and differentiate between even and odd numbers. This can be done with if-else in shell scripts if we take the help of the modulus operator.
The modulus operator divides a number with a divisor and returns the remainder.
As we know all even numbers are a multiple of 2, we can use the following shell script to check for us whether a number is even or odd.
#!/bin/bash
n=10
if [ $((n % 2)) -eq 0 ]
then
echo "The number is even."
else
echo "The number is odd."
fi
Output:
The number is even
As you can see, we’ve enclosed part of the condition within double parentheses. That’s because we need the modulus operation to be performed before the condition is checked. The $(( )) form performs C-style arithmetic expansion, which lets you use C-style expressions within bash scripts.
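Equivalently, bash can run the whole test inside C-style double parentheses, which many find cleaner. This is a minimal variant of the same script:
#!/bin/bash
n=10
if (( n % 2 == 0 ))
then
echo "The number is even."
else
echo "The number is odd."
fi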
The if-else function is known for its versatility and range of application. In this example, we will use if-else in shell script to make the interface for a password prompt.
To do this, we will ask the user to enter the password and store it in the variable pass.
If it matches the pre-defined password, which is ‘password’ in this example, the user will get the output “The password is correct.”
Else, the shell script will tell the user that the password was incorrect and ask them to try again.
#!/bin/bash
echo "Enter password"
read pass
if [ "$pass" = "password" ]
then
echo "The password is correct."
else
echo "The password is incorrect, try again."
fi
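As a small refinement (not part of the original script), bash’s built-in read supports the -s flag, which suppresses echo so the password is not displayed as it is typed:
echo "Enter password"
read -s pass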
The function of if-else in shell script is an important asset for shell programmers. It is the best tool to use when you need to execute a set of statements based on pre-defined conditions.
The if-else block is one, if not the most essential part of conditional programming. By regulating the execution of specific statements you not only make your code more efficient but you also free up the precious time which the processor might have wasted executing statements which are unnecessary for a specific case.
We hope this tutorial was able to help you understand how to use the if-else function. If you have any queries, feedback or suggestions, feel free to reach out to us in the comments below.
This makes it important for us to be aware about the different types of shells available in Linux. In this tutorial, we will discuss what is a shell and why is it important.
Further, we will explore different types of shells in Linux to understand their functions and properties.
Whenever a user logs in to the system or opens a console window, the kernel runs a new shell instance. The kernel is the heart of any operating system.
It is responsible for the control, management, and execution of processes, and for ensuring proper utilization of system resources.
A shell is a program that acts as an interface between a user and the kernel. It allows a user to give commands to the kernel and receive responses from it. Through a shell, we can execute programs and utilities on the kernel. Hence, at its core, a shell is a program used to execute other programs on our system.
Being able to interact with the kernel makes shells a powerful tool. Without the ability to interact with the kernel, a user cannot access the utilities offered by their machine’s operating system.
Let’s understand the major shells that are available for the Linux environment.
If you now understand what a kernel is, what a shell is, and why a shell is so important for Linux systems, let’s move on to learning about the different types of shells that are available.
Each of these shells has properties that make them highly efficient for a specific type of use over other shells. So let us discuss the different types of shells in Linux along with their properties and features.
Developed at AT&T Bell Labs by Steve Bourne, the Bourne shell is regarded as the first UNIX shell ever. It is denoted as sh. It gained popularity due to its compact nature and high speeds of operation.
This is what made it the default shell for Solaris OS. It is also used as the default shell for all Solaris system administration scripts. Start reading about shell scripting here.
However, the Bourne shell has some major drawbacks.
The complete path-name for the Bourne shell is /bin/sh and /sbin/sh. By default, it uses the prompt # for the root user and $ for the non-root users.
More popularly known as the Bash shell, the GNU Bourne-Again shell was designed to be compatible with the Bourne shell. It incorporates useful features from different types of shells in Linux such as Korn shell and C shell.
It allows us to automatically recall previously used commands and edit them with help of arrow keys, unlike the Bourne shell.
The complete path-name for the GNU Bourne-Again shell is /bin/bash. By default, it uses the prompt bash-VersionNumber# for the root user and bash-VersionNumber$ for the non-root users.
The C shell was created at the University of California by Bill Joy. It is denoted as csh. It was developed to include useful programming features like in-built support for arithmetic operations and a syntax similar to the C programming language.
Further, it incorporated command history which was missing in different types of shells in Linux like the Bourne shell. Another prominent feature of a C shell is “aliases”.
The complete path-name for the C shell is /bin/csh. By default, it uses the prompt hostname# for the root user and hostname% for the non-root users.
The Korn shell was developed at AT&T Bell Labs by David Korn, to improve the Bourne shell. It is denoted as ksh. The Korn shell is essentially a superset of the Bourne shell.
Besides supporting everything that would be supported by the Bourne shell, it provides users with new functionalities. It allows in-built support for arithmetic operations while offering interactive features which are similar to the C shell.
The Korn shell runs scripts made for the Bourne shell, while offering string, array and function manipulation similar to the C programming language. It also supports scripts which were written for the C shell. Further, it is faster than most different types of shells in Linux, including the C shell.
The complete path-name for the Korn shell is /bin/ksh. By default, it uses the prompt # for the root user and $ for the non-root users.
The Z Shell or zsh is a sh shell extension with tons of improvements for customization. If you want a modern shell that has all the features and much more, the zsh shell is what you’re looking for.
Some noteworthy features of the z shell include:
Let us summarise the different shells in Linux which we discussed in this tutorial in the table below.
| Shell | Complete path-name | Prompt for root user | Prompt for non-root user |
| --- | --- | --- | --- |
| Bourne shell (sh) | /bin/sh and /sbin/sh | # | $ |
| GNU Bourne-Again shell (bash) | /bin/bash | bash-VersionNumber# | bash-VersionNumber$ |
| C shell (csh) | /bin/csh | hostname# | hostname% |
| Korn shell (ksh) | /bin/ksh | # | $ |
| Z Shell (zsh) | /bin/zsh | <hostname># | <hostname>% |
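To check which shells are installed on your own system, list /etc/shells; you can switch your login shell with chsh (the zsh path below is just an example):
$ cat /etc/shells
$ chsh -s /bin/zsh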
Shells are one of, if not the most powerful tools available to a Linux user. Without shells, it is practically impossible for a person to utilise the features and functionality offered by the kernel installed on their system.
While we covered only the most commonly used types of shells in Linux, there are many other shell types worth exploring.
We hope this tutorial was able to help you understand the concept of shells, along with the properties of the different types of shells in Linux. If you have any feedback, queries, or suggestions, feel free to reach out to us in the comments below.
But don’t lose hope if you’ve deleted the files long ago. There’s still a possibility that the file data is still present on your hard drive. So read on, you might as well be able to recover all the files that you’ve lost before!
There are very few things that you need to consider because most of the data recovery tools work in a similar manner by accessing the fragmented bits on your hard drive.
The major differences are usually in the ease of use, user interface, and features. So let’s go over the features and functionality of the top 20 best Linux data recovery tools in this article.
This is one of my favorite utilities. It’s a command-line based tool, but it is really easy to use and very interactive. The utility starts working its magic as soon as you run the command.
We wrote a recent tutorial on the testdisk utility which walks you through the installation and the steps to recover files.
Some of the features of TestDisk:
Another really good command-line utility is Mondo Rescue, which has a few unique features that are really helpful for people working on multiple different types of file systems. This is the one utility that has been used for decades to back up, restore, and recover data from all types of storage devices, tape drives included!
Some features of Mondo Rescue:
We’re still sticking with command-line utilities. This is a utility that was developed by GNU. This is a free and open-source utility like all the other utilities by GNU.
Some of the features of ddrescue:
This can work as a regular utility or as a bootable CD/USB that you can plug into any device, boot into the recovery utility, and start recovering data. The benefit of such a utility is that it is platform-independent allowing you to restore data for pretty much any operating system including Linux.
Some features of Redo Backup and Recovery:
This is another recovery utility by CGSecurity (the other one is TestDisk). PhotoRec was specifically created for recovering deleted photos and other graphic media from SD cards and hard drives.
Some of the features of PhotoRec:
If your operating system no longer boots and you need to recover files from your hard disk, this is the live CD to use. It can be used as a CD or a USB based on availability.
Some of the features of Boot Repair Live CD:
This is a forensic data recovery tool that is pre-installed in Kali Linux but can be installed on pretty much any other Linux distro. This tool can also recover data from images (like those created with the dd command).
Some of the features of Foremost:
Originally based on Foremost, Scalpel is another file carving utility that works on Windows and Linux. This utility also works on image files but has an added advantage of multithreading and asynchronous IO.
Some features of Scalpel:
This is more of a collection of tools than a tool in itself. If you are stuck in a situation where you’re not able to boot into your system, this is the one bootable recovery CD that will help you out.
Some features of SystemRescue CD:
Similar to the SystemRescue CD, Ultimate Boot CD is a collection of diagnostic tools. But it doesn’t end at that. If you see the above screenshot, you’ll notice “Parted Magic” and “UBCD FreeDOS”. Yes, that’s exactly what it says.
The CD also packages these two operating systems which can be booted live from this menu to troubleshoot any of your Linux or Windows issues. The full list of tools and utilities packaged inside this CD are available on the website but here is a list of a few of the tools.
Some of the features of Ultimate Boot CD:
Now Knoppix is not your regular Linux recovery utility like the ones listed above. Though the entire purpose of this Linux distro was to be run live and recover lost data or operating systems, it is fully capable of being run as the sole operating system for your computer.
It comes packaged with almost all the tools you’d ever need to recover lost data.
Some of the features of Knoppix:
In some of the above Live CDs, we mentioned the GParted tool which is a GUI layer to the GNU Parted utility. Well, if all you want is the GParted tool for recovery, this live CD will solve your problem.
GParted Live is a live CD that gives you instant access to GParted if you want to recover a failing system or partition.
Some of the features of GParted Live:
SafeCopy is one of the best Linux data recovery tools and works when all else fails. This tool is used for recovering data from damaged and bad sectors on a hard drive.
SafeCopy also tries to get as much data as possible from the source drive, even resorting to some device-specific low-level operations wherever applicable.
Some of the features of SafeCopy:
The grep utility that we use for finding text on the terminal output is powerful enough to also help us find lost text data. Have a look at the code sample below:
grep -a -A 400 -B 25 'string to find here' /dev/sdb1 > recover.txt
This is a command-line tool that is created for ext3 file systems for data recovery. With just two commands, you can recover and restore any deleted file that was recovered with this tool.
ext3grep --dump-name <drive>
ext3grep --restore-all <drive>
#Restored data is stored in this folder
cd RESTORED_FILES
This is a command-line tool like many others in the list and is available from the package repositories for most Linux distributions.
Some of the features of ext4magic:
This utility has its roots in the code of ext3grep. The ext3grep utility used the disk journal to recover files and so does the extundelete. This utility searches the disk journal for old copies of an existing inode to find more details and collectively forms it into a file.
Some of the features of extundelete:
This is one of the best data recovery tools in Linux from the list. It has a free and a premium version but for personal use, it’s a completely free tool. It uses IntelligentScan technology that can recover severely damaged data too.
Some of the features of R-Undelete:
This is a script written to simplify the use of complicated tools like The Sleuth Kit and PhotoRec. Using these tools in combination also makes this script a more efficient way to correctly extract more of the deleted inodes.
Since this is a script that combines the functionality of multiple tools, the functionality is similar to the tools themselves.
This is a tool that’s made specifically for forensic investigators to perform analysis on hard drives and collect evidence. So the Sleuth Kit uses very efficient and advanced algorithms to extract as much deleted data as possible from hard drives.
The previous tool that we talked about, “ext3undel”, leverages technology from The Sleuth Kit, but if you would like to use this tool as a standalone one, you can use it along with its GUI front-end “Autopsy”.
Some of the features of The Sleuth Kit:
We hope you found the right tool for your data recovery needs with this article. For a basic user, pretty much any tool from the list will work. But for people who’ve special needs with file recovery, they need to find the one that has the features that they need.
Always remember though, these tools try to recover files that have been deleted based on the metadata that still remains. But that’s not a guarantee of recovery so it’s always best to have backups of all the data that’s being stored.
It is a hidden file, and a simple ls command won’t show it.
To view hidden files, you can run the below command:
$ ls -a
You can see the .bashrc command in the first column. The contents of .bashrc can be changed to define functions, command aliases, and customize the bash.
The .bashrc file has a lot of comments, which make it easy to understand.
To view the bashrc file:
$ cat .bashrc
A few examples of editing .bashrc are provided below.
bashrc can be used to define functions that reduce redundant efforts. These functions can be a collection of basic commands. These functions can even use arguments from the terminal.
Let’s define a function that tells the date in a more descriptive manner.
First you’ll need to enter the .bashrc file in editing mode.
$ vi .bashrc
This is what the terminal will look like. To start editing, press i to enter insert mode. At the end of the file, add the following code:
today()
{
    echo This is a `date +"%A %d in %B of %Y (%r)"`
    return
}
Press escape. Then to save and exit from vi, press colon (:) followed by ‘wq’ and enter.
The changes are saved. To reflect the changes in bash, either exit and launch the terminal again,
or use the command:
$ source .bashrc
To run the function just created, call today:
$ today
Let’s create another function. This would combine the process of creating a directory and then entering that directory into a single command.
In the bashrc file add:
mkcd ()
{
mkdir -p -- "$1" && cd -P -- "$1"
}
This combines two separate commands: mkdir, which creates the directory, and cd, which enters it.
$1 represents the first parameter passed along with the function call.
To use this function:
$ mkcd directory_name
This command will pass ‘directory_name’ as the parameter.
Our function will first use mkdir to create the directory by the name ‘directory_name’ and then cd into ‘directory_name’.
Aliases are different names for the same command. Consider them as shortcuts to a longer form command. The .bashrc file already has a set of predefined aliases.
As a user, if there is an alias that you use regularly, then instead of defining it every time you open the terminal, you can save it in the .bashrc file.
For example, we can replace the whoami command with the following line of code.
alias wmi='whoami'
Don’t forget to save the edit and then run:
$ source .bashrc
Now I can use the wmi command and the terminal will run it as whoami.
In general aliases can be defined by adding the statement:
alias aliasname='commands'
Note that there should be no spaces between the alias name, the ‘=’, and the command.
Aliases can also be used to store lengthy paths to directories.
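For example, an alias can replace a long cd into a deeply nested project directory; the path here is purely hypothetical:
alias proj='cd ~/work/projects/journaldev'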
There are a lot of ways to customize the terminal using bashrc file.
To change the text displayed at the prompt, add the following line at the end of the file :
PS1="JournalDev> "
Save the edit and run :
$ source .bashrc
Once you refresh the bashrc file using the source command, your bash prompt will change like the image below.
You can also change the limit of command history that is displayed when the UP arrow is pressed. To do so, change the HISTSIZE and HISTFILESIZE variables in the bashrc file.
For example, to keep 2000 commands in the in-memory history and 5000 in the history file (the exact values are up to you), the lines in .bashrc would look like this:
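# Example values; adjust to taste
HISTSIZE=2000
HISTFILESIZE=5000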
Redundant command sequences can be put in bashrc under a function. This will save a lot of time and effort. While editing the bashrc file, users should be careful and always take a backup before making any changes.
]]>