A flexible and versatile programming language, Python is effective for many use cases, including scripting, automation, data analysis, machine learning, and back-end development. First published in 1991 and named after the British comedy group Monty Python, the language reflects its development team's goal of making Python fun to use. Quick to set up, and written in a relatively straightforward style with immediate feedback on errors, Python is a great choice for beginners and experienced developers alike. Python 3 is the most current version of the language and is considered to be the future of Python.
This tutorial will get your Debian 9 server set up with a Python 3 programming environment. Programming on a server has many advantages and supports collaboration across development projects.
In order to complete this tutorial, you should have a non-root user with sudo privileges on a Debian 9 server. To learn how to achieve this setup, follow our Debian 9 initial server setup guide.
If you’re not already familiar with a terminal environment, you may find the article “[An Introduction to the Linux Terminal](https://www.digitalocean.com/community/tutorials/an-introduction-to-the-linux-terminal)” useful for becoming better oriented with the terminal.
With your server and user set up, you are ready to begin.
Debian Linux ships with both Python 3 and Python 2 pre-installed. To make sure that our versions are up to date, let’s update and upgrade the system with apt, Debian’s interface to the Advanced Packaging Tool:
- sudo apt update
- sudo apt -y upgrade
The -y flag confirms that we agree to install all of the requested items.
Once the process is complete, we can check the version of Python 3 that is installed in the system by typing:
- python3 -V
You’ll receive output in the terminal window that will let you know the version number. While this number may vary, the output will be similar to this:
Output
Python 3.5.3
To manage software packages for Python, let’s install pip, a tool that will install and manage programming packages we may want to use in our development projects. You can learn more about modules or packages that you can install with pip by reading “How To Import Modules in Python 3.”
- sudo apt install -y python3-pip
Python packages can be installed by typing:
- pip3 install package_name
Here, package_name can refer to any Python package or library, such as Django for web development or NumPy for scientific computing. So if you would like to install NumPy, you can do so with the command pip3 install numpy.
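As a quick sanity check after a pip3 install, you can ask Python itself whether a module is importable. Here is a minimal sketch using only the standard library (numpy is just an example name; substitute whatever package you installed):

```python
from importlib.util import find_spec

def is_installed(module_name):
    """Return True if module_name can be imported in the current environment."""
    return find_spec(module_name) is not None

print(is_installed("json"))   # True: json ships with the standard library
print(is_installed("numpy"))  # True only after `pip3 install numpy`
```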
There are a few more packages and development tools to install to ensure that we have a robust set-up for our programming environment:
- sudo apt install build-essential libssl-dev libffi-dev python3-dev
Once Python is set up, and pip and other tools are installed, we can set up a virtual environment for our development projects.
Virtual environments enable you to have an isolated space on your server for Python projects, ensuring that each of your projects can have its own set of dependencies that won’t disrupt any of your other projects.
Setting up a programming environment provides us with greater control over our Python projects and over how different versions of packages are handled. This is especially important when working with third-party packages.
You can set up as many Python programming environments as you want. Each environment is basically a directory or folder on your server that has a few scripts in it to make it act as an environment.
While there are a few ways to achieve a programming environment in Python, we’ll be using the venv module here, which is part of the standard Python 3 library. Let’s install venv by typing:
- sudo apt install -y python3-venv
With this installed, we are ready to create environments. Let’s either choose which directory we would like to put our Python programming environments in, or create a new directory with mkdir, as in:
- mkdir environments
- cd environments
Once you are in the directory where you would like the environments to live, you can create an environment by running the following command:
- python3.5 -m venv my_env
Essentially, venv sets up a new directory that contains a few items, which we can view with the ls command:
- ls my_env
Output
bin include lib lib64 pyvenv.cfg share
Together, these files work to make sure that your projects are isolated from the broader context of your local machine, so that system files and project files don’t mix. This is good practice for version control and to ensure that each of your projects has access to the particular packages that it needs. Python Wheels, a built-package format for Python that can speed up your software production by reducing the number of times you need to compile, will be in the share directory.
To use this environment, you need to activate it, which you can achieve by typing the following command that calls the activate script:
- source my_env/bin/activate
Your command prompt will now be prefixed with the name of your environment, in this case my_env. Depending on which version of Debian Linux you are running, your prefix may appear somewhat differently, but the name of your environment in parentheses should be the first thing you see on your line.
This prefix lets us know that the environment my_env is currently active, meaning that when we create programs here they will use only this particular environment’s settings and packages.
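You can also confirm from inside Python that an environment is active. One common check (shown here as a sketch, not the only way) is that sys.prefix is redirected into the environment's directory while sys.base_prefix still points at the system installation:

```python
import sys

def in_virtualenv():
    """True when this interpreter was started from a venv-style environment."""
    return sys.prefix != sys.base_prefix

# Inside an activated my_env, sys.prefix ends with something like "environments/my_env".
print(sys.prefix)
print(in_virtualenv())
```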
Note: Within the virtual environment, you can use the command python instead of python3, and pip instead of pip3 if you would prefer. If you use Python 3 on your machine outside of an environment, you will need to use the python3 and pip3 commands exclusively.
After following these steps, your virtual environment is ready to use.
Now that we have our virtual environment set up, let’s create a traditional “Hello, World!” program. This will let us test our environment and provides us with the opportunity to become more familiar with Python if we aren’t already.
To do this, we’ll open up a command-line text editor such as nano and create a new file:
- nano hello.py
Once the text file opens up in the terminal window we’ll type out our program:
print("Hello, World!")
Exit nano by typing the CTRL and X keys, and when prompted to save the file, press y.
Once you exit out of nano and return to your shell, let’s run the program:
- python hello.py
The hello.py program that you just created should cause your terminal to produce the following output:
Output
Hello, World!
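For illustration, the same write-and-run cycle can be scripted with the standard library: the sketch below writes hello.py into a temporary directory and executes it with the current interpreter:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    script = Path(tmp) / "hello.py"
    script.write_text('print("Hello, World!")\n')
    # Run the script with the same Python that is executing this snippet.
    result = subprocess.run(
        [sys.executable, str(script)],
        capture_output=True, text=True, check=True,
    )

print(result.stdout.strip())  # Hello, World!
```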
To leave the environment, simply type the command deactivate and you will return to your original shell.
Congratulations! At this point you have a Python 3 programming environment set up on your Debian 9 Linux server and you can now begin a coding project!
If you are using a local machine rather than a server, refer to the tutorial that is relevant to your operating system in our “How To Install and Set Up a Local Programming Environment for Python 3” series.
With your server ready for software development, you can continue to learn more about coding in Python by reading our free How To Code in Python 3 eBook, or consulting our Programming Project tutorials.
<VirtualHost *:80>
    ServerName domain.com
    ServerAlias www.domain.com
    Redirect permanent / https://domain.com/
    ServerAdmin webmaster@localhost

    Alias /static /var/www/static-root
    <Directory /var/www/static-root>
        Require all granted
    </Directory>

    Alias /media /var/www/media-root
    <Directory /var/www/media-root>
        Order deny,allow
        Allow from all
    </Directory>

    <Directory /var/www/venv/src/myproject >
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>

    WSGIDaemonProcess myproject python-path=/var/www/venv/src/:/var/www/venv/lib/python3.5/site-packages
    WSGIProcessGroup myproject
    WSGIScriptAlias / /var/www/venv/src/myproject /wsgi.py

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
So, can anyone help me please?
And I checked sources.list:
deb http://mirrors.digitalocean.com/debian stretch main contrib non-free
deb-src http://mirrors.digitalocean.com/debian stretch main contrib non-free
deb http://security.debian.org stretch/updates main contrib non-free
deb-src http://security.debian.org stretch/updates main contrib non-free
deb http://mirrors.digitalocean.com/debian stretch-updates main contrib non-free
deb-src http://mirrors.digitalocean.com/debian stretch-updates main contrib non-free
Is something wrong in sources.list, or what? Could you help solve this, please?
I’m very much a beginner with Linux and website code as a whole, so please be patient with me. If it comes down to it, I can try to make the website itself listen to a different port, but that’s a process I don’t even know how to begin. Both ports should be open and functional; port 8081 is for certain.
I don’t know what logs would be relevant here, but I’d be happy to fetch whatever’s needed. Currently, it’s running on Debian 9. This whole issue started when ACME v1 was shut down and I updated to ACME v2. Beforehand, I barely had to touch the code on my site at all, as it was a fork of a now-abandoned project. Which does mean I don’t know it as well as I likely should… Whoops.
Thank you so much for taking the time to look at my question, and apologies in advance if I miss something obvious! Any advice at all is greatly appreciated
INFO[2021-06-04T15:55:01.566710794Z] Starting up
failed to start daemon: pid file found, ensure docker is not running or delete /var/run/docker.pid
I deleted the PID file and then this error came:
INFO[2021-06-04T15:53:13.559081445Z] Starting up
INFO[2021-06-04T15:53:13.561387278Z] parsed scheme: "unix" module=grpc
INFO[2021-06-04T15:53:13.561516232Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2021-06-04T15:53:13.561653000Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>} module=grpc
INFO[2021-06-04T15:53:13.561728791Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2021-06-04T15:53:13.563173017Z] parsed scheme: "unix" module=grpc
INFO[2021-06-04T15:53:13.563257881Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2021-06-04T15:53:13.563334916Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>} module=grpc
INFO[2021-06-04T15:53:13.563401769Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2021-06-04T15:53:13.569060266Z] [graphdriver] using prior storage driver: overlay2
failed to start daemon: error while opening volume store metadata database: timeout
Both commands were launched with sudo dockerd.
Assignment outside of section. Ignoring
A few instances of that… then lastly:
Service lacks both ExecStart= and ExecStop= setting. Refusing
After the initial scare, I checked the apt logs and it seems that the do-agent service was upgraded silently:
apt-get -o Dpkg::Options::=--force-confdef -o Dpkg::Options::=--force-confold -qq install -y --only-upgrade do-agent
No other alarms were going off, and everything seems to be working nominally.
I think the changes to /etc/passwd were made to the do-agent user during the upgrade process, but I would like to know from you guys if this is actually the case.
I tried with this but it’s not working:
sysctl net.ipv4.ip_forward=1
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 8270 -j DNAT --to-destination 192.168.245.1:8270
iptables -A FORWARD -p tcp -d 192.168.245.2 --dport 8270 -j ACEPTA
iptables -A POSTROUTING -t nat -s 192.168.245.2 -o eth0 -j MASQUERADE
I tried it this other way:
sysctl net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 8270 -j DNAT --to-destination 192.168.245.2:8270
iptables -t nat -A POSTROUTING -j MASQUERADE
I tried it via iptables and it doesn’t work.
I’m actually trying to install PeerTube on my server, but the thing is that there is a redirect loop in my nginx config, and I can’t find it… Can someone help me? Here is my nginx config (of course, example.com has been replaced with the name of my website):
server {
    server_name peertube.example.com;

    location / {
        proxy_buffering off;
        proxy_pass http://localhost/;
        proxy_redirect off;
        client_max_body_size 40G;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Websocket tracker
    location /tracker/socket {
        # Peers send a message to the tracker every 15 minutes
        # Don't close the websocket before this time
        proxy_read_timeout 1200s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://localhost;
    }

    location /socket.io {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://localhost;
        # enable WebSockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/peertube.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/peertube.example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = peertube.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name peertube.example.com;
    listen 80;
    return 404; # managed by Certbot
}
MySQL is a well-known open-source database used to store and retrieve data for a wide variety of popular applications. It is the M in the LAMP stack, which also includes the Linux operating system, the Apache web server, and the PHP programming language.

In Debian 9, the default database is MariaDB, a community-developed fork of the MySQL project. MariaDB works well in most cases, but if you need features found only in Oracle's MySQL, you can install and use packages from a repository maintained by the MySQL developers.

To install the latest version of MySQL, we will add this repository, install the MySQL software itself, secure the installation, and finally test that MySQL is running and responding to commands.
To follow this tutorial, you will need a Debian 9 server with a non-root user with sudo privileges.

The MySQL developers provide a .deb package that handles configuring and installing the official MySQL software repositories. Once the repositories are set up, we can use Debian's standard apt command to install the software. We will download this .deb file with wget and install it with the dpkg command.
First, load the MySQL download page in your web browser. Find the Download button in the lower-right corner and click through to the next page. That page will prompt you to log in or sign up for an Oracle account. You can skip that and instead look for the link that says No thanks, just start my download. Right-click the link and select Copy Link Address (this option may be worded differently, depending on your browser).

Now we'll download the file. On your server, move to a directory you can write to. Download the file with wget, remembering to paste the address you just copied in place of the highlighted portion below:
- cd /tmp
- wget https://dev.mysql.com/get/mysql-apt-config_0.8.10-1_all.deb
The file should now be downloaded to our current directory. List the files to make sure:
- ls
You should see the file name listed:
Output
mysql-apt-config_0.8.10-1_all.deb
. . .
Now we're ready to install:
- sudo dpkg -i mysql-apt-config*
dpkg is used to install, remove, and inspect .deb software packages. The -i flag indicates that we'd like to install from the specified file.

During the installation, you'll be presented with a configuration screen where you can specify which version of MySQL you'd prefer, along with an option to install repositories for other MySQL-related tools. By default, only the repository information for the latest stable version of MySQL will be added. This is what we want, so use the down arrow to navigate to the Ok menu option and press ENTER.
The package will now finish adding the repository. Refresh your apt package cache to make the new software packages available:
- sudo apt update
Now that we've added the MySQL repositories, we're ready to install the actual MySQL server software. If you ever need to update the configuration of these repositories, just run sudo dpkg-reconfigure mysql-apt-config, select new options, and then run sudo apt-get update to refresh your package cache.
Having added the repository and refreshed our package cache, we can now use apt to install the latest MySQL server package:
- sudo apt install mysql-server
apt will look at all available mysql-server packages and determine the newest and best-suited MySQL package. It will then calculate package dependencies and ask you to approve the installation. Type y, then press ENTER. The software will install.

You will be asked to set a root password during the configuration phase of the installation. Choose and confirm a secure password to continue. After that, a prompt will appear asking for a default authentication plugin. Read the displayed text to understand the choices. If you are not sure, choosing Use Strong Password Encryption is safer.
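If you later find that an older client library cannot authenticate against the stronger default plugin, the choice can also be made server-wide in the MySQL configuration via the default_authentication_plugin setting. The fragment below is only an illustrative sketch; the exact configuration file path varies by installation, so check where your install keeps its mysqld options:

```ini
# Illustrative fragment; on many Debian installs the file lives under /etc/mysql/.
[mysqld]
default_authentication_plugin=mysql_native_password
```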
MySQL should be installed and running now. Let's check, using systemctl:
- sudo systemctl status mysql
Output
● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 15:58:21 UTC; 30s ago
Docs: man:mysqld(8)
http://dev.mysql.com/doc/refman/en/using-systemd.html
Main PID: 12805 (mysqld)
Status: "SERVER_OPERATING"
CGroup: /system.slice/mysql.service
└─12805 /usr/sbin/mysqld
Sep 05 15:58:15 mysql1 systemd[1]: Starting MySQL Community Server...
Sep 05 15:58:21 mysql1 systemd[1]: Started MySQL Community Server.
The Active: active (running) line means MySQL is up and running. Now let's make the installation a little more secure.

MySQL includes a command we can use to update the security of our installation. Let's run it now:
- mysql_secure_installation
MySQL will ask for the root password you set up during installation. Type it in and press ENTER. Now we'll answer a series of yes-or-no prompts. Let's go through them:

First, we are asked about the validate password plugin, which can automatically enforce certain password strength rules for your MySQL users. Enabling this is a decision you'll need to make based on your individual security needs. Type y and press ENTER to enable it, or just press ENTER to skip it. If enabled, you will also be prompted to choose a level from 0 to 2 for how strict the password validation will be. Choose a number and press ENTER to continue.

Next you'll be asked if you want to change the root password. Since we just created the password when we installed MySQL, we can safely skip this. Press ENTER to continue without updating the password.

The rest of the prompts can be answered yes. You will be asked about removing the anonymous MySQL user, disallowing remote root login, removing the test database, and reloading the privilege tables to ensure the previous changes take effect properly. These are all a good idea. Type y and press ENTER for each of these prompts.

The script will exit after all the prompts are answered. Now our MySQL installation is secured. Let's test it one more time by running a client that connects to the server and returns some information.
mysqladmin is a command-line administrative client for MySQL. We'll use it to connect to the server and output some version and status information:
- mysqladmin -u root -p version
The -u root option tells mysqladmin to log in as the MySQL root user, -p tells the client to prompt for a password, and version is the command we want it to run.

The output will show us what version of the MySQL server is running, its uptime, and some other status information:
Output
mysqladmin Ver 8.0.12 for Linux on x86_64 (MySQL Community Server - GPL)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Server version 8.0.12
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/run/mysqld/mysqld.sock
Uptime: 6 min 42 sec
Threads: 2 Questions: 12 Slow queries: 0 Opens: 123 Flush tables: 2 Open tables: 99 Queries per second avg: 0.029
If you received similar output, congratulations! You've successfully installed the latest MySQL server and secured it.

You have completed installing the latest version of the MySQL database, which should work with many popular applications.
Nginx is one of the most popular web servers in the world and is responsible for hosting some of the largest and highest-traffic sites on the internet. It is more resource-friendly than Apache in most cases and can be used as a web server or a reverse proxy.

In this guide, we'll discuss how to install Nginx on a Debian 9 server.

Before you begin this guide, you should have a regular, non-root user with sudo privileges configured on your server, as well as an active firewall. You can learn how to set these up by following our initial server setup guide for Debian 9.

When you have an account available, log in as your non-root user to begin.

Because Nginx is available in Debian's default repositories, it is possible to install it from these repositories using the apt packaging system.

Since this is our first interaction with the apt packaging system in this session, we will also update our local package index so that we have access to the most recent package listings. Afterwards, we can install nginx:
- sudo apt update
- sudo apt install nginx
After accepting the procedure, apt will install Nginx and any required dependencies to your server.

Before testing Nginx, the firewall software needs to be adjusted to allow access to the service.

List the application configurations that ufw knows how to work with by typing:
- sudo ufw app list
You should get a listing of the application profiles:
Output
Available applications:
...
Nginx Full
Nginx HTTP
Nginx HTTPS
...
As you can see, there are three profiles available for Nginx:

- Nginx Full: opens both port 80 (normal, unencrypted web traffic) and port 443 (TLS/SSL-encrypted traffic)
- Nginx HTTP: opens only port 80 (normal, unencrypted web traffic)
- Nginx HTTPS: opens only port 443 (TLS/SSL-encrypted traffic)

It is recommended that you enable the most restrictive profile that will still allow the traffic you need. Since we haven't configured SSL for our server yet in this guide, we will only need to allow traffic on port 80.

You can enable this by typing:
- sudo ufw allow 'Nginx HTTP'
To verify the change, type:
- sudo ufw status
You should see HTTP traffic allowed in the displayed output:
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
At the end of the installation process, Debian 9 starts Nginx. The web server should already be up and running.

Check with the systemd init system to make sure the service is running:
- systemctl status nginx
Output
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-09-04 18:15:57 UTC; 3min 28s ago
Docs: man:nginx(8)
Process: 2402 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 2399 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 2404 (nginx)
Tasks: 2 (limit: 4915)
CGroup: /system.slice/nginx.service
├─2404 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─2405 nginx: worker process
As you can see above, the service appears to have started successfully. However, the best way to test this is to actually request a page from Nginx.

You can access the default Nginx landing page to confirm that the software is running properly by navigating to your server's IP address. If you do not know your server's IP address, type the following at your server's command prompt:
- ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
You will get back a few lines with addresses. You can try each in your web browser to see which one works.
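The awk/sed pipeline above just isolates the address portion of each inet line. For illustration, here is the same extraction applied to one sample line in Python (the address shown is made up, not from a real server):

```python
# A typical `ip addr show` inet line (sample values only):
sample = "inet 203.0.113.5/24 brd 203.0.113.255 scope global eth0"

# Field 2 holds the address with its prefix length; drop everything after "/".
address = sample.split()[1].split("/")[0]
print(address)  # 203.0.113.5
```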
When you have your server's IP address, enter it into your browser's address bar:
http://your_server_ip
You should see the default Nginx landing page:

This page is included with Nginx to show you that the server is running correctly.

Now that you have your web server up and running, let's review some basic management commands.
To stop your web server, type:
- sudo systemctl stop nginx
To start the web server when it is stopped, type:
- sudo systemctl start nginx
To stop and then start the service again, type:
- sudo systemctl restart nginx
If you are simply making configuration changes, Nginx can often reload without dropping connections. To do this, type:
- sudo systemctl reload nginx
By default, Nginx is configured to start automatically when the server boots. If this is not what you want, you can disable this behavior by typing:
- sudo systemctl disable nginx
To re-enable the service to start up at boot, type:
- sudo systemctl enable nginx
When using the Nginx web server, server blocks (similar to virtual hosts in Apache) can be used to encapsulate configuration details and host more than one domain from a single server. We will set up a domain called example.com, but you should replace this with your own domain name. To learn more about setting up a domain name with DigitalOcean, see our Introduction to DigitalOcean DNS.

Nginx on Debian 9 has one server block enabled by default that is configured to serve documents out of the /var/www/html directory. While this works well for a single site, it can become unwieldy if you are hosting multiple sites. Instead of modifying /var/www/html, let's create a directory structure within /var/www for our example.com site, leaving /var/www/html in place as the default directory to be served if a client request doesn't match any other sites.

Create the directory for example.com as follows, using the -p flag to create any necessary parent directories:
- sudo mkdir -p /var/www/example.com/html
Next, assign ownership of the directory with the $USER environment variable:
- sudo chown -R $USER:$USER /var/www/example.com/html
The permissions of your web root should be correct if you haven't modified your umask value, but you can make sure by typing:
- sudo chmod -R 755 /var/www/example.com
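Mode 755 gives the owner full rights and everyone else read and execute, which is what the web server needs to traverse the directory and read files. As a quick illustration of how that octal mode reads, using Python's standard stat module:

```python
import stat

# 0o755 on a directory renders as drwxr-xr-x in ls-style notation:
# owner rwx, group r-x, others r-x.
print(stat.filemode(0o755 | stat.S_IFDIR))  # drwxr-xr-x
```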
Next, create a sample index.html page using nano or your favorite editor:
- nano /var/www/example.com/html/index.html
Inside, add the following sample HTML:
<html>
<head>
<title>Welcome to Example.com!</title>
</head>
<body>
<h1>Success! The example.com server block is working!</h1>
</body>
</html>
Save and close the file when you are finished.

In order for Nginx to serve this content, it's necessary to create a server block with the correct directives. Instead of modifying the default configuration file directly, let's make a new one at /etc/nginx/sites-available/example.com:
- sudo nano /etc/nginx/sites-available/example.com
Paste in the following configuration block, which is similar to the default but updated for our new directory and domain name:
server {
listen 80;
listen [::]:80;
root /var/www/example.com/html;
index index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ =404;
}
}
Notice that we've updated the root configuration to our new directory, and the server_name to our domain name.

Next, let's enable the file by creating a link from it to the sites-enabled directory, which Nginx reads from during startup:
- sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
Two server blocks are now enabled and configured to respond to requests based on their listen and server_name directives (you can read more about how Nginx processes these directives here):

- example.com: will respond to requests for example.com and www.example.com.
- default: will respond to any requests on port 80 that do not match the other two blocks.

To avoid a possible hash bucket memory problem that can arise from adding additional server names, it is necessary to adjust a single value in the /etc/nginx/nginx.conf file. Open the file:
- sudo nano /etc/nginx/nginx.conf
Find the server_names_hash_bucket_size directive and remove the # symbol to uncomment the line:
...
http {
...
server_names_hash_bucket_size 64;
...
}
...
Save and close the file when you are finished.

Next, test to make sure that there are no syntax errors in any of your Nginx files:
- sudo nginx -t
If there aren't any problems, you will see the following output:
Output
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Once your configuration test passes, restart Nginx to enable your changes:
- sudo systemctl restart nginx
Nginx should now be serving your domain name. You can test this by navigating to http://example.com, where you should see something like this:

Now that you know how to manage the Nginx service itself, it's a good time to get familiar with a few important directories and files.
/var/www/html: the actual web content, which by default consists only of the default Nginx page you saw earlier, is served out of the /var/www/html directory. This can be changed by altering the Nginx configuration files.
/etc/nginx: the Nginx configuration directory. All of the Nginx configuration files reside here.
/etc/nginx/nginx.conf: the main Nginx configuration file. This can be modified to make changes to the Nginx global configuration.
/etc/nginx/sites-available/: the directory where per-site server blocks can be stored. Nginx will not use the configuration files found in this directory unless they are linked to the sites-enabled directory. Typically, all server block configuration is done in this directory and then enabled by linking to the other directory.
/etc/nginx/sites-enabled/: the directory where enabled per-site server blocks are stored. Typically, these are created by linking to configuration files found in the sites-available directory.
/etc/nginx/snippets: this directory contains configuration fragments that can be included elsewhere in the Nginx configuration. Potentially repeatable configuration segments are good candidates for refactoring into snippets.
/var/log/nginx/access.log: every request to your web server is recorded in this log file unless Nginx is configured to do otherwise.
/var/log/nginx/error.log: any Nginx errors will be recorded in this log.
Now that you have your web server installed, you have many options for the type of content to serve and the technologies you can use to create a richer experience for your users.
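The relationship between sites-available and sites-enabled is worth seeing concretely: enabling a site is nothing more than creating a symlink. The sketch below reproduces that in a throwaway directory (the paths and the example.com file name are illustrative; on a real server you would run `sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/` and then `sudo nginx -t`):

```shell
# Simulate the Nginx layout in a temp dir to show that an "enabled"
# site is just a symlink back into sites-available.
root=$(mktemp -d)
mkdir -p "$root/sites-available" "$root/sites-enabled"
printf 'server { listen 80; }\n' > "$root/sites-available/example.com"

# "Enable" the site by linking it into sites-enabled:
ln -s "$root/sites-available/example.com" "$root/sites-enabled/example.com"

# Both names now refer to the same configuration file:
cat "$root/sites-enabled/example.com"   # prints: server { listen 80; }
rm -rf "$root"
```

Disabling a site is the reverse operation: remove the symlink and reload Nginx, leaving the original file untouched in sites-available.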
A "LAMP" stack is a group of open-source software that is typically installed together to enable a server to host dynamic websites and web apps. This term is actually an acronym which represents the Linux operating system with the Apache web server. The site data is stored in a MariaDB database, and dynamic content is processed by PHP.
In this guide, we will install a LAMP stack on a Debian 9 server.
In order to complete this tutorial, you will need a Debian 9 server with a non-root, sudo-enabled user account and a basic firewall. This can be configured using our Debian 9 initial server setup guide.
The Apache web server is among the most popular web servers in the world. It's well documented and has been in wide use for much of the history of the web, which makes it a great default choice for hosting a website.
Install Apache using Debian's package manager, apt:
- sudo apt update
- sudo apt install apache2
Since this is a sudo command, these operations are executed with root privileges. It will ask you for your regular user's password to verify your intentions.
Once you've entered your password, apt will tell you which packages it plans to install and how much extra disk space they'll take up. Press Y and hit ENTER to continue, and the installation will proceed.
Next, assuming that you have followed the initial server setup instructions and installed and enabled the UFW firewall, make sure that your firewall allows HTTP and HTTPS traffic.
When installed on Debian 9, UFW comes loaded with app profiles which you can use to tweak your firewall settings. View the full list of application profiles by running:
- sudo ufw app list
The WWW profiles are used to manage ports used by web servers:
OutputAvailable applications:
. . .
WWW
WWW Cache
WWW Full
WWW Secure
. . .
If you inspect the WWW Full profile, it shows that it enables traffic to ports 80 and 443:
- sudo ufw app info "WWW Full"
OutputProfile: WWW Full
Title: Web Server (HTTP,HTTPS)
Description: Web Server (HTTP,HTTPS)
Ports:
80,443/tcp
Allow incoming HTTP and HTTPS traffic for this profile:
- sudo ufw allow in "WWW Full"
You can do a spot check right away to verify that everything went as planned by visiting your server's public IP address in your web browser:
http://your_server_ip
You will see the default Debian 9 Apache web page, which is there for informational and testing purposes. It should look something like this:
If you see this page, then your web server is now correctly installed and accessible through your firewall.
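To confirm the firewall rule afterwards, on the server you would run `sudo ufw status` and look for the WWW Full entry. The filtering step can be demonstrated on a captured copy of typical status output (the table below is illustrative sample text, not live output from your server):

```shell
# Captured sample of `sudo ufw status` output (illustrative).
status='To                         Action      From
--                         ------      ----
WWW Full                   ALLOW       Anywhere'

# Pick out the web-server rule:
printf '%s\n' "$status" | grep 'WWW Full'
```

If the rule is missing from the real status output, re-run the `ufw allow` command above before continuing.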
If you do not know what your server's public IP address is, there are a number of ways to find it. Usually, this is the address you use to connect to your server through SSH.
There are a few different ways to do this from the command line. First, you could use the iproute2 tools to get your IP address by typing:
- ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
This will give you two or three lines back. They are all correct addresses, but your computer may only be able to use one of them, so feel free to try each one.
An alternative method is to use the curl utility to contact an outside party to tell you how it sees your server. This is done by asking a specific server what your IP address is:
- sudo apt install curl
- curl http://icanhazip.com
Regardless of the method you use to get your IP address, type it into your web browser's address bar to view the default Apache page.
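To see what each stage of the ip/grep/awk/sed pipeline above contributes, you can run it against a captured sample of `ip addr show eth0` output (the addresses below are documentation addresses, not real ones):

```shell
# Two lines of captured `ip addr show eth0` output (illustrative).
sample='    inet 203.0.113.5/24 brd 203.0.113.255 scope global eth0
    inet6 fe80::1/64 scope link'

# grep keeps lines containing "inet" (which also matches "inet6"),
# awk prints the second field, and sed strips the /prefix-length suffix:
printf '%s\n' "$sample" | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
# prints:
# 203.0.113.5
# fe80::1
```

This is why the command can return more than one line: each `inet`/`inet6` address on the interface produces one row of output.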
Now that you have a web server up and running, it is time to install MariaDB. MariaDB is a database management system. Basically, it will organize and provide access to databases in which your site can store information.
MariaDB is a community fork of MySQL. In Debian 9, the default MySQL server is MariaDB 10.1, and the mysql-server package, which is normally used to install MySQL, is a transitional package that will actually install MariaDB. However, it is recommended that you install MariaDB using the program's actual package, mariadb-server.
Again, use apt to acquire and install this software:
- sudo apt install mariadb-server
Note: In this case, you do not have to run sudo apt update prior to the command. This is because you recently ran it in the commands above to install Apache, and the package index on your computer should already be up to date.
This command, too, will show you a list of the packages that will be installed, along with the amount of disk space they'll take up. Enter Y to continue.
When the installation is finished, run a simple security script that comes pre-installed with MariaDB. This script will remove some insecure default settings and lock down access to your database system. Start the interactive script by running:
- sudo mysql_secure_installation
This will take you through a series of prompts where you can make some changes to your MariaDB installation's security options. The first prompt will ask you to enter the current database root password. This is an administrative account in MariaDB that has increased privileges. Think of it as being similar to the root account for the server itself (although this one is a MariaDB-specific account). Because you just installed MariaDB and haven't made any configuration changes yet, this password will be blank, so just press ENTER at the prompt.
The next prompt asks you whether you'd like to set up a database root password. Type N and then press ENTER. In Debian, the root account for MariaDB is tied closely to automated system maintenance, so you should not change the configured authentication methods for that account. Doing so would make it possible for a package update to break the database system by removing access to the administrative account. Later, we will cover how to optionally set up an additional administrative account for password access if socket authentication is not appropriate for your use case.
From there, you can press Y and then ENTER to accept the defaults for all the subsequent questions. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MariaDB immediately respects the changes you have made.
In new installs on Debian systems, the root MariaDB user is set to authenticate using the unix_socket plugin by default rather than with a password. This allows for greater security and usability in many cases, but it can also complicate things when you need to allow an external program (e.g., phpMyAdmin) administrative rights.
Because the server uses the root account for tasks like log rotation and starting and stopping the server, it is best not to change the root account's authentication details. Changing credentials in the /etc/mysql/debian.cnf configuration file may work initially, but package updates could potentially overwrite those changes. Instead of modifying the root account, the package maintainers recommend creating a separate administrative account if you need to set up password-based access.
To do so, we will create a new account called admin with the same capabilities as the root account, but configured for password authentication. To do this, open up the MariaDB prompt from your terminal:
- sudo mariadb
Now we can create a new user with root privileges and password-based access. Change the username and password to match your preferences:
- GRANT ALL ON *.* TO 'admin'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
Flush the privileges to ensure that they are saved and available in the current session:
- FLUSH PRIVILEGES;
Following this, exit the MariaDB shell:
- exit
Now, whenever you want to access your database as your new administrative user, you'll need to authenticate as that user with the password you just set, using the following command:
- mariadb -u admin -p
At this point, your database system is set up and you can move on to installing PHP, the final component of the LAMP stack.
PHP is the component of your setup that will process code to display dynamic content. It can run scripts, connect to your MariaDB databases to get information, and hand the processed content over to your web server to display.
Once again, use the apt system to install PHP. In addition, include some helper packages this time so that PHP code can run under the Apache server and talk to your MariaDB database:
- sudo apt install php libapache2-mod-php php-mysql
This should install PHP without any problems. We'll test this in a moment.
In most cases, you will want to modify the way that Apache serves files when a directory is requested. Currently, if a user requests a directory from the server, Apache will first look for a file called index.html. We want to tell the web server to prefer PHP files over others, so make Apache look for an index.php file first.
To do this, type this command to open the dir.conf file in a text editor with root privileges:
- sudo nano /etc/apache2/mods-enabled/dir.conf
It will look like this:
<IfModule mod_dir.c>
DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
</IfModule>
Move the PHP index file (highlighted above) to the first position after the DirectoryIndex specification, like this:
<IfModule mod_dir.c>
DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>
When you are finished, save and close the file by pressing CTRL+X. Confirm the save by typing Y, then hit ENTER to verify the file save location.
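If you prefer a scripted edit over opening nano, the same reordering can be done with sed. The sketch below operates on a throwaway copy of the stanza; on the server the target would be /etc/apache2/mods-enabled/dir.conf and the sed command would need sudo (this is an optional alternative to the editor steps above, not part of them):

```shell
# Work on a throwaway copy of the dir.conf stanza.
f=$(mktemp)
printf '<IfModule mod_dir.c>\n\tDirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm\n</IfModule>\n' > "$f"

# Move index.php to the front of the DirectoryIndex list:
sed -i 's/index.html index.cgi index.pl index.php/index.php index.html index.cgi index.pl/' "$f"

grep DirectoryIndex "$f"
# prints the DirectoryIndex line with index.php first
rm -f "$f"
```

A scripted edit like this is handy when you are provisioning many servers, but always follow it with `apachectl configtest` before restarting.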
After this, restart the Apache web server in order for your changes to be recognized. Do this by typing:
- sudo systemctl restart apache2
You can also check on the status of the apache2 service using systemctl:
- sudo systemctl status apache2
Sample Output● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-09-04 18:23:03 UTC; 9s ago
Process: 22209 ExecStop=/usr/sbin/apachectl stop (code=exited, status=0/SUCCESS)
Process: 22216 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 22221 (apache2)
Tasks: 6 (limit: 4915)
CGroup: /system.slice/apache2.service
├─22221 /usr/sbin/apache2 -k start
├─22222 /usr/sbin/apache2 -k start
├─22223 /usr/sbin/apache2 -k start
├─22224 /usr/sbin/apache2 -k start
├─22225 /usr/sbin/apache2 -k start
└─22226 /usr/sbin/apache2 -k start
To enhance the functionality of PHP, you have the option to install some additional modules. To see the available options for PHP modules and libraries, pipe the results of apt search into less, a pager which lets you scroll through the output of other commands:
- apt search php- | less
Use the arrow keys to scroll up and down, and press Q to quit.
The results are all optional components that you can install. It will give you a short description for each:
OutputSorting...
Full Text Search...
bandwidthd-pgsql/stable 2.0.1+cvs20090917-10 amd64
Tracks usage of TCP/IP and builds html files with graphs
bluefish/stable 2.2.9-1+b1 amd64
advanced Gtk+ text editor for web and software development
cacti/stable 0.8.8h+ds1-10 all
web interface for graphing of monitoring systems
cakephp-scripts/stable 2.8.5-1 all
rapid application development framework for PHP (scripts)
ganglia-webfrontend/stable 3.6.1-3 all
cluster monitoring toolkit - web front-end
haserl/stable 0.9.35-2+b1 amd64
CGI scripting program for embedded environments
kdevelop-php-docs/stable 5.0.3-1 all
transitional package for kdevelop-php
kdevelop-php-docs-l10n/stable 5.0.3-1 all
transitional package for kdevelop-php-l10n
…
:
To learn more about what each module does, you could search the internet for more information about them. Alternatively, look at the long description of the package by typing:
- apt show package_name
There will be a lot of output, with one field called Description which will have a longer explanation of the functionality the module provides.
For example, to find out what the php-cli module does, you could type this:
- apt show php-cli
Along with a large amount of other information, you'll find something that looks like this:
Output…
Description: command-line interpreter for the PHP scripting language (default)
This package provides the /usr/bin/php command interpreter, useful for
testing PHP scripts from a shell or performing general shell scripting tasks.
.
PHP (recursive acronym for PHP: Hypertext Preprocessor) is a widely-used
open source general-purpose scripting language that is especially suited
for web development and can be embedded into HTML.
.
This package is a dependency package, which depends on Debian's default
PHP version (currently 7.0).
…
If, after researching, you decide you would like to install a package, you can do so by using the apt install command like you have been doing for the other software.
If you decided that php-cli is something that you need, you could type:
- sudo apt install php-cli
If you want to install more than one module, you can do that by listing each one, separated by a space, following the apt install command, like this:
- sudo apt install package1 package2 ...
At this point, your LAMP stack is installed and configured. Before you make any more changes or deploy an application, though, it would be helpful to proactively test out your PHP configuration in case there are any issues that should be addressed.
In order to test that your system is configured properly for PHP, create a very basic PHP script called info.php. In order for Apache to find this file and serve it correctly, it must be saved to a very specific directory called the web root.
In Debian 9, this directory is located at /var/www/html/. Create the file at that location by running:
- sudo nano /var/www/html/info.php
This will open a blank file. Add the following text, which is valid PHP code, inside the file:
<?php
phpinfo();
?>
When you are finished, save and close the file.
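As an aside, the file can also be created without an editor. On the server you would pipe the snippet into `sudo tee /var/www/html/info.php`; the sketch below does the same thing into a temporary stand-in for the web root:

```shell
# A temp dir stands in for /var/www/html; on the server, pipe into
# `sudo tee` instead so the write happens with root privileges.
webroot=$(mktemp -d)

printf '<?php\nphpinfo();\n?>\n' | tee "$webroot/info.php" > /dev/null

cat "$webroot/info.php"   # prints the three-line PHP snippet
rm -rf "$webroot"
```

The `tee` trick matters because `sudo echo ... > file` does not work as expected: the redirection is performed by your unprivileged shell, while `tee` runs under sudo and can write to the root-owned web root.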
Now you can test whether your web server is able to correctly display content generated by this PHP script. To try this out, visit this page in your web browser. You'll need your server's public IP address again.
The address you will want to visit is:
http://your_server_ip/info.php
The page that you come to should look something like this:
This page provides some basic information about your server from the perspective of PHP. It is useful for debugging and to ensure that your settings are being applied correctly.
If you can see this page in your browser, then your PHP is working as expected.
You probably want to remove this file after this test because it could give information about your server to unauthorized users. To do this, run the following command:
- sudo rm /var/www/html/info.php
You can always recreate this page if you need to access the information again later.
Now that you have a LAMP stack installed, you have many choices for what to do next. Basically, you've installed a platform that will allow you to install most kinds of websites and web software on your server.
MySQL is a prominent open-source database management system used to store and retrieve data for a wide variety of popular applications. MySQL is the M in the LAMP stack, a commonly used set of open-source software that also includes Linux, the Apache web server, and the PHP programming language.
On Debian 9, MariaDB, a community fork of the MySQL project, is included as the default MySQL variant. While MariaDB works well in most cases, if you need features found only in Oracle's MySQL, you can install and use packages from a repository maintained by the MySQL developers.
To install the latest version of MySQL, we'll add this repository, install the MySQL software itself, secure the installation, and finally test that MySQL is running and responding to commands.
Before beginning this tutorial, you will need:
A Debian 9 server set up with a non-root user with sudo privileges and a firewall.
The MySQL developers provide a .deb package that handles configuring and installing the official MySQL software repositories. Once the repositories are set up, we can use Debian's standard apt command to install the software. We'll download this .deb file with wget and then install it with the dpkg command.
First, load the MySQL download page in your web browser. Find the Download button in the lower-right corner and click through to the next page. This page will prompt you to log in or sign up for an Oracle web account. You can skip that and instead look for the link that says No thanks, just start my download. Right-click the link and select Copy Link Address (the wording of this option may differ depending on your browser).
Now we're going to download the file. On your server, move to a directory you can write to. Download the file using wget, remembering to paste the address you just copied in place of the highlighted portion below:
- cd /tmp
- wget https://dev.mysql.com/get/mysql-apt-config_0.8.10-1_all.deb
The file should now be downloaded in the current directory. List the files to make sure:
- ls
You should see the file name listed:
Outputmysql-apt-config_0.8.10-1_all.deb
. . .
Now we're ready to install:
- sudo dpkg -i mysql-apt-config*
dpkg is used to install, remove, and inspect .deb software packages. The -i flag indicates that we'd like to install from the specified file.
During the installation, you'll be presented with a configuration screen where you can specify which version of MySQL you'd prefer, along with an option to install repositories for other MySQL-related tools. The defaults will add the repository information for the latest stable version of MySQL and nothing else. This is what we want, so use the down arrow to select the Ok menu option and press ENTER.
The package will now finish adding the repository. Refresh your apt package cache to make the new software packages available:
- sudo apt update
Now that we've added the MySQL repositories, we're ready to install the actual MySQL server software. If you ever need to update the configuration of these repositories, run sudo dpkg-reconfigure mysql-apt-config, select new options, and then run sudo apt-get update to refresh your package cache.
With the repository added and our package cache freshly updated, we can use apt to install the latest MySQL server package:
- sudo apt install mysql-server
apt will look at all available mysql-server packages and determine that the MySQL-provided package is the newest and best candidate. It will then calculate package dependencies and ask you to approve the installation. Type Y and then press ENTER. The software will be installed.
You will be asked to set a root password during the configuration stage of the installation. Choose and confirm a secure password to continue. Next, a prompt will appear asking you to select a default authentication plugin. Read the displayed content to understand the choices; if you are unsure, the Use Strong Password Encryption option offers stronger protection.
MySQL should now be installed and running. Let's check by using systemctl:
- sudo systemctl status mysql
Output● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 15:58:21 UTC; 30s ago
Docs: man:mysqld(8)
http://dev.mysql.com/doc/refman/en/using-systemd.html
Main PID: 12805 (mysqld)
Status: "SERVER_OPERATING"
CGroup: /system.slice/mysql.service
└─12805 /usr/sbin/mysqld
Sep 05 15:58:15 mysql1 systemd[1]: Starting MySQL Community Server...
Sep 05 15:58:21 mysql1 systemd[1]: Started MySQL Community Server.
The Active: active (running) line indicates that MySQL is installed and running. Now we'll make the installation a little more secure.
MySQL includes a command we can use to make some security-related updates on our new install. Let's run it now:
- mysql_secure_installation
This will ask for the MySQL root password that you set during installation. Type it in and press ENTER. Now we'll answer a series of yes-or-no prompts. Let's go through them:
First, we're asked about the validate password plugin, a plugin that can automatically enforce certain password strength rules for your MySQL users. Whether to enable it is a decision you'll need to make based on your individual security needs. Type y and then press ENTER to enable it, or just press ENTER to skip it. If the plugin is enabled, you will also be prompted to choose a level from 0 to 2 for how strict the password validation should be. Choose a number and press ENTER to continue.
Next, you'll be asked whether you'd like to change the root password. Since we just created the password when we installed MySQL, we can safely skip this. Press ENTER to continue without updating the password.
The rest of the prompts can be answered Yes. You will be asked about removing the anonymous MySQL user, disallowing remote root login, removing the test database, and reloading the privilege tables to ensure that the previous changes take effect properly. These are all a good idea. Type y and press ENTER for each question.
The script will exit after all the prompts are answered. Now our MySQL installation is reasonably secured. Let's test it again by running a client that connects to the server and returns some information.
mysqladmin is a command-line administrative client for MySQL. We'll use it to connect to the server and output some version and status information:
- mysqladmin -u root -p version
The -u root portion tells mysqladmin to log in as the MySQL root user, -p instructs the client to ask for a password, and version is the command we want to run.
The output will tell us what version of the MySQL server is running, how long it has been up, and other status information:
Outputmysqladmin Ver 8.0.12 for Linux on x86_64 (MySQL Community Server - GPL)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Server version 8.0.12
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/run/mysqld/mysqld.sock
Uptime: 6 min 42 sec
Threads: 2 Questions: 12 Slow queries: 0 Opens: 123 Flush tables: 2 Open tables: 99 Queries per second avg: 0.029
If you received similar output, congratulations! You've successfully installed and secured the latest MySQL server.
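Beyond mysqladmin, you can also confirm that the server accepts SQL by opening the interactive client (mysql -u root -p) and running a throwaway statement or two. A minimal sketch; the database name smoke_test here is arbitrary, not anything MySQL requires:

```sql
-- Run from the mysql prompt; smoke_test is a throwaway name
CREATE DATABASE smoke_test;
SHOW DATABASES;            -- smoke_test should appear in the list
DROP DATABASE smoke_test;  -- clean up afterwards
```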
You have now completed a basic install of the latest version of MySQL, which should work for many popular applications.
Nginx is one of the most popular web servers in the world and is responsible for hosting some of the largest and highest-traffic sites on the internet. It is easier to use than Apache in most cases and can serve as a web server or a reverse proxy.
In this guide, we'll discuss how to install Nginx on your Debian 9 server.
Before you begin this guide, you should have a regular, non-root user with sudo privileges configured on your server, as well as an active firewall. You can learn how to set this up by following the initial server setup guide for Debian 9.
When you have an account available, log in as your non-root user to begin.
Because Nginx is available in Debian's default repositories, it can be installed from these repositories using the apt packaging system.
Since this is our first interaction with the apt packaging system in this session, we will also update our local package index so that we have access to the most recent package listings. Afterwards, we can install nginx:
- sudo apt update
- sudo apt install nginx
After accepting the procedure, apt will install Nginx and any required dependencies to your server.
Before testing Nginx, the firewall software needs to be adjusted to allow access to the service.
List the application configurations that ufw knows how to work with by typing:
- sudo ufw app list
You should get a listing of the application profiles:
OutputAvailable applications:
...
Nginx Full
Nginx HTTP
Nginx HTTPS
...
As you can see, there are three profiles available for Nginx:
Nginx Full: This profile opens both port 80 (normal, unencrypted web traffic) and port 443 (TLS/SSL encrypted traffic)
Nginx HTTP: This profile opens only port 80 (normal, unencrypted web traffic)
Nginx HTTPS: This profile opens only port 443 (TLS/SSL encrypted traffic)
It is recommended that you enable the most restrictive profile that will still allow the traffic you've configured. Since we haven't configured SSL for our server yet in this guide, we will only need to allow traffic on port 80.
You can enable this by typing:
- sudo ufw allow 'Nginx HTTP'
You can verify the change by typing:
- sudo ufw status
You should see HTTP traffic allowed in the displayed output:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
At the end of the installation process, Debian 9 starts Nginx. The web server should already be up and running.
We can check with the systemd init system to make sure the service is running by typing:
- systemctl status nginx
Output● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-09-04 18:15:57 UTC; 3min 28s ago
Docs: man:nginx(8)
Process: 2402 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 2399 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 2404 (nginx)
Tasks: 2 (limit: 4915)
CGroup: /system.slice/nginx.service
├─2404 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─2405 nginx: worker process
As you can see above, the service appears to have started successfully. However, the best way to test this is to actually request a page from Nginx.
You can access the default Nginx landing page to confirm that the software is running properly by navigating to your server's IP address. If you do not know your server's IP address, try typing this at your server's command prompt:
- ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
You will get back a few lines. You can try each in your web browser to see if they work.
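If you're curious how that pipeline works: grep keeps the lines containing inet, awk prints the second whitespace-separated field (the address with its prefix length), and sed strips the /prefix suffix. Here are the same stages applied to a canned line of ip addr output, using the documentation address 203.0.113.5 rather than a real server address:

```shell
# Apply the extraction stages to a sample line of `ip addr` output
sample="    inet 203.0.113.5/24 brd 203.0.113.255 scope global eth0"
echo "$sample" | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
# prints: 203.0.113.5
```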
When you have your server's IP address, enter it into your browser's address bar:
http://your_server_ip
You should see the default Nginx landing page:
This page is included with Nginx to show you that the server is running correctly.
Now that you have your web server up and running, let's review some basic management commands.
To stop your web server, type:
- sudo systemctl stop nginx
To start the web server when it is stopped, type:
- sudo systemctl start nginx
To stop and then start the service again, type:
- sudo systemctl restart nginx
If you are simply making configuration changes, Nginx can often reload without dropping connections. To do this, type:
- sudo systemctl reload nginx
By default, Nginx is configured to start automatically when the server boots. If this is not what you want, you can disable this behavior by typing:
- sudo systemctl disable nginx
To re-enable the service to start up at boot, type:
- sudo systemctl enable nginx
When using the Nginx web server, server blocks (similar to virtual hosts in Apache) can be used to encapsulate configuration details and host more than one domain from a single server. We will set up a domain called example.com, but you should replace this with your own domain name. To learn more about setting up a domain name with DigitalOcean, see our introduction to DigitalOcean DNS.
Nginx on Debian 9 has one server block enabled by default that is configured to serve documents out of a directory at /var/www/html. While this works well for a single site, it can become unwieldy if you are hosting multiple sites. Instead of modifying /var/www/html, we'll create a directory structure within /var/www for our example.com site, leaving /var/www/html in place as the default directory to be served if a client request doesn't match any other sites.
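The mechanism behind this fallback behavior is the default_server parameter on the listen directive. The stock Debian block (in /etc/nginx/sites-available/default) contains lines along these lines; this is only an illustrative excerpt, and there is nothing to edit yet:

```nginx
# Excerpt of the stock catch-all block: default_server marks it as
# the fallback for requests that match no other server_name
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    server_name _;
}
```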
Create the directory for example.com, using the -p flag to create any necessary parent directories:
- sudo mkdir -p /var/www/example.com/html
Next, assign ownership of the directory with the $USER environment variable:
- sudo chown -R $USER:$USER /var/www/example.com/html
The permissions of your web roots should be correct if you haven't modified your umask value, but you can make sure by typing:
- sudo chmod -R 755 /var/www/example.com
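The 755 mode (owner may write, everyone may read and traverse) is also what a default umask of 022 produces for new directories, which is why the permissions are usually already correct. Here is a self-contained demonstration using a throwaway directory under /tmp; nothing here touches the web root:

```shell
# Show how a 022 umask shapes the mode of a new directory: 777 - 022 = 755
demo_dir=$(mktemp -d)
(umask 022; mkdir "$demo_dir/sub"; stat -c '%a' "$demo_dir/sub")
rm -rf "$demo_dir"
# prints: 755
```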
Next, create a sample index.html page using nano or your favorite editor:
- nano /var/www/example.com/html/index.html
Inside, add the following sample HTML:
<html>
<head>
<title>Welcome to Example.com!</title>
</head>
<body>
<h1>Success! The example.com server block is working!</h1>
</body>
</html>
Save and close the file when you are finished.
In order for Nginx to serve this content, it's necessary to create a server block with the correct directives. Instead of modifying the default configuration file directly, let's make a new one at /etc/nginx/sites-available/example.com:
- sudo nano /etc/nginx/sites-available/example.com
Paste in the following configuration block, which is similar to the default but updated for our new directory and domain name:
server {
listen 80;
listen [::]:80;
root /var/www/example.com/html;
index index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ =404;
}
}
Notice that we've updated the root configuration to point to our new directory, and the server_name to our domain name.
Next, enable the file by creating a link from it to the sites-enabled directory, which Nginx reads from during startup:
- sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
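If the linking step is unfamiliar: ln -s creates a symbolic link, so the configuration file continues to live in sites-available while sites-enabled merely points at it, and deleting the link later disables the site without losing the file. A self-contained illustration with throwaway paths, not the real Nginx directories:

```shell
# Illustrate the sites-available -> sites-enabled linking pattern
demo=$(mktemp -d)
mkdir -p "$demo/sites-available" "$demo/sites-enabled"
echo "server {}" > "$demo/sites-available/example.com"
ln -s "$demo/sites-available/example.com" "$demo/sites-enabled/"
readlink "$demo/sites-enabled/example.com"   # prints the sites-available path
rm -rf "$demo"
```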
Two server blocks are now enabled and configured to respond to requests based on their listen and server_name directives (you can read more about how Nginx processes these directives here):
example.com: Will respond to requests for example.com and www.example.com.
default: Will respond to any requests on port 80 that do not match the other two blocks.
To avoid a possible hash bucket memory problem that can arise from adding additional server names, it is necessary to adjust a single value in the /etc/nginx/nginx.conf file. Open the file:
- sudo nano /etc/nginx/nginx.conf
Find the server_names_hash_bucket_size directive and remove the # symbol to uncomment the line:
...
http {
...
server_names_hash_bucket_size 64;
...
}
...
Save and close the file when you are finished.
Next, test to make sure that there are no syntax errors in any of your Nginx files:
- sudo nginx -t
If there aren't any problems, you will see the following output:
Outputnginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Once your configuration test passes, restart Nginx to enable your changes:
- sudo systemctl restart nginx
Nginx should now be serving your domain name. You can test this by navigating to http://example.com, where you should see something like this:
Now that you know how to manage the Nginx service itself, you should take a few minutes to familiarize yourself with a few important directories and files.
/var/www/html: The actual web content, which by default only consists of the default Nginx page you saw earlier, is served out of the /var/www/html directory. This can be changed by altering Nginx configuration files.
/etc/nginx: The Nginx configuration directory. All of the Nginx configuration files reside here.
/etc/nginx/nginx.conf: The main Nginx configuration file. This can be modified to make changes to the Nginx global configuration.
/etc/nginx/sites-available/: The directory where per-site server blocks can be stored. Nginx will not use the configuration files found in this directory unless they are linked to the sites-enabled directory. Typically, all server block configuration is done in this directory and then enabled by linking to the other directory.
/etc/nginx/sites-enabled/: The directory where enabled per-site server blocks are stored. Typically, these are created by linking to configuration files found in the sites-available directory.
/etc/nginx/snippets: This directory contains configuration fragments that can be included elsewhere in the Nginx configuration. Potentially repeatable configuration segments are good candidates for refactoring into snippets.
/var/log/nginx/access.log: Every request to your web server is recorded in this log file unless Nginx is configured to do otherwise.
/var/log/nginx/error.log: Any Nginx errors will be recorded in this log.
Now that you have your web server installed, you have many options for the type of content you can serve and the technologies you can use to create a richer experience for your users.
A “LAMP” stack is a group of open-source software that is typically installed together to enable a server to host dynamic websites and web apps. This term is actually an acronym that represents the Linux operating system with the Apache web server. The site data is stored in a MariaDB database, and dynamic content is processed by PHP.
In this guide, we will install a LAMP stack on a Debian 9 server.
In order to complete this tutorial, you will need a Debian 9 server with a non-root sudo user account and a basic firewall. This can be set up using our initial server setup guide for Debian 9.
The Apache web server is among the most popular web servers in the world. It's well documented and has been in wide use for much of the history of the web, which makes it a great choice for hosting a website.
Install Apache using apt, Debian's package manager:
- sudo apt update
- sudo apt install apache2
Since this is a sudo command, these operations are executed with root privileges. It will ask you for your regular user's password to verify your intentions.
Once you've entered your password, apt will tell you which packages it plans to install and how much extra disk space they'll take up. Press Y and then ENTER to continue, and the installation will proceed.
Next, assuming that you followed the initial server setup instructions and installed and enabled the UFW firewall, make sure that your firewall allows HTTP and HTTPS traffic.
When installed on Debian 9, UFW comes loaded with app profiles that you can use to adjust your firewall settings. View the full list of application profiles by running:
- sudo ufw app list
The WWW profiles are used to manage the ports used by web servers:
OutputAvailable applications:
. . .
WWW
WWW Cache
WWW Full
WWW Secure
. . .
If you inspect the WWW Full profile, it shows that it enables traffic to ports 80 and 443:
- sudo ufw app info "WWW Full"
OutputProfile: WWW Full
Title: Web Server (HTTP,HTTPS)
Description: Web Server (HTTP,HTTPS)
Ports:
80,443/tcp
Allow HTTP and HTTPS traffic for this profile:
- sudo ufw allow in "WWW Full"
You can do a spot check right away to verify that everything went as planned by visiting your server's public IP address in your web browser.
http://your_server_ip
You will see the default Debian 9 Apache web page, which is there for informational and testing purposes. It should look something like this:
If you see this page, then your web server is correctly installed and accessible through your firewall.
If you do not know your server's public IP address, there are a number of ways to find it. Usually, this is the address you use to connect to your server through SSH.
There are a few different ways to do this from the command line. First, you can use the iproute2 tools to get your IP address by typing this:
- ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
This will give you back two or three lines. They are all correct addresses, but your computer may only be able to use one of them, so feel free to try each one.
An alternative method is to use the curl utility to contact an outside party and have it tell you how it sees your server. This is done by asking a specific server what your IP address is:
- sudo apt install curl
- curl http://icanhazip.com
Regardless of the method you use to get your IP address, type it into your web browser's address bar to view the default Apache page.
Now that you have your web server up and running, it is time to install MariaDB. MariaDB is a database management system. Basically, it will organize and provide access to databases in which your site can store information.
MariaDB is a community fork of MySQL. In Debian 9, the default MySQL server is MariaDB 10.1, and the mysql-server package, which is normally used to install MySQL, is a transitional package that will actually install MariaDB. However, it is recommended that you install MariaDB using the program's actual package, mariadb-server.
Again, use apt to acquire and install this software:
- sudo apt install mariadb-server
Note: In this case, you do not have to run sudo apt update prior to the command. This is because you recently ran it in the commands above to install Apache, and the package index on your computer should already be up to date.
This command, too, will show you a list of the packages that will be installed, along with the amount of disk space they'll take up. Enter Y to continue.
When the installation is complete, run a simple security script that comes pre-installed with MariaDB. This will remove some insecure default settings and lock down access to your database system. Start the interactive script by running:
- sudo mysql_secure_installation
This will take you through a series of prompts where you can make some changes to your MariaDB installation's security options. The first prompt will ask you to enter the current database root password. This is an administrative account in MariaDB that has increased privileges. Think of it as being similar to the root account for the server itself (although the one you are configuring now is a MariaDB-specific account). Because you just installed MariaDB and haven't made any configuration changes yet, this password will be blank, so just press ENTER at the prompt.
The next prompt asks you whether you'd like to set up a database root password. Type N and then press ENTER. In Debian, the root account for MariaDB is tied closely to automated system maintenance, so we should not change the configured authentication methods for that account. Doing so would make it possible for a package update to break the database system by removing access to the administrative account. Later, we will cover how to optionally set up an additional administrative account for password access if socket authentication is not appropriate for your use case.
From there, you can press Y and then ENTER to accept the defaults for all the subsequent questions. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MariaDB immediately applies the changes you have made.
In new installs on Debian systems, the root MariaDB user is set to authenticate using the unix_socket plugin by default rather than with a password. This allows for greater security and usability in many cases, but it can also complicate things when you need to grant administrative rights to an external program (e.g., phpMyAdmin).
Because the server uses the root account for tasks like log rotation and starting and stopping the server, it is best not to change the root account's authentication details. Changing the account credentials in /etc/mysql/debian.cnf may work initially, but package updates could potentially overwrite those changes. Instead of modifying the root account, the package maintainers recommend creating a separate administrative account if you need to set up password-based access.
To do so, we will create a new account called admin with the same capabilities as the root account, but configured for password authentication. To do this, open up the MariaDB prompt from your terminal:
- sudo mariadb
Now we can create a new user with root privileges and password-based access. Change the username and password to match your preferences:
- GRANT ALL ON *.* TO 'admin'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
Flush the privileges to ensure that they are saved and available in the current session:
- FLUSH PRIVILEGES;
Following this, exit the MariaDB shell:
- exit
Now, whenever you want to access your database as your new administrative user, you'll need to authenticate as that user with the password you just set, using the following command:
- mariadb -u admin -p
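To confirm that the new account received the intended privileges, you can log in as admin and inspect its grants. An illustrative check (the exact output wording varies between MariaDB versions):

```sql
-- Run from the MariaDB prompt after logging in as admin
SHOW GRANTS FOR 'admin'@'localhost';
-- expect ALL PRIVILEGES ON *.* ... WITH GRANT OPTION
```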
At this point, your database system is set up and you can move on to installing PHP, the final component of the LAMP stack.
PHP is the component of your setup that will process code to display dynamic content. It can run scripts, connect to your MariaDB databases to get information, and hand the processed content over to your web server for display.
Once again, leverage the apt system to install PHP. In addition, include some helper packages this time so that PHP code can run under the Apache server and talk to your MariaDB database:
- sudo apt install php libapache2-mod-php php-mysql
This should install PHP without any problems. We'll test this in a moment.
In most cases, you will want to modify the way that Apache serves files when a directory is requested. Currently, if a user requests a directory from the server, Apache will first look for a file called index.html. We want to tell the web server to prefer PHP files over others, so make Apache look for an index.php file first.
To do this, type this command to open the dir.conf file in a text editor with root privileges:
- sudo nano /etc/apache2/mods-enabled/dir.conf
It will look like this:
<IfModule mod_dir.c>
DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
</IfModule>
Move the PHP index file (highlighted above) to the first position after the DirectoryIndex specification, like this:
<IfModule mod_dir.c>
DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>
When you are finished, save and close the file by pressing CTRL+X. Confirm the save by typing Y and then hit ENTER to verify the file save location.
After this, restart the Apache web server in order for your changes to be recognized. Do this by typing:
- sudo systemctl restart apache2
You can also check on the status of the apache2 service using systemctl:
- sudo systemctl status apache2
Sample Output● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-09-04 18:23:03 UTC; 9s ago
Process: 22209 ExecStop=/usr/sbin/apachectl stop (code=exited, status=0/SUCCESS)
Process: 22216 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 22221 (apache2)
Tasks: 6 (limit: 4915)
CGroup: /system.slice/apache2.service
├─22221 /usr/sbin/apache2 -k start
├─22222 /usr/sbin/apache2 -k start
├─22223 /usr/sbin/apache2 -k start
├─22224 /usr/sbin/apache2 -k start
├─22225 /usr/sbin/apache2 -k start
└─22226 /usr/sbin/apache2 -k start
To enhance the functionality of PHP, you have the option to install some additional modules. To see the available options for PHP modules and libraries, pipe the results of apt search into less, a pager that lets you scroll through the output of other commands:
- apt search php- | less
Use the arrow keys to scroll up and down, and press Q to quit.
The results are all optional components that you can install. It will show you a short description for each:
OutputSorting...
Full Text Search...
bandwidthd-pgsql/stable 2.0.1+cvs20090917-10 amd64
Tracks usage of TCP/IP and builds html files with graphs
bluefish/stable 2.2.9-1+b1 amd64
advanced Gtk+ text editor for web and software development
cacti/stable 0.8.8h+ds1-10 all
web interface for graphing of monitoring systems
cakephp-scripts/stable 2.8.5-1 all
rapid application development framework for PHP (scripts)
ganglia-webfrontend/stable 3.6.1-3 all
cluster monitoring toolkit - web front-end
haserl/stable 0.9.35-2+b1 amd64
CGI scripting program for embedded environments
kdevelop-php-docs/stable 5.0.3-1 all
transitional package for kdevelop-php
kdevelop-php-docs-l10n/stable 5.0.3-1 all
transitional package for kdevelop-php-l10n
…
:
To learn more about what each module does, you could search the internet for more information about them. Alternatively, look at the long description of the package by typing:
- apt show package_name
There will be a lot of output, with one field called Description that will have a longer explanation of the functionality the module provides.
For example, to find out what the php-cli module does, you could type:
- apt show php-cli
Along with a large amount of other information, you'll find something that looks like this:
Output…
Description: command-line interpreter for the PHP scripting language (default)
This package provides the /usr/bin/php command interpreter, useful for
testing PHP scripts from a shell or performing general shell scripting tasks.
.
PHP (recursive acronym for PHP: Hypertext Preprocessor) is a widely-used
open source general-purpose scripting language that is especially suited
for web development and can be embedded into HTML.
.
This package is a dependency package, which depends on Debian's default
PHP version (currently 7.0).
…
If, after researching, you decide that you would like to install a package, you can do so with the apt install command, just as you have done for the other software. If you decide that you need php-cli, you could type:
- sudo apt install php-cli
If you want to install more than one module, you can do that by listing each one, separated by a space, after the apt install command, like this:
- sudo apt install package1 package2 ...
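For instance, if your application needed the curl and image-processing extensions, you might install the php-curl and php-gd packages together (these two module names are examples; substitute whichever packages your research turned up):

```shell
# Install two PHP extension packages in a single command
# (php-curl and php-gd are illustrative choices from Debian's repositories)
sudo apt install php-curl php-gd
```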
At this point, your LAMP stack is installed and configured. Before you make any more changes or deploy an application, though, it would be helpful to proactively test your PHP configuration in case there are any issues that should be addressed.
In order to verify that your system is configured properly for PHP, create a very basic PHP script called info.php. In order for Apache to find this file and serve it correctly, it must be saved to a very specific directory called the web root. In Debian 9, this directory is located at /var/www/html/. Create the file at that location by running:
- sudo nano /var/www/html/info.php
This will open a blank file. Add the following text, which is valid PHP code, inside the file:
<?php
phpinfo();
?>
When you are finished, save and close the file.
Now you can test whether your web server is able to correctly display content generated by this PHP script. To try this out, visit this page in your web browser. You will need your server's public IP address again.
The address you will want to visit is:
http://your_server_ip/info.php
The page that you come to should look something like this:
This page provides some basic information about your server from the perspective of PHP. It is useful for debugging and to ensure that your settings are being applied correctly.
If you can see this page in your browser, then your PHP is working as expected.
You probably want to remove this file after this test, as it could actually give information about your server to unauthorized users. To do this, run the following command:
- sudo rm /var/www/html/info.php
You can always recreate this page if you need to access the information again later.
Now that you have a LAMP stack installed, you have many choices for what to do next. Essentially, you've installed a platform that will allow you to install most kinds of websites and web software on your server.
Software version control systems enable you to keep track of your software at the source level. With versioning tools, you can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Git is one of the most popular version control systems currently available. Many projects' files are maintained in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this tutorial, we'll install and configure Git on a Debian 9 server. We will cover how to install the software in two different ways, each of which has its own benefits depending on your specific needs.
To complete this tutorial, you should have a non-root user with sudo privileges on a Debian 9 server. To learn how to achieve this setup, follow our Debian 9 initial server setup guide.
With your server and user set up, you are ready to begin.
Debian's default repositories provide you with a fast method to install Git. Note that the version you install via these repositories may be older than the newest version currently available. If you need the latest release, consider moving on to the next section of this tutorial to learn how to install and compile Git from source.
First, use the apt package management tools to update your local package index. With the update complete, you can download and install Git:
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command:
- git --version
Outputgit version 2.11.0
With Git successfully installed, you can now move on to the How To Set Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and will give you some control over the options you include if you wish to customize.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages:
- sudo apt update
- sudo apt install make libssl-dev libghc-zlib-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
After you have installed the necessary dependencies, you can go ahead and get the version of Git you want by visiting the Git project's mirror on GitHub, available via the following URL:
https://github.com/git/git
From here, be sure that you are on the master branch. Click on the Tags link and select your desired Git version. Unless you have a reason for downloading a release candidate (marked as rc) version, try to avoid these, as they may be unstable.
Next, on the right side of the page, click on the Clone or download button, then right-click on Download ZIP and copy the link address that ends in .zip.
Back on your Debian 9 server, move into the tmp directory to download temporary files:
- cd /tmp
From there, you can use the wget command to download the zip file from the link you copied. We'll specify a new name for the file: git.zip.
- wget https://github.com/git/git/archive/v2.18.0.zip -O git.zip
Unzip the file that you downloaded and move into the resulting directory by typing:
- unzip git.zip
- cd git-*
Now, you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
To ensure that the installation was successful, you can type git --version and you should receive relevant output specifying the currently installed Git version.
Now that you have Git installed, if you want to upgrade to a later version you can clone the repository, and then build and install. To find the URL to use for the clone operation, navigate to the branch or tag that you want on the project's GitHub page and then copy the clone URL on the right side:
At the time of writing, the relevant URL is:
https://github.com/git/git.git
Change to your home directory and use git clone on the URL you just copied:
- cd ~
- git clone https://github.com/git/git.git
This will create a new directory within your current directory where you can rebuild the package and reinstall the newer version, just as you did above. This will overwrite your older version with the new version:
- cd git
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
With this complete, you can be sure that your version of Git is up to date.
Now that you have Git installed, you will need to configure it so that the generated commit messages will contain your correct information.
This can be achieved by using the git config command. Specifically, we need to provide our name and email address because Git embeds this information into each commit we make. We can go ahead and add this information by typing:
- git config --global user.name "Sammy"
- git config --global user.email "sammy@domain.com"
We can see all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Sammy
user.email=sammy@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor like this:
- nano ~/.gitconfig
[user]
name = Sammy
email = sammy@domain.com
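As a quick sanity check, you can also read back a single value instead of listing the whole configuration. The sketch below uses a throwaway HOME directory so it doesn't touch your real ~/.gitconfig:

```shell
# Use a temporary HOME so the real ~/.gitconfig is left untouched
export HOME="$(mktemp -d)"

# Set the two essential identity values
git config --global user.name "Sammy"
git config --global user.email "sammy@domain.com"

# Query individual values back out of the config
git config --global user.name    # prints: Sammy
git config --global user.email   # prints: sammy@domain.com
```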
There are many other options that you can set, but these are the two essential ones needed. If you skip this step, you'll likely see warnings when you commit to Git. This makes more work for you, because you will then have to revise the commits you have made with the corrected information.
You should now have Git installed and ready to use on your system.
To learn more about how to use Git, check out these articles and series:
Virtual Network Computing, or VNC, is a connection system that allows you to use your keyboard and mouse to interact with a graphical desktop environment on a remote server. It makes managing files, software, and settings on a remote server easier for users who are not yet comfortable with the command line.
In this guide, you'll set up a VNC server on a Debian 9 server and connect to it securely through an SSH tunnel. You'll use TightVNC, a fast and lightweight remote control package. This choice will ensure that our VNC connection will be smooth and stable even on slower internet connections.
To complete this tutorial, you'll need:
- One Debian 9 server set up with a non-root user with sudo privileges and a firewall.
- A local computer with a VNC client installed, such as vinagre, krdc, RealVNC, or TightVNC.
By default, a Debian 9 server does not come with a graphical desktop environment or a VNC server installed, so we'll begin by installing those. Specifically, we will install packages for the latest Xfce desktop environment and the TightVNC package available from the official Debian repository.
On your server, update your list of packages:
- sudo apt update
Now, install the Xfce desktop environment on your server:
- sudo apt install xfce4 xfce4-goodies
During the installation, you'll be prompted to select your keyboard layout from a list of possible options. Choose the one that's appropriate for your language and press Enter. The installation will continue.
Once the installation completes, install the TightVNC server:
- sudo apt install tightvncserver
To complete the VNC server's initial configuration after installation, use the vncserver command to set up a secure password and create the initial configuration files:
- vncserver
You'll be prompted to enter and verify a password to access your machine remotely:
OutputYou will require a password to access your desktops.
Password:
Verify:
The password must be between six and eight characters long. Passwords longer than 8 characters will be truncated automatically.
Once you verify the password, you'll have the option to create a view-only password. Users who log in with the view-only password will not be able to control the VNC instance with their mouse or keyboard. This is a helpful option if you want to demonstrate something to other people using your VNC server, but it's not required.
The process then creates the necessary default configuration files and connection information for the server:
OutputWould you like to enter a view-only password (y/n)? n
xauth: file /home/sammy/.Xauthority does not exist
New 'X' desktop is your_hostname:1
Creating default startup script /home/sammy/.vnc/xstartup
Starting applications specified in /home/sammy/.vnc/xstartup
Log file is /home/sammy/.vnc/your_hostname:1.log
Now, let's configure the VNC server.
The VNC server needs to know which commands to execute when it starts up. Specifically, VNC needs to know which graphical desktop environment it should connect to.
These commands are located in a configuration file called xstartup in the .vnc folder under your home directory. The startup script was created when you ran vncserver in the previous step, but we'll create our own to launch the Xfce desktop.
When VNC is first set up, it launches a default server instance on port 5901. This port is called a display port, and is referred to by VNC as :1. VNC can launch multiple instances on other display ports, like :2, :3, and so on.
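The mapping between display numbers and TCP ports is simply an offset: display :N listens on port 5900 + N. A quick illustration:

```shell
# VNC display :N corresponds to TCP port 5900 + N
for display in 1 2 3; do
  echo ":$display -> port $((5900 + display))"
done
# prints:
# :1 -> port 5901
# :2 -> port 5902
# :3 -> port 5903
```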
Because we are going to be changing how the VNC server is configured, first stop the VNC server instance that is running on port 5901 with the following command:
- vncserver -kill :1
The output should look like this, although you'll see a different PID:
OutputKilling Xtightvnc process ID 17648
Before you modify the xstartup file, back up the original:
- mv ~/.vnc/xstartup ~/.vnc/xstartup.bak
Now create a new xstartup file and open it in your text editor:
- nano ~/.vnc/xstartup
Commands in this file are executed automatically whenever you start or restart the VNC server. We need VNC to start our desktop environment if it's not already started. Add these commands to the file:
~/.vnc/xstartup#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &
The first command in the file, xrdb $HOME/.Xresources, tells VNC's GUI framework to read the server user's .Xresources file. .Xresources is where a user can make changes to certain settings of the graphical desktop, like terminal colors, cursor themes, and font rendering. The second command tells the server to launch Xfce, which is where you will find all of the graphical software that you need to comfortably manage your server.
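If you don't have a ~/.Xresources file yet, a minimal one might look like the following. The specific color values and cursor theme here are hypothetical examples, not required settings:

```
! Hypothetical ~/.Xresources fragment
! Terminal colors for xterm
XTerm*background: #1d1f21
XTerm*foreground: #c5c8c6
! Cursor theme and size
Xcursor.theme: Adwaita
Xcursor.size: 16
```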
To ensure that the VNC server will be able to use this new startup file properly, we'll need to make it executable:
- sudo chmod +x ~/.vnc/xstartup
Then, restart the VNC server:
- vncserver
You'll see output similar to this:
OutputNew 'X' desktop is your_hostname:1
Starting applications specified in /home/sammy/.vnc/xstartup
Log file is /home/sammy/.vnc/your_hostname:1.log
With the configuration in place, let's connect to the server from our local machine.
VNC itself doesn't use secure protocols when connecting. We'll use an SSH tunnel to connect securely to our server, and then tell our VNC client to use that tunnel rather than making a direct connection.
Create an SSH connection on your local computer that securely forwards to the localhost connection for VNC. You can do this via the terminal on Linux or macOS with the following command:
- ssh -L 5901:127.0.0.1:5901 -C -N -l sammy your_server_ip
The -L switch specifies the port bindings. In this case, we're binding port 5901 of the remote connection to port 5901 on your local machine. The -C switch enables compression, while the -N switch tells ssh that we don't want to execute a remote command. The -l switch specifies the remote login name.
Remember to replace sammy and your_server_ip with your non-root sudo username and the IP address of your server.
If you're using a graphical SSH client, like PuTTY, use your_server_ip as the connection IP, and set localhost:5901 as a new forwarded port in the program's SSH tunnel settings.
Once the tunnel is running, use a VNC client to connect to localhost:5901. You'll be prompted to authenticate using the password you set in Step 1.
Once you are connected, you'll see the default Xfce desktop.
Select Use default config to configure your desktop quickly.
You can access files in your home directory with the file manager or from the command line, as seen here:
On your local machine, press CTRL+C in your terminal to stop the SSH tunnel and return to your prompt. This will disconnect your VNC session as well.
Next, we'll set up the VNC server as a systemd service so we can start, stop, and restart it as needed, like any other service. This will also ensure that VNC starts up when your server reboots.
First, create a new unit file called /etc/systemd/system/vncserver@.service using your favorite text editor:
- sudo nano /etc/systemd/system/vncserver@.service
The @ symbol at the end of the name will let us pass in an argument that we can use in the service configuration. We'll use this to specify the VNC display port we want to use when we manage the service.
Add the following lines to the file. Be sure to change the value of User, Group, WorkingDirectory, and the username in the value of PIDFile to match your username:
/etc/systemd/system/vncserver@.service[Unit]
Description=Start TightVNC server at startup
After=syslog.target network.target
[Service]
Type=forking
User=sammy
Group=sammy
WorkingDirectory=/home/sammy
PIDFile=/home/sammy/.vnc/%H:%i.pid
ExecStartPre=-/usr/bin/vncserver -kill :%i > /dev/null 2>&1
ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :%i
ExecStop=/usr/bin/vncserver -kill :%i
[Install]
WantedBy=multi-user.target
The ExecStartPre command stops VNC if it's already running. The ExecStart command starts VNC and sets the color depth to 24-bit color with a resolution of 1280x800. You can modify these startup options as well to meet your needs.
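For instance, if you wanted a larger virtual screen, you could change the geometry in the unit file. This is a hypothetical variant; any WIDTHxHEIGHT your client can display works:

```
ExecStart=/usr/bin/vncserver -depth 24 -geometry 1920x1080 :%i
```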
Save and close the file.
Next, make the system aware of the new unit file:
- sudo systemctl daemon-reload
Enable the unit file:
- sudo systemctl enable vncserver@1.service
The 1 following the @ sign signifies which display number the service should appear over, in this case the default :1 as was discussed in Step 2.
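Because this is a systemd template unit, the same file can drive additional displays. For example, a hypothetical second VNC instance on display :2 (which would listen on port 5902) could be enabled and started like this:

```shell
# Instantiate the same template unit for display :2 (port 5902)
sudo systemctl enable vncserver@2.service
sudo systemctl start vncserver@2
```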
Stop the current instance of the VNC server if it's still running:
- vncserver -kill :1
Then start it as you would start any other systemd service:
- sudo systemctl start vncserver@1
You can verify that it started with this command:
- sudo systemctl status vncserver@1
If it started correctly, the output should look like this:
Output● vncserver@1.service - Start TightVNC server at startup
Loaded: loaded (/etc/systemd/system/vncserver@.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 16:47:40 UTC; 3s ago
Process: 4977 ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :1 (code=exited, status=0/SUCCESS)
Process: 4971 ExecStartPre=/usr/bin/vncserver -kill :1 > /dev/null 2>&1 (code=exited, status=0/SUCCESS)
Main PID: 4987 (Xtightvnc)
...
Your VNC server will now be available when you reboot the machine.
Start your SSH tunnel again:
- ssh -L 5901:127.0.0.1:5901 -C -N -l sammy your_server_ip
Then make a new connection using your VNC client software to localhost:5901 to connect to your machine.
You now have a secured VNC server up and running on your Debian 9 server. Now you'll be able to manage your files, software, and settings with an easy-to-use and familiar graphical interface, and you'll be able to run graphical software like web browsers remotely.
]]>Virtual Network Computing, o VNC, es un sistema de conexión que le permite usar su teclado y mouse para interactuar con un entorno de escritorio gráfico en un servidor remoto. Hace que administrar archivos, software y ajustes en un servidor remoto sea más fácil para los usuarios que aún no se sienten cómodos con la línea de comandos.
A través de esta guía, configurará un servidor VNC en un servidor de Debian 9 y se conectará de forma segura a través de un túnel SSH. Usará TightVNC, un paquete de control remoto rápido y ligero. Esta opción garantizará que nuestra conexión VNC sea perfecta y estable, incluso en las conexiones a Internet más lentas.
Para completar este tutorial, necesitará lo siguiente:
sudo
y un firewall.Por defecto, el servidor Debian 9 no viene con un entorno de escritorio gráfico o un servidor VNC instalado, por lo que comenzaremos instalándolos. Específicamente, instalaremos paquetes para el entorno de escritorio Xfce más reciente y el paquete TightVNC disponible en el repositorio oficial de Debian.
En su servidor, actualice su lista de paquetes:
- sudo apt update
Ahora, instale el entorno de escritorio Xfce en su servidor:
- sudo apt install xfce4 xfce4-goodies
Durante la instalación, se le solicitará seleccionar la distribución de su teclado de una lista de posibles opciones. Seleccione la que corresponda para su idioma y presione Enter
. La instalación continuará.
Cuando finalice la instalación, instale el servidor TightVNC:
- sudo apt install tightvncserver
Para completar la configuración inicial del servidor VNC tras su instalación, utilice el comando vncserver
para configurar una contraseña segura y crear los archivos de configuración iniciales:
- vncserver
Se le indicará que introduzca y verifique una contraseña para acceder a su máquina de forma remota:
OutputYou will require a password to access your desktops.
Password:
Verify:
La contraseña debe tener entre seis y ocho caracteres de largo. Las contraseñas de más de 8 caracteres se reducirán automáticamente.
Una vez que verifique la contraseña, tendrá la opción de crear una contraseña de solo vista. Los usuarios que inicien sesión con la contraseña de solo vista no podrán controlar la instancia de VNC con su mouse o teclado. Esta es una opción útil si desea demostrar algo a otras personas que utilizan su servidor VNC, pero no es un requisito necesario.
Luego, el proceso crea los archivos de configuración predeterminados que se requieren y la información de conexión para el servidor:
OutputWould you like to enter a view-only password (y/n)? n
xauth: file /home/sammy/.Xauthority does not exist
New 'X' desktop is your_hostname:1
Creating default startup script /home/sammy/.vnc/xstartup
Starting applications specified in /home/sammy/.vnc/xstartup
Log file is /home/sammy/.vnc/your_hostname:1.log
Ahora, vamos a configurar el servidor VNC.
El servidor VNC debe saber qué comandos ejecutar cuando se inicia. Específicamente, VNC debe saber a qué escritorio gráfico deberá conectarse.
Estos comandos se encuentran en un archivo de configuración llamado xstartup
en la carpeta .vnc
de su directorio principal. La secuencia de comandos de inicio se creó cuando ejecutó vncserver
en el paso anterior, pero crearemos una propia para abrir el escritorio Xfce.
Cuando VNC se configura por primera vez, se abre como una instancia del servidor predeterminada en el puerto 5901
. Este puerto se llama puerto de visualización, y VNC se refiere a él como :1
. VNC puede abrir múltiples instancias en otros puertos de visualización, como :2
, :3
, y así sucesivamente.
Debido a que vamos a cambiar la configuración del servidor VNC, primero, detenga la instancia del servidor VNC que se está ejecutando en el puerto 5901
con el siguiente comando:
- vncserver -kill :1
El resultado debería tener este aspecto, pero verá un PID diferente:
OutputKilling Xtightvnc process ID 17648
Antes de modificar el archivo xstartup
, realice una copia de seguridad del original:
- mv ~/.vnc/xstartup ~/.vnc/xstartup.bak
Ahora, cree un nuevo archivo xstartup
y ábralo en su editor de texto:
- nano ~/.vnc/xstartup
Los comandos de este archivo se ejecutan automáticamente siempre que inicie o reinicie el servidor VNC. Necesitamos que VNC inicie nuestro entorno de escritorio, si aún no está iniciado. Añada estos comandos al archivo:
~/.vnc/xstartup#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &
El primer comando del archivo, xrdb $HOME/. Xresources
, le indica al marco de trabajo de la GUI de VNC que lea el archivo .Xresources
del usuario del servidor. El archivo .Xresources
es donde un usuario puede realizar cambios a ajustes concretos del escritorio gráfico, como los colores de la terminal, los temas del cursor y la renderización de fuentes. El segundo comando le indica al servidor que inicie Xfce, que es donde encontrará todo el software gráfico que necesita para administrar cómodamente su servidor.
Para garantizar que el servidor VNC pueda usar este nuevo archivo de inicio correctamente, deberá hacerlo ejecutable.
- sudo chmod +x ~/.vnc/xstartup
Now, restart the VNC server.
- vncserver
You'll see output similar to this:
OutputNew 'X' desktop is your_hostname:1
Starting applications specified in /home/sammy/.vnc/xstartup
Log file is /home/sammy/.vnc/your_hostname:1.log
With the configuration in place, let's connect to the server from our local machine.
VNC itself doesn't use secure protocols when connecting. We'll use an SSH tunnel to connect securely to our server, and then tell our VNC client to use that tunnel rather than making a direct connection.
Create an SSH connection on your local computer that securely forwards to the localhost connection for VNC. You can do this via the terminal on Linux or macOS with the following command:
- ssh -L 5901:127.0.0.1:5901 -C -N -l sammy your_server_ip
The -L switch specifies the port bindings. In this case we're binding port 5901 of the remote connection to port 5901 on your local machine. The -C switch enables compression, while the -N switch tells ssh that we don't want to execute a remote command. The -l switch specifies the remote login name.
Remember to replace sammy and your_server_ip with your non-root sudo username and the IP address of your server.
If you are using a graphical SSH client, like PuTTY, use your_server_ip as the connection IP, and set localhost:5901 as a new forwarded port in the program's SSH tunnel settings.
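If you are tunneling to a display other than :1, both port numbers change together, since each end of the tunnel uses 5900 plus the display number. A small sketch that builds the command for an arbitrary display (sammy and your_server_ip are the placeholders from the text above):

```shell
# Build the SSH tunnel command for a given VNC display number.
display=2                     # example display; adjust to match your server
port=$((5900 + display))      # both local and remote port: 5900 + display
echo "ssh -L ${port}:127.0.0.1:${port} -C -N -l sammy your_server_ip"
# prints: ssh -L 5902:127.0.0.1:5902 -C -N -l sammy your_server_ip
```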
Once the tunnel is running, use a VNC client to connect to localhost:5901. You'll be prompted to authenticate using the password you set in Step 1.
Once you are connected, you'll see the default Xfce desktop.
Select Use default config to configure your desktop quickly.
You can access files in your home directory with the file manager or from the command line, as seen here:
On your local machine, press CTRL+C in your terminal to stop the SSH tunnel and return to your prompt. This will disconnect your VNC session as well.
Next, we'll set up the VNC server as a service.
Now we'll set up the VNC server as a systemd service so we can start, stop, and restart it as needed, like any other service. This will also ensure that VNC starts up when your server reboots.
First, create a new unit file called /etc/systemd/system/vncserver@.service using your preferred text editor:
- sudo nano /etc/systemd/system/vncserver@.service
The @ symbol at the end of the name will let us pass in an argument we can use in the service configuration. We'll use this to specify the VNC display port we want to use when we manage the service.
Add the following lines to the file. Be sure to change the value of User, Group, WorkingDirectory, and the username in the value of PIDFILE to match your username:
/etc/systemd/system/vncserver@.service[Unit]
Description=Start TightVNC server at startup
After=syslog.target network.target
[Service]
Type=forking
User=sammy
Group=sammy
WorkingDirectory=/home/sammy
PIDFile=/home/sammy/.vnc/%H:%i.pid
ExecStartPre=-/usr/bin/vncserver -kill :%i > /dev/null 2>&1
ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :%i
ExecStop=/usr/bin/vncserver -kill :%i
[Install]
WantedBy=multi-user.target
The ExecStartPre command stops VNC if it's already running. The ExecStart command starts VNC and sets the color depth to 24-bit color with a resolution of 1280x800. You can modify these startup options as well to meet your needs.
Save and close the file.
Next, make the system aware of the new unit file.
- sudo systemctl daemon-reload
Enable the unit file.
- sudo systemctl enable vncserver@1.service
The 1 following the @ sign signifies which display number the service should appear over, in this case the default :1 as discussed in Step 2.
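To see what systemd does with that argument, here is a small sketch that mimics the expansion: when you manage vncserver@2, every %i in the template unit becomes 2. The sketch simulates this with sed; systemd performs the substitution internally.

```shell
# Simulate systemd's %i expansion for a template unit instance.
template='ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :%i'
instance=2
printf '%s\n' "$template" | sed "s/%i/$instance/g"
# prints: ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :2
```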
Stop the current instance of the VNC server if it's still running.
- vncserver -kill :1
Then start it as you would start any other systemd service.
- sudo systemctl start vncserver@1
You can verify that it started with this command:
- sudo systemctl status vncserver@1
If it started correctly, the output should look like this:
Output● vncserver@1.service - Start TightVNC server at startup
Loaded: loaded (/etc/systemd/system/vncserver@.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 16:47:40 UTC; 3s ago
Process: 4977 ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :1 (code=exited, status=0/SUCCESS)
Process: 4971 ExecStartPre=/usr/bin/vncserver -kill :1 > /dev/null 2>&1 (code=exited, status=0/SUCCESS)
Main PID: 4987 (Xtightvnc)
...
Your VNC server will now be available when you reboot the machine.
Start your SSH tunnel again:
- ssh -L 5901:127.0.0.1:5901 -C -N -l sammy your_server_ip
Then make a new connection using your VNC client software to localhost:5901 to connect to your machine.
You now have a secured VNC server up and running on your Debian 9 server. Now you'll be able to manage your files, software, and settings with an easy-to-use and familiar graphical interface, and you'll be able to run graphical software like web browsers remotely.
SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a Debian server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
In this guide, we'll focus on setting up SSH keys for a vanilla Debian 9 installation. SSH keys provide an easy, secure way of logging into your server and are recommended for all users.
The first step is to create a key pair on the client machine (usually your computer):
- ssh-keygen
By default ssh-keygen will create a 2048-bit RSA key pair, which is secure enough for most use cases (you may optionally pass in the -b 4096 flag to create a larger 4096-bit key).
After entering the command, you should see the following output:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):
Press ENTER to save the key pair into the .ssh/ subdirectory in your home directory, or specify an alternate path.
If you had previously generated an SSH key pair, you may see the following prompt:
Output/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes, as this is a destructive process that cannot be reversed.
You should then see the following prompt:
OutputEnter passphrase (empty for no passphrase):
Here you can optionally enter a secure passphrase, which is highly recommended. A passphrase adds an additional layer of security to prevent unauthorized users from logging in. To learn more about security, consult our tutorial on How To Configure SSH Key-Based Authentication on a Linux Server.
You should then see the following output:
OutputYour identification has been saved in /your_home/.ssh/id_rsa.
Your public key has been saved in /your_home/.ssh/id_rsa.pub.
The key fingerprint is:
a9:49:2e:2a:5e:33:3e:a9:de:4e:77:11:58:b6:90:26 username@remote_host
The key's randomart image is:
+--[ RSA 2048]----+
| ..o |
| E o= . |
| o. o |
| .. |
| ..S |
| o o. |
| =o.+. |
|. =++.. |
|o=++. |
+-----------------+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH-key-based authentication to log in.
The quickest way to copy your public key to the Debian host is to use a utility called ssh-copy-id. Due to its simplicity, this method is recommended if available. If you do not have ssh-copy-id available on your client machine, you may use one of the two alternate methods provided in this section (copying via password-based SSH, or manually copying the key).
ssh-copy-id
The ssh-copy-id tool is included by default in many operating systems, so you may have it available on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you simply need to specify the remote host that you would like to connect to and the user account that you have password-based SSH access to. This is the account to which your public SSH key will be copied.
The syntax is:
- ssh-copy-id username@remote_host
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type "yes" and press ENTER to continue.
Next, the utility will scan your local account for the id_rsa.pub key that we created earlier. When it finds the key, it will prompt you for the password of the remote user's account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Type in the password (your typing will not be displayed for security purposes) and press ENTER. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub key into a file named authorized_keys in the remote account's ~/.ssh directory.
You should see the following output:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub key has been uploaded to the remote account. You can continue on to Step 3.
If you do not have ssh-copy-id available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.
We can do this by using the cat command to read the contents of the public SSH key on our local computer and piping that through an SSH connection to the remote server.
On the other side, we can make sure that the ~/.ssh directory exists and has the correct permissions under the account we're using.
We can then output the content we piped over into a file called authorized_keys within this directory. We'll use the >> redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying previously added keys.
The full command looks like this:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
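The quoted remote portion of that pipeline can be tried locally to see each step in isolation. This sketch runs the same commands against a scratch directory and a dummy key string instead of a real server and key:

```shell
# Reproduce the remote side of the pipeline against a scratch directory.
demo_home=$(mktemp -d)                   # stand-in for the remote home directory
dummy_key='ssh-rsa AAAAB3Nza...example'  # dummy stand-in for id_rsa.pub contents

mkdir -p "$demo_home/.ssh"               # ensure the .ssh directory exists
touch "$demo_home/.ssh/authorized_keys"  # create the file if it's missing
chmod -R go= "$demo_home/.ssh"           # strip group/other permissions
echo "$dummy_key" >> "$demo_home/.ssh/authorized_keys"  # append, don't overwrite

cat "$demo_home/.ssh/authorized_keys"
```

Because the final step appends with >>, running it again with a second key leaves the first one in place.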
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type "yes" and press ENTER to continue.
Afterwards, you will be prompted for the remote user account's password:
Outputusername@203.0.113.1's password:
After entering your password, the content of your id_rsa.pub key will be copied to the end of the authorized_keys file of the remote user's account. If this worked, continue on to Step 3.
If you do not have password-based SSH access to your server available, you will have to complete the above process manually.
We will manually append the content of your id_rsa.pub file to the ~/.ssh/authorized_keys file on the remote machine.
To display the content of your id_rsa.pub key, type this into your local computer:
- cat ~/.ssh/id_rsa.pub
You will see the key's content, which should look something like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test
Access your remote host using whichever method you have available.
Once you have access to your account on the remote server, make sure the ~/.ssh directory exists. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys file within this directory. You can add the contents of your id_rsa.pub file to the end of the authorized_keys file, creating it if necessary, using this command:
- echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute public_key_string with the output from the cat ~/.ssh/id_rsa.pub command that you executed on your local system. It should start with ssh-rsa AAAA....
Finally, we'll ensure that the ~/.ssh directory and the authorized_keys file have the appropriate permissions set:
- chmod -R go= ~/.ssh
This recursively removes all "group" and "other" permissions for the ~/.ssh/ directory.
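You can verify the effect in a scratch directory: after chmod -R go=, the directory keeps only its owner bits (700) and files keep theirs (600). A minimal sketch, using the GNU coreutils form of stat:

```shell
# Demonstrate what `chmod -R go=` leaves behind.
demo=$(mktemp -d)/ssh-demo
mkdir -p "$demo"
touch "$demo/authorized_keys"
chmod -R go= "$demo"            # remove all group/other permissions recursively
stat -c '%a %n' "$demo" "$demo/authorized_keys"
```

The owner's own permissions are untouched; only the group and other bits are cleared.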
If you're using the root account to set up keys for a user account, it's also important that the ~/.ssh directory belongs to the user and not to root:
- chown -R sammy:sammy ~/.ssh
In this tutorial our user is named sammy, but you should substitute the appropriate username into the above command.
We can now attempt passwordless authentication with our Debian server.
If you have successfully completed one of the procedures above, you should be able to log into the remote host without the remote account's password.
The basic process is the same:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Type "yes" and then press ENTER to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created the key, you will be prompted to enter it now (note that your keystrokes will not display in the terminal session for security). After authenticating, a new shell session should open for you with the configured account on the Debian server.
If key-based authentication was successful, continue on to learn how to further secure your system by disabling password authentication.
If you were able to log into your account using SSH without a password, you have successfully configured SSH-key-based authentication for your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH-key-based authentication configured for the root account on this server, or preferably, that you have SSH-key-based authentication configured for a non-root account on this server with sudo privileges. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is crucial.
Once you've confirmed your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo privileges. Then, open up the SSH daemon's configuration file:
- sudo nano /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication. It may be commented out. Uncomment the line and set the value to "no". This will disable your ability to log in via SSH using account passwords:
...
PasswordAuthentication no
...
Save and close the file when you are finished by pressing CTRL+X, then Y to confirm saving the file, and finally ENTER to exit nano. To actually activate these changes, we need to restart the sshd service:
- sudo systemctl restart ssh
As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing this session:
- ssh username@remote_host
Once you have verified your SSH service, you can safely close all current server sessions.
The SSH daemon on your Debian server now only responds to SSH keys. Password-based authentication has been successfully disabled.
You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.
If you'd like to learn more about working with SSH, take a look at our SSH Essentials guide.
Want to access the internet safely and securely from your smartphone or laptop when connected to an untrusted network, such as the WiFi at a hotel or coffee shop? A Virtual Private Network (VPN) allows you to traverse untrusted networks privately and securely as if you were on a private network. The traffic emerges from the VPN server and continues its journey to the destination.
When combined with HTTPS connections, this setup allows you to secure your wireless logins and transactions. You can circumvent geographical restrictions and censorship, and shield your location and any unencrypted HTTP traffic from the untrusted network.
OpenVPN is a full-featured, open-source SSL VPN solution that accommodates a wide range of configurations. In this tutorial, you will set up an OpenVPN server on a Debian 9 server and then configure access to it from Windows, macOS, iOS and/or Android. This tutorial will keep the installation and configuration steps as simple as possible for each of these setups.
Note: If you plan to set up an OpenVPN server on a DigitalOcean Droplet, be aware that we, like many hosting providers, charge for bandwidth overages. For this reason, please be mindful of how much traffic your server is handling. See this page for more info.
To complete this tutorial, you will need access to a Debian 9 server to host your OpenVPN service. You will need to configure a non-root user with sudo privileges before you start this guide. You can follow our Debian 9 initial server setup guide to set up a user with the appropriate permissions. The linked tutorial will also set up a firewall, which is assumed to be in place throughout this guide.
Additionally, you will need a separate machine to serve as your certificate authority (CA). While it's technically possible to use your OpenVPN server or your local machine as your CA, this is not recommended as it opens up your VPN to some security vulnerabilities. Per the official OpenVPN documentation, you should place your CA on a standalone machine that's dedicated to importing and signing certificate requests. For this reason, this guide assumes that your CA is on a separate Debian 9 server that also has a non-root user with sudo privileges and a basic firewall.
Please note that if you disable password authentication while configuring these servers, you may run into difficulties when transferring files between them later on in this guide. To resolve this issue, you could re-enable password authentication on each server. Alternatively, you could generate an SSH keypair for each server, then add the OpenVPN server's public SSH key to the CA machine's authorized_keys file, and vice versa. See How to Set Up SSH Keys on Debian 9 for instructions on how to perform either of these solutions.
When you have these prerequisites in place, you can move on to Step 1 of this tutorial.
To start off, update your VPN server's package index and install OpenVPN. OpenVPN is available in Debian's default repositories, so you can use apt for the installation:
- sudo apt update
- sudo apt install openvpn
OpenVPN is a TLS/SSL VPN. This means that it utilizes certificates in order to encrypt traffic between the server and clients. To issue trusted certificates, you will set up your own simple certificate authority (CA). To do this, we will download the latest version of EasyRSA, which we will use to build our CA public key infrastructure (PKI), from the project's official GitHub repository.
As mentioned in the prerequisites, we will build the CA on a standalone server. The reason for this approach is that, if an attacker were able to infiltrate your server, they would be able to access your CA private key and use it to sign new certificates, giving them access to your VPN. Accordingly, managing the CA from a standalone machine helps to prevent unauthorized users from accessing your VPN. Note also that, as an additional precaution, it's recommended that you power off the CA server when it's not being used to sign keys.
To begin building the CA and PKI infrastructure, use wget to download the latest version of EasyRSA on both your CA machine and your OpenVPN server. To get the latest version, go to the Releases page on the official EasyRSA GitHub project, copy the download link for the file ending in .tgz, and then paste it into the following command:
- wget -P ~/ https://github.com/OpenVPN/easy-rsa/releases/download/v3.0.4/EasyRSA-3.0.4.tgz
Then extract the tarball:
- cd ~
- tar xvf EasyRSA-3.0.4.tgz
You have successfully installed all of the required software on your server and CA machine. Next, you'll configure the variables used by EasyRSA and set up a CA directory, from which you will generate the keys and certificates needed for your server and clients to access the VPN.
EasyRSA comes installed with a configuration file which you can edit to define a number of variables for your CA.
On your CA machine, navigate to the EasyRSA directory:
- cd ~/EasyRSA-3.0.4/
Inside this directory is a file named vars.example. Make a copy of this file, and name the copy vars without a file extension:
- cp vars.example vars
Open this new file using your preferred text editor:
- nano vars
Find the settings that set field defaults for new certificates. It will look something like this:
. . .
#set_var EASYRSA_REQ_COUNTRY "US"
#set_var EASYRSA_REQ_PROVINCE "California"
#set_var EASYRSA_REQ_CITY "San Francisco"
#set_var EASYRSA_REQ_ORG "Copyleft Certificate Co"
#set_var EASYRSA_REQ_EMAIL "me@example.net"
#set_var EASYRSA_REQ_OU "My Organizational Unit"
. . .
Uncomment these lines and update the highlighted values to whatever you'd prefer, but do not leave them blank:
. . .
set_var EASYRSA_REQ_COUNTRY "US"
set_var EASYRSA_REQ_PROVINCE "NewYork"
set_var EASYRSA_REQ_CITY "New York City"
set_var EASYRSA_REQ_ORG "DigitalOcean"
set_var EASYRSA_REQ_EMAIL "admin@example.com"
set_var EASYRSA_REQ_OU "Community"
. . .
When you are finished, save and close the file.
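If you prefer to make these edits non-interactively (for example, from a provisioning script), sed can uncomment and set each line. A sketch against a scratch copy of one line, assuming the "#set_var" prefix shown above and using GNU sed's -i flag (the country value is just an example):

```shell
# Uncomment and set one EasyRSA variable in a scratch copy of the vars file.
f=$(mktemp)
printf '#set_var EASYRSA_REQ_COUNTRY\t"US"\n' > "$f"
sed -i 's/^#set_var EASYRSA_REQ_COUNTRY.*/set_var EASYRSA_REQ_COUNTRY "US"/' "$f"
cat "$f"
# prints: set_var EASYRSA_REQ_COUNTRY "US"
```

The same pattern, repeated per variable, covers all six fields listed above.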
Within the EasyRSA directory is a script called easyrsa, which is called to perform a variety of tasks involved with building and managing the CA. Run this script with the init-pki option to initiate the public key infrastructure on the CA server:
- ./easyrsa init-pki
Output. . .
init-pki complete; you may now create a CA or requests.
Your newly created PKI dir is: /home/sammy/EasyRSA-3.0.4/pki
Then call the easyrsa script again, following it with the build-ca option. This will build the CA and create two important files, ca.crt and ca.key, which make up the public and private sides of an SSL certificate.
ca.crt is the CA's public certificate file, which the OpenVPN server and client use to inform each other that they are part of the same web of trust and that there is no potential man-in-the-middle attacker between them. For this reason, your server and all of your clients will need a copy of the ca.crt file.
ca.key is the CA's private key, used to sign keys and certificates for servers and clients. If an attacker gains access to your CA and, in turn, your ca.key file, they will be able to sign certificate requests and gain access to your VPN, compromising its security. This is why your ca.key file should only be on your CA machine and why, as an extra security measure, the CA machine should be kept offline when it's not being used to sign certificate requests.
If you don't want to be prompted for a password every time you interact with your CA, you can run the build-ca command with the nopass option:
- ./easyrsa build-ca nopass
After you run this command, you will be prompted to confirm the common name for your CA:
Output. . .
Common Name (eg: your user, host, or server name) [Easy-RSA CA]:
The common name is the name used to refer to this machine in the context of the Certificate Authority. You can enter any string of characters for the CA's common name but, for simplicity's sake, press ENTER to accept the default name.
With that, your CA is in place and it is ready to start signing certificate requests.
Now that your CA is ready to go, you can generate a private key and certificate request from your server, then transfer the request over to your CA to be signed, creating the required certificate. You're also free to create some additional files used during the encryption process.
Start by navigating to the EasyRSA directory on your OpenVPN server:
- cd EasyRSA-3.0.4/
Run the easyrsa script with the init-pki option on the server. Although you already ran this command on the CA machine, it's necessary to run it here because your server and CA will have separate PKI directories:
- ./easyrsa init-pki
Then call the easyrsa script again, this time with the gen-req option followed by a common name for the machine. Again, this could be anything you like, but it can be helpful to make it something memorable. Throughout this tutorial, the OpenVPN server's common name will simply be "server". Be sure to include the nopass option as well. Failing to do so will password-protect the request file, which could lead to permissions issues later on:
Note: If you choose a name other than "server" here, you will have to adjust some of the instructions below. For instance, when copying the generated files to the /etc/openvpn directory, you will have to substitute the correct names. You will also have to modify the /etc/openvpn/server.conf file later to point to the correct .crt and .key files.
- ./easyrsa gen-req server nopass
This will create a private key for the server and a certificate request file called server.req. Copy the server key to the /etc/openvpn/ directory:
- sudo cp ~/EasyRSA-3.0.4/pki/private/server.key /etc/openvpn/
Using a secure method (like SCP, in the example below), transfer the server.req file to your CA machine:
- scp ~/EasyRSA-3.0.4/pki/reqs/server.req sammy@your_CA_ip:/tmp
On your CA machine, navigate to the EasyRSA directory:
- cd EasyRSA-3.0.4/
Using the easyrsa script again, import the server.req file, following the file path with its common name:
- ./easyrsa import-req /tmp/server.req server
Then sign the request by running the easyrsa script with the sign-req option, followed by the request type and the common name. The request type can either be client or server, so for the OpenVPN server's certificate request, be sure to use the server request type:
- ./easyrsa sign-req server server
In the output, you'll be asked to verify that the request comes from a trusted source. Type yes then press ENTER to confirm this:
You are about to sign the following certificate.
Please check over the details shown below for accuracy. Note that this request
has not been cryptographically verified. Please be sure it came from a trusted
source or that you have verified the request checksum with the sender.
Request subject, to be signed as a server certificate for 3650 days:
subject=
commonName = server
Type the word 'yes' to continue, or any other input to abort.
Confirm request details: yes
If you encrypted your CA key, you'll be prompted for your password at this point.
Next, transfer the signed certificate back to your VPN server using a secure method:
- scp pki/issued/server.crt sammy@your_server_ip:/tmp
Before logging out of your CA machine, transfer the ca.crt file to your server as well:
- scp pki/ca.crt sammy@your_server_ip:/tmp
Next, log back into your OpenVPN server and copy the server.crt and ca.crt files into the /etc/openvpn/ directory:
- sudo cp /tmp/{server.crt,ca.crt} /etc/openvpn/
Then navigate to the EasyRSA directory:
- cd EasyRSA-3.0.4/
From there, create a strong Diffie-Hellman key to use during key exchange:
- ./easyrsa gen-dh
This may take a few minutes to complete. Once it does, generate an HMAC signature to strengthen the server's TLS integrity verification capabilities:
- sudo openvpn --genkey --secret ta.key
When the command is finished, copy the two new files to your /etc/openvpn/ directory:
- sudo cp ~/EasyRSA-3.0.4/ta.key /etc/openvpn/
- sudo cp ~/EasyRSA-3.0.4/pki/dh.pem /etc/openvpn/
With that, all the certificate and key files needed by your server have been generated. You're ready to create the corresponding certificates and keys which your client machine will use to access your OpenVPN server.
Although you can generate a private key and certificate request on your client machine and then send it to the CA to be signed, this guide outlines a process for generating the certificate request on the server. The benefit of this is that we can create a script that will automatically generate client configuration files containing all of the required keys and certificates. This lets you avoid having to transfer keys, certificates, and configuration files to clients and streamlines the process of joining the VPN.
We will generate a single client key and certificate pair for this guide. If you have more than one client, you can repeat this process for each one. Please note, though, that you will need to pass a unique name value to the script for every client. Throughout this tutorial, the first certificate/key pair is referred to as client1.
Get started by creating a directory structure within your home directory to store the client certificate and key files:
- mkdir -p ~/client-configs/keys
Since your clients' certificate/key pairs and configuration files will be stored in this directory, you should lock down its permissions now as a security measure:
- chmod -R 700 ~/client-configs
Next, navigate back to the EasyRSA directory and run the easyrsa script with the gen-req and nopass options, along with the common name for the client:
- cd ~/EasyRSA-3.0.4/
- ./easyrsa gen-req client1 nopass
Press ENTER to confirm the common name. Then, copy the client1.key file to the /client-configs/keys/ directory you created earlier:
- cp pki/private/client1.key ~/client-configs/keys/
Next, transfer the client1.req file to your CA machine using a secure method:
- scp pki/reqs/client1.req sammy@your_CA_ip:/tmp
Log in to your CA machine, navigate to the EasyRSA directory, and import the certificate request:
- ssh sammy@your_CA_ip
- cd EasyRSA-3.0.4/
- ./easyrsa import-req /tmp/client1.req client1
Next, sign the request as you did for the server in the previous step. This time, though, be sure to specify the client request type:
- ./easyrsa sign-req client client1
When prompted, enter yes to confirm that you intend to sign the certificate request and that it came from a trusted source:
OutputType the word 'yes' to continue, or any other input to abort.
Confirm request details: yes
If you encrypted your CA key, you'll be prompted for your password at this point.
This will create a client certificate file named client1.crt. Transfer this file back to the server:
- scp pki/issued/client1.crt sammy@your_server_ip:/tmp
SSH back to your OpenVPN server and copy the client certificate to the /client-configs/keys/ directory:
- cp /tmp/client1.crt ~/client-configs/keys/
Next, copy the ca.crt and ta.key files to the /client-configs/keys/ directory as well:
- sudo cp ~/EasyRSA-3.0.4/ta.key ~/client-configs/keys/
- sudo cp /etc/openvpn/ca.crt ~/client-configs/keys/
With that, your server's and client's certificates and keys have all been generated and are stored in the appropriate directories on your server. There are still a few actions that need to be performed with these files, but those will come in a later step. For now, you can move on to configuring OpenVPN on your server.
Now that you've generated your client and server certificates and keys, you can begin configuring the OpenVPN service to use these credentials.
Start by copying a sample OpenVPN configuration file into the configuration directory and then extracting it, in order to use it as a basis for your setup:
- sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
- sudo gzip -d /etc/openvpn/server.conf.gz
Open the server configuration file in your preferred text editor:
- sudo nano /etc/openvpn/server.conf
Find the HMAC section by looking for the tls-auth directive. This line should already be uncommented, but if it isn't, remove the ";" from the beginning of the line:
tls-auth ta.key 0 # This file is secret
Next, find the section on cryptographic ciphers by looking for the commented-out cipher lines. The AES-256-CBC cipher offers a good level of encryption and is well supported. Again, this line should already be uncommented, but if it isn't, remove the ";" from the beginning of the line:
cipher AES-256-CBC
Below that, add an auth directive to select the HMAC message digest algorithm. For this, SHA256 is a good choice:
auth SHA256
Next, find the line containing a dh directive, which defines Diffie-Hellman parameters. Because of some recent changes made to EasyRSA, the filename for the Diffie-Hellman key may be different than what is listed in the example server configuration file. If necessary, change the filename listed here by removing 2048 so that it matches the key you generated in the previous step:
dh dh.pem
Finally, find the user and group settings and remove the ";" from the beginning of each line to uncomment them:
user nobody
group nogroup
The changes you've made to the sample server.conf file so far are necessary in order for OpenVPN to function. The changes outlined below are optional, though they too are needed for many common use cases.
The settings above will create the VPN connection between the two machines, but they will not force any connections to use the tunnel. If you wish to use the VPN to route all of your traffic, you will likely want to push the DNS settings to the client computers.
There are a few directives in the server.conf file that you must change in order to enable this functionality. Find the redirect-gateway section and remove the semicolon ";" from the beginning of the redirect-gateway line to uncomment it:
push "redirect-gateway def1 bypass-dhcp"
Just below that, find the dhcp-option section. Again, remove the ";" from the beginning of both lines to uncomment them:
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"
This will assist clients in reconfiguring their DNS settings so that the VPN tunnel is used as the default gateway.
By default, the OpenVPN server uses port 1194 and the UDP protocol to accept client connections. If you need to use a different port because of restrictive network environments that your clients might be in, you can change the port option. If you do not host web content on your OpenVPN server, port 443 is a popular choice, since it is usually allowed through firewall rules.
# Optional!
port 443
Oftentimes the protocol is restricted to that port as well. If so, change the proto value from UDP to TCP:
# Optional!
proto tcp
If you do switch the protocol to TCP, you will need to change the value of the explicit-exit-notify directive from 1 to 0, as this directive is only used by UDP. Failing to do so while using TCP will cause errors when you start the OpenVPN service:
# Optional!
explicit-exit-notify 0
If you have no need to use a different port and protocol, it is best to leave these settings as their defaults.
If you selected a different name during the ./build-key-server command earlier, modify the cert and key lines so that they point to the appropriate .crt and .key files. If you used the default name, "server", nothing needs to change:
cert server.crt
key server.key
When you are finished editing, save and close the file.
After going through and making whatever changes your server's OpenVPN configuration requires, you can begin making some changes to your server's networking.
There are some aspects of the server's networking configuration that need to be tweaked so that OpenVPN can correctly route traffic through the VPN. The first of these is IP forwarding, the setting that determines whether IP traffic should be forwarded between interfaces. This is essential to the VPN functionality that your server will provide.
Adjust your server's default IP forwarding setting by editing the /etc/sysctl.conf file:
- sudo nano /etc/sysctl.conf
Inside, look for the commented-out line that sets net.ipv4.ip_forward. Remove the "#" character from the beginning of the line to uncomment this setting:
net.ipv4.ip_forward=1
Save and close the file when you are finished.
To read the file and adjust the values for the current session, type:
- sudo sysctl -p
Outputnet.ipv4.ip_forward = 1
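If you prefer to script this change instead of editing the file by hand, the same uncommenting can be done with sed. The sketch below operates on a temporary copy for illustration; to apply it to the real file you would point it at /etc/sysctl.conf and run it with sudo:

```shell
# Work on a scratch file that mimics the commented-out default.
conf=$(mktemp)
printf '#net.ipv4.ip_forward=1\n' > "$conf"

# Remove the leading "#" from the ip_forward line to enable it.
sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' "$conf"

cat "$conf"
```

After applying this to the real file, `sudo sysctl -p` loads the new value just as in the step above.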
If you followed the Debian 9 initial server setup guide listed in the prerequisites, you should have the UFW firewall installed. Regardless of whether you use the firewall to block unwanted traffic (which you almost always should do), for this guide you need a firewall to manipulate some of the traffic coming into the server. Some of the firewall rules must be modified to enable masquerading, an iptables concept that provides on-the-fly dynamic network address translation (NAT) to correctly route client connections.
Before opening the firewall configuration file to add the masquerading rules, you must first find the public network interface of your machine. To do this, type:
- ip route | grep default
Your public interface is the string found in this command's output following the word "dev". For example, this result shows an interface named eth0:
Outputdefault via 203.0.113.1 dev eth0 onlink
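If you want to capture the interface name in a variable rather than reading it by eye, you can parse the token that follows "dev" with awk. This sketch runs against the sample output above; on your server you would feed it the output of `ip route | grep default` instead:

```shell
# Sample line as produced by `ip route | grep default`.
route_line='default via 203.0.113.1 dev eth0 onlink'

# Print the token that immediately follows the word "dev".
iface=$(printf '%s\n' "$route_line" | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}')
echo "$iface"
```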
When you have the interface associated with your default route, open the /etc/ufw/before.rules file to add the relevant configuration:
- sudo nano /etc/ufw/before.rules
UFW rules are typically added using the ufw command. Rules listed in the before.rules file, though, are read and put into place before the conventional UFW rules are loaded. Towards the top of the file, add the highlighted lines below. This will set the default policy for the POSTROUTING chain in the nat table and masquerade any traffic coming from the VPN. Remember to replace eth0 in the -A POSTROUTING line with the interface you found with the ip route command above:
#
# rules.before
#
# Rules that should be run before the ufw command line added rules. Custom
# rules should be added to one of these chains:
# ufw-before-input
# ufw-before-output
# ufw-before-forward
#
# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to eth0 (change to the interface you discovered!)
-A POSTROUTING -s 10.8.0.0/8 -o eth0 -j MASQUERADE
COMMIT
# END OPENVPN RULES
# Don't delete these required lines, otherwise there will be errors
*filter
. . .
Save and close the file when you are finished.
Next, you need to tell UFW to allow forwarded packets by default as well. To do this, open the /etc/default/ufw file:
- sudo nano /etc/default/ufw
Inside, find the DEFAULT_FORWARD_POLICY directive and change the value from DROP to ACCEPT:
DEFAULT_FORWARD_POLICY="ACCEPT"
Save and close the file when you are finished.
Next, adjust the firewall itself to allow OpenVPN traffic. If you did not change the port and protocol in the /etc/openvpn/server.conf file, you will need to open up UDP traffic on port 1194. If you modified the port or protocol, substitute the values you selected here.
In case you forgot to add the SSH port when following the prerequisite tutorial, add it here as well:
- sudo ufw allow 1194/udp
- sudo ufw allow OpenSSH
After adding those rules, disable and re-enable UFW to load the changes from all of the files you've modified:
- sudo ufw disable
- sudo ufw enable
Your server is now configured to correctly handle OpenVPN traffic.
You're finally ready to start the OpenVPN service on your server. This is done using the systemd utility systemctl.
Start the OpenVPN server by specifying your configuration file name as an instance variable after the systemd unit file name. The configuration file for your server is called /etc/openvpn/server.conf, so add @server to the end of your unit file when calling it:
- sudo systemctl start openvpn@server
Double-check that the service has started successfully by typing:
- sudo systemctl status openvpn@server
If everything went well, your output will look something like this:
Output● openvpn@server.service - OpenVPN connection to server
Loaded: loaded (/lib/systemd/system/openvpn@.service; disabled; vendor preset: enabled)
Active: active (running) since Tue 2016-05-03 15:30:05 EDT; 47s ago
Docs: man:openvpn(8)
https://community.openvpn.net/openvpn/wiki/Openvpn23ManPage
https://community.openvpn.net/openvpn/wiki/HOWTO
Process: 5852 ExecStart=/usr/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/%i.conf --writepid /run/openvpn/%i.pid (code=exited, sta
Main PID: 5856 (openvpn)
Tasks: 1 (limit: 512)
CGroup: /system.slice/system-openvpn.slice/openvpn@server.service
└─5856 /usr/sbin/openvpn --daemon ovpn-server --status /run/openvpn/server.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/server.conf --writepid /run/openvpn/server.pid
You can also check that the OpenVPN tun0 interface is available by typing:
- ip addr show tun0
This will output the configured interface:
Output4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 100
link/none
inet 10.8.0.1 peer 10.8.0.2/32 scope global tun0
valid_lft forever preferred_lft forever
After starting the service, enable it so that it starts automatically at boot:
- sudo systemctl enable openvpn@server
The OpenVPN service is now up and running. Before you can start using it, though, you must first create a configuration file for the client machine. This tutorial has already gone over how to create certificate/key pairs for clients, and in the next step we will demonstrate how to create an infrastructure that will generate client configuration files easily.
Creating configuration files for OpenVPN clients can be somewhat involved, as every client must have its own config and each must align with the settings outlined in the server's configuration file. Rather than writing a single configuration file that can only be used on one client, this step outlines a process for building a client configuration infrastructure which you can use to generate config files on the fly. You will first create a "base" configuration file, then build a script which will allow you to generate unique client config files, certificates, and keys as needed.
Get started by creating a new directory to store client configuration files within the client-configs directory you created earlier:
- mkdir -p ~/client-configs/files
Next, copy an example client configuration file into the client-configs directory to use as your base configuration:
- cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client-configs/base.conf
Open this new file in your text editor:
- nano ~/client-configs/base.conf
Inside, locate the remote directive. This points the client to your OpenVPN server's address, i.e. the public IP address of your OpenVPN server. If you decided to change the port that the OpenVPN server is listening on, you will also need to change 1194 to the port you selected:
. . .
# The hostname/IP and port of the server.
# You can have multiple remote entries
# to load balance between the servers.
remote your_server_ip 1194
. . .
Be sure that the protocol matches the value you are using in the server configuration:
proto udp
Next, uncomment the user and group directives by removing the ";" from the beginning of each line:
# Downgrade privileges after initialization (non-Windows only)
user nobody
group nogroup
Find the directives that set ca, cert, and key. Comment out these directives, since you will add the certificates and keys within the file itself shortly:
# SSL/TLS parms.
# See the server config file for more
# description. It's best to use
# a separate .crt/.key file pair
# for each client. A single ca
# file can be used for all clients.
#ca ca.crt
#cert client.crt
#key client.key
Similarly, comment out the tls-auth directive, as you will add the ta.key contents directly into the client configuration file:
# If a tls-auth key is used on the server
# then every client must also have the key.
#tls-auth ta.key 1
Mirror the cipher and auth settings that you set in the /etc/openvpn/server.conf file:
cipher AES-256-CBC
auth SHA256
Next, add the key-direction directive somewhere in the file. You must set this to "1" for the VPN to function correctly on the client machine:
key-direction 1
Finally, add a few commented-out lines. Although you can include these directives in every client configuration file, you only need to enable them for Linux clients that ship with an /etc/openvpn/update-resolv-conf file. This script uses the resolvconf utility to update DNS information for Linux clients.
# script-security 2
# up /etc/openvpn/update-resolv-conf
# down /etc/openvpn/update-resolv-conf
If your client is running Linux and has an /etc/openvpn/update-resolv-conf file, uncomment these lines in the client's configuration file once it has been generated.
Save and close the file when you are finished.
Next, create a simple script that will compile your base configuration with the relevant certificate, key, and encryption files, and then place the generated configuration in the ~/client-configs/files directory. Open a new file called make_config.sh within the ~/client-configs directory:
- nano ~/client-configs/make_config.sh
Inside, add the following content, making sure to change sammy to the name of your server's non-root user account:
#!/bin/bash
# First argument: Client identifier
KEY_DIR=/home/sammy/client-configs/keys
OUTPUT_DIR=/home/sammy/client-configs/files
BASE_CONFIG=/home/sammy/client-configs/base.conf
cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn
Save and close the file when you are finished.
Before moving on, be sure to mark this file as executable by typing:
- chmod 700 ~/client-configs/make_config.sh
This script makes a copy of the base.conf file you made, collects all of the certificate and key files you've created for your client, extracts their contents, appends them to the copy of the base configuration file, and exports all of this content into a new client configuration file. This means that, rather than having to manage the client's configuration, certificate, and key files separately, all of the required information is stored in one place. The benefit is that if you ever need to add a client in the future, you can just run this script to quickly create the config file, with all of the important information kept in a single, easy-to-access location.
Please note that any time you add a new client, you will need to generate new keys and certificates for it before you can run this script and generate its configuration file. You will get some practice using this script in the next step.
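The core of make_config.sh is the single cat invocation that interleaves the base configuration with inline <ca>, <cert>, <key>, and <tls-auth> blocks. The sketch below reproduces that structure with throwaway placeholder files (only a fake remote line plus <ca> and <tls-auth> blocks), written as a portable command group instead of process substitution, so you can see the shape of the generated .ovpn file without touching your real keys:

```shell
# Stand-in files for base.conf and the key material (placeholders only).
workdir=$(mktemp -d)
printf 'remote your_server_ip 1194\n' > "$workdir/base.conf"
printf 'FAKE-CA-CERT\n' > "$workdir/ca.crt"
printf 'FAKE-TA-KEY\n'  > "$workdir/ta.key"

# Interleave the base config with inline blocks, as make_config.sh does.
{
  cat "$workdir/base.conf"
  echo '<ca>';       cat "$workdir/ca.crt"; echo '</ca>'
  echo '<tls-auth>'; cat "$workdir/ta.key"; echo '</tls-auth>'
} > "$workdir/demo.ovpn"

cat "$workdir/demo.ovpn"
```

The real script does exactly this with your actual ca.crt, client certificate, key, and ta.key, so the resulting .ovpn file is fully self-contained.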
If you followed along with the guide, you created a client certificate and key named client1.crt and client1.key, respectively, in Step 4. You can generate a config file for these credentials by moving into your ~/client-configs directory and running the script you made at the end of the previous step:
- cd ~/client-configs
- sudo ./make_config.sh client1
This will create a file named client1.ovpn in your ~/client-configs/files directory:
- ls ~/client-configs/files
Outputclient1.ovpn
You need to transfer this file to the device you plan to use as the client. For instance, this could be your local computer or a mobile device.
While the exact applications used to accomplish this transfer will depend on your device's operating system and your personal preferences, a dependable and secure method is to use SFTP (SSH file transfer protocol) or SCP (Secure Copy) on the backend. This will transport your client's VPN authentication files over an encrypted connection.
Here is an example SFTP command using the client1.ovpn example, which you can run from your local computer (macOS or Linux). It places the .ovpn file in your home directory:
- sftp sammy@your_server_ip:client-configs/files/client1.ovpn ~/
Here are several tools and tutorials for securely transferring files from the server to a local computer:
This section covers how to install a client VPN profile on Windows, macOS, Linux, iOS, and Android. None of these client instructions depend on one another, so feel free to skip to whichever is applicable to your device.
The OpenVPN connection will have the same name as whatever you called the .ovpn file. For this tutorial, this means that the connection is named client1.ovpn, matching the first client file you generated.
Installing
Download the OpenVPN client application for Windows from OpenVPN's Downloads page. Choose the appropriate installer version for your version of Windows.
Note: OpenVPN needs administrative privileges to install.
After installing OpenVPN, copy the .ovpn file to:
C:\Program Files\OpenVPN\config
When you launch OpenVPN, it will automatically see the profile and make it available.
You must run OpenVPN as an administrator each time it's used, even from administrative accounts. To avoid having to right-click and select Run as administrator every time you use the VPN, you can preset this from an administrative account. This also means that standard users will need to enter the administrator's password to use OpenVPN. On the other hand, standard users can't properly connect to the server unless the OpenVPN application on the client has admin rights, so the elevated privileges are necessary.
To set the OpenVPN application to always run as an administrator, right-click on its shortcut icon and go to Properties. At the bottom of the Compatibility tab, click the button to Change settings for all users. In the new window, check Run this program as an administrator.
Connecting
Each time you launch the OpenVPN GUI, Windows will ask if you want to allow the program to make changes to your computer. Click Yes. Launching the OpenVPN client application only places the applet in the system tray so that you can connect and disconnect the VPN as needed; it does not actually establish the VPN connection.
Once OpenVPN is started, initiate a connection by right-clicking the OpenVPN applet icon in the system tray. This opens the context menu. Select client1 at the top of the menu (that's your client1.ovpn profile) and choose Connect.
A status window will open showing the log output while the connection is established, and a message will be displayed once the client is connected.
Disconnect from the VPN the same way: go into the system tray, right-click the OpenVPN applet icon, select the client profile, and click Disconnect.
Installing
Tunnelblick is a free, open source OpenVPN client for macOS. You can download the latest disk image from the Tunnelblick Downloads page. Double-click the downloaded .dmg file and follow the prompts to install.
Towards the end of the installation process, Tunnelblick will ask if you have any configuration files. For simplicity, answer No and let Tunnelblick finish installing. Open a Finder window and double-click client1.ovpn. Tunnelblick will install the client profile. Administrative privileges are required.
Connecting
Launch Tunnelblick by double-clicking Tunnelblick in the Applications folder. Once Tunnelblick has launched, there will be a Tunnelblick icon in the menu bar at the top right of the screen for controlling connections. Click on the icon, and then click the Connect menu item to initiate the VPN connection. Select the client1 connection.
If you are using Linux, there are a variety of tools you can use depending on your distribution. Your desktop environment or window manager might also include connection utilities. The most universal way of connecting, however, is to just use the OpenVPN software.
On Ubuntu or Debian, you can install it just as you did on the server by typing:
- sudo apt update
- sudo apt install openvpn
On CentOS, you can enable the EPEL repositories and then install it by typing:
- sudo yum install epel-release
- sudo yum install openvpn
Check to see whether your distribution includes an /etc/openvpn/update-resolv-conf script:
- ls /etc/openvpn
Outputupdate-resolv-conf
Next, edit the OpenVPN client configuration file you transferred:
- nano client1.ovpn
If you were able to find an update-resolv-conf file, uncomment the three lines you added to adjust the DNS settings:
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
If you are using CentOS, change the group directive from nogroup to nobody to match the distribution's available groups:
group nobody
Save and close the file.
Now you can connect to the VPN by simply pointing the openvpn command to the client configuration file:
- sudo openvpn --config client1.ovpn
This should connect you to your VPN.
Installing
From the iTunes App Store, search for and install OpenVPN Connect, the official iOS OpenVPN client application. To transfer your iOS client configuration onto the device, connect it directly to a computer.
The process of completing the transfer with iTunes is outlined here. Open iTunes on the computer and click on iPhone > apps. Scroll down to the File Sharing section and click the OpenVPN app. The blank window to the right, OpenVPN Documents, is for sharing files. Drag the .ovpn file into the OpenVPN Documents window.
Now launch the OpenVPN app on the iPhone. You will receive a notification that a new profile is ready to import. Tap the green plus sign to import it.
Connecting
OpenVPN is now ready to use with the new profile. Start the connection by sliding the Connect button to the On position. Disconnect by sliding the same button to Off.
Note: The VPN switch under Settings cannot be used to connect to the VPN. If you try, you will receive a notice to connect using the OpenVPN app instead.
Installing
Open the Google Play Store. Search for and install Android OpenVPN Connect, the official Android OpenVPN client application.
You can transfer the .ovpn profile by connecting the Android device to your computer over USB and copying the file. Alternatively, if you have an SD card reader, you can remove the device's SD card, copy the profile onto it, and then insert the card back into the Android device.
Start the OpenVPN app and tap the menu to import the profile.
Then navigate to the location of the saved profile (the screenshot uses /sdcard/Download/) and select the file. The app will note that the profile was imported.
Connecting
To connect, simply tap the Connect button. You'll be asked whether you trust the OpenVPN application. Choose OK to initiate the connection. To disconnect from the VPN, go back to the OpenVPN app and choose Disconnect.
Note: This method for testing your VPN connection will only work if you opted to route all your traffic through the VPN in Step 5.
Once everything is installed, a simple check will confirm that everything is working properly. Without having a VPN connection enabled, open a browser and go to DNSLeakTest.
The site will show the IP address assigned by your internet service provider, i.e. how you appear to the rest of the world. To check your DNS settings through the same website, click on Extended Test and it will tell you which DNS servers you are using.
Now connect the OpenVPN client to your server's VPN and refresh the browser. A completely different IP address (that of your VPN server) should now appear, and this is how you appear to the world. Again, DNSLeakTest's Extended Test will check your DNS settings and confirm that you are now using the DNS resolvers pushed by your VPN.
Occasionally, you may need to revoke a client certificate to prevent further access to the OpenVPN server.
To do so, navigate to the EasyRSA directory on your CA machine:
- cd EasyRSA-3.0.4/
Next, run the easyrsa script with the revoke option, followed by the name of the client whose certificate you wish to revoke:
- ./easyrsa revoke client2
This will ask you to confirm the revocation. Enter yes:
OutputPlease confirm you wish to revoke the certificate with the following subject:
subject=
commonName = client2
Type the word 'yes' to continue, or any other input to abort.
Continue with revocation: yes
After confirming the action, the CA will fully revoke the client's certificate. However, your OpenVPN server currently has no way to check whether any clients' certificates have been revoked, so any revoked client would still have access to the VPN. To fix this, create a certificate revocation list (CRL) on your CA machine:
- ./easyrsa gen-crl
This will generate a file called crl.pem. Securely transfer this file to your OpenVPN server:
- scp ~/EasyRSA-3.0.4/pki/crl.pem sammy@your_server_ip:/tmp
On your OpenVPN server, copy this file into the /etc/openvpn/ directory:
- sudo cp /tmp/crl.pem /etc/openvpn
Next, open the OpenVPN server configuration file:
- sudo nano /etc/openvpn/server.conf
At the bottom of the file, add the crl-verify option, which will instruct the OpenVPN server to check the certificate revocation list you've created each time a connection attempt is made:
crl-verify crl.pem
Save and close the file.
Restart OpenVPN to implement the certificate revocation:
- sudo systemctl restart openvpn@server
The client should no longer be able to connect to the server using the old credential.
To revoke access for any additional clients, follow this process:
1. Revoke the certificate with ./easyrsa revoke client_name.
2. Generate a new CRL with ./easyrsa gen-crl.
3. Transfer the new crl.pem file to your OpenVPN server and copy it into the /etc/openvpn directory to overwrite the old list.
Using this process, you can revoke any certificates that you've previously issued for your server.
You can now safely traverse the internet with your identity, location, and traffic protected from snoopers and censors. If you no longer need to issue certificates, shut down your CA machine or disconnect it from the internet until you need to add or revoke a certificate. This will keep attackers from gaining access to your VPN.
To configure more client systems, repeat Steps 4 and 9-11 for each additional device. To revoke access from clients, follow Step 12.
UFW (Uncomplicated Firewall) is a frontend for iptables geared towards simplifying the process of configuring a firewall. While iptables is a solid and flexible tool, it can be difficult for beginners to learn how to use it to properly configure a firewall. If you're looking to get started securing your network and you're not sure which tool to use, UFW may be the right choice for you.
This tutorial will show you how to set up a firewall with UFW on Debian 9.
To follow this tutorial, you will need one Debian 9 server with a non-root sudo user, which you can set up by following the Debian 9 initial server setup tutorial.
Debian does not install UFW by default. If you followed the entire initial server setup tutorial, you will have already installed and enabled UFW. If not, install it with apt:
- sudo apt install ufw
You will configure and enable UFW in the steps that follow.
This tutorial is written with IPv4 in mind, but it will work for IPv6 as well, as long as you enable it. If your Debian server has IPv6 enabled, make sure UFW is configured to support IPv6 so that it manages firewall rules for IPv6 in addition to IPv4. To do this, open the UFW configuration with nano or your preferred editor:
- sudo nano /etc/default/ufw
Then make sure the IPV6 setting has the value yes. The configuration should look like this:
IPV6=yes
Save and close the file. Now, when UFW is enabled, it will be configured to write both IPv4 and IPv6 firewall rules. However, before enabling UFW, you need to make sure your firewall is configured to allow you to connect via SSH. Let's start with setting the default policies.
If you're just getting started with your firewall, the first rules to define are your default policies. These rules control how to handle traffic that does not explicitly match any other rules. By default, UFW is set to deny all incoming connections and allow all outgoing connections. This means anyone trying to reach your server would be blocked, while any application on the server would be able to reach the outside world.
Let's set your UFW rules back to the defaults so you can be sure you'll be able to follow along with this tutorial. To set the defaults used by UFW, use these commands:
- sudo ufw default deny incoming
- sudo ufw default allow outgoing
These commands set the defaults to deny incoming connections and allow outgoing ones. These firewall defaults alone might suffice for a personal computer, but servers typically need to respond to incoming requests from outside users. We'll look into that next.
Если мы сейчас активируем брандмауэр UFW, все входящие соединения будут запрещены. Это означает, что нам нужно создать правила, прямо разрешающие легитимные входящие соединения (например, SSH или HTTP), если мы хотим, чтобы сервер отвечал на такие запросы. Если вы используете облачный сервер, вы наверное хотите разрешить входящие соединения SSH, чтобы можно было подключаться к серверу и управлять им.
Чтобы разрешить на сервере входящие соединения SSH, вы можете использовать следующую команду:
- sudo ufw allow ssh
This will create firewall rules that allow all connections on port 22, which is the port that the SSH daemon listens on by default. UFW knows what port allow ssh means because it's listed as a service in the /etc/services file.
However, we can actually write the equivalent rule by specifying the port instead of the service name. For example, this command works the same as the one above:
- sudo ufw allow 22
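If you're curious how UFW resolves a service name like ssh to a port, you can search /etc/services yourself. This is just an illustrative check, not a UFW command, and the exact whitespace in the file may vary between systems:

```shell
# Show the port/protocol entries registered under the "ssh" service name.
# "ufw allow ssh" resolves to port 22 through this same lookup table.
grep -E '^ssh[[:space:]]' /etc/services
```

On a stock Debian system this prints the 22/tcp entry for SSH.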
If you configured your SSH daemon to use a different port, you will have to specify the appropriate port. For example, if your SSH server is listening on port 2222, you can use this command to allow connections on that port:
- sudo ufw allow 2222
Now that your firewall is configured to allow incoming SSH connections, we can enable it.
To enable UFW, use this command:
- sudo ufw enable
You will receive a warning that says the command may disrupt existing SSH connections. We already set up a firewall rule that allows SSH connections, so it should be fine to continue. Respond to the prompt with y and hit ENTER.
The firewall is now active. Run the sudo ufw status verbose command to see the rules that are set. The rest of this tutorial covers how to use UFW in more detail, like allowing or denying different kinds of connections.
At this point, you should allow all of the other connections that your server needs to respond to. The connections you should allow depend on your specific needs. Luckily, you already know how to write rules that allow connections based on a service name or port number; we already did this for SSH on port 22. You can also do this for:
sudo ufw allow http or sudo ufw allow 80
sudo ufw allow https or sudo ufw allow 443
There are several other ways to allow connections besides specifying a port or known service.
With UFW, you can also specify port ranges. Some applications use multiple ports instead of a single port.
For example, to allow X11 connections, which use ports 6000-6007, use these commands:
- sudo ufw allow 6000:6007/tcp
- sudo ufw allow 6000:6007/udp
When specifying port ranges with UFW, you must specify the protocol (tcp or udp) that the rules should apply to. We haven't mentioned this before because not specifying the protocol simply allows both protocols, which is fine in most cases.
When working with UFW, you can also specify IP addresses. For example, if you want to allow connections from a specific IP address, such as a work or home address of 203.0.113.4, you need to specify from, then the IP address:
- sudo ufw allow from 203.0.113.4
You can also specify a particular port that the IP address is allowed to connect to by adding to any port followed by the port number. For example, if you want to allow 203.0.113.4 to connect to port 22 (SSH), use this command:
- sudo ufw allow from 203.0.113.4 to any port 22
If you want to allow a subnet of IP addresses, you can do so using CIDR notation to specify a netmask. For example, if you want to allow all of the IP addresses ranging from 203.0.113.1 to 203.0.113.254, you can use this command:
- sudo ufw allow from 203.0.113.0/24
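As a quick sanity check on the CIDR notation: a /24 mask leaves 32 − 24 = 8 host bits, which covers 2^8 = 256 addresses, with the .0 network and .255 broadcast addresses bracketing the usable .1 through .254 range. The arithmetic can be verified directly in the shell:

```shell
# Addresses covered by a CIDR prefix: 2^(32 - prefix_length),
# computed with a POSIX-compatible bit shift.
prefix=24
echo $(( 1 << (32 - prefix) ))   # prints: 256
```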
Likewise, you can also specify a destination port that the 203.0.113.0/24 subnet is allowed to connect to. As an example, we'll use port 22 (SSH):
- sudo ufw allow from 203.0.113.0/24 to any port 22
If you want to create a firewall rule that only applies to a specific network interface, you can do so by specifying "allow in on" followed by the name of the network interface.
You may want to look up your network interfaces before continuing. To do so, use this command:
- ip addr
Output excerpt
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
. . .
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
. . .
The highlighted output indicates the network interface names. They are typically named something like eth0 or enp3s2.
So, if your server has a public network interface called eth0, you could allow HTTP traffic (port 80) to it with this command:
- sudo ufw allow in on eth0 to any port 80
Doing so would allow your server to receive HTTP requests from the public internet.
Or, if you want your MySQL database server (port 3306) to listen for connections on a private network interface (eth1, for example), you could use this command:
- sudo ufw allow in on eth1 to any port 3306
This would allow other servers on your private network to connect to your MySQL database.
If you haven't changed the default policy for incoming connections, UFW is configured to deny all incoming connections. This simplifies the process of creating a secure firewall policy by requiring you to create rules that explicitly allow specific ports and IP addresses through.
However, sometimes you will want to deny specific connections based on the source IP address or subnet, perhaps because you know that your server is being attacked from there. Also, if you were to change your default incoming policy to allow (which is not recommended), you would need to create deny rules for any services or IP addresses that you don't want to allow connections for.
To write deny rules, you can use the commands described above, replacing allow with deny.
For example, to deny HTTP connections, you could use this command:
- sudo ufw deny http
Or if you want to deny all connections from 203.0.113.4, you could use this command:
- sudo ufw deny from 203.0.113.4
Now let's take a look at how to delete rules.
Knowing how to delete firewall rules is just as important as knowing how to create them. There are two different ways to specify which rules to delete: by rule number or by the actual rule (similar to how the rules were specified when they were created). We'll start with the delete-by-rule-number method because it is easier.
If you're using the rule number to delete firewall rules, the first thing you'll want to do is get a list of your firewall rules. The UFW status command has an option to display numbers next to each rule, as demonstrated here:
- sudo ufw status numbered
Numbered Output:Status: active
To Action From
-- ------ ----
[ 1] 22 ALLOW IN 15.15.15.0/24
[ 2] 80 ALLOW IN Anywhere
If we decide that we want to delete rule 2, the one that allows port 80 (HTTP) connections, we can specify it in a UFW delete command like this:
- sudo ufw delete 2
This would show a confirmation prompt, then delete rule 2, which allows HTTP connections. Note that if you have IPv6 enabled, you would want to delete the corresponding IPv6 rule as well.
The alternative to rule numbers is to specify the actual rule to delete. For example, if you want to remove the allow http rule, you could use this command:
- sudo ufw delete allow http
You could also specify the rule as allow 80, instead of by service name:
- sudo ufw delete allow 80
This method will delete both IPv4 and IPv6 rules, if they exist.
At any time, you can check the status of UFW with this command:
- sudo ufw status verbose
If UFW is disabled, which it is by default, you'll see something like this:
OutputStatus: inactive
If UFW is active, which it should be if you followed Step 3, the output will say that it's active and it will list any rules that are set. For example, if the firewall is set to allow SSH (port 22) connections from anywhere, the output might look something like this:
OutputStatus: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp ALLOW IN Anywhere
Use the status command if you want to check how UFW has configured the firewall.
If you decide you don't want to use UFW, you can disable it with this command:
- sudo ufw disable
Any rules that you created with UFW will no longer be active. You can always run sudo ufw enable if you need to activate it later.
If you already have UFW rules configured but you decide that you want to start over, you can use the reset command:
- sudo ufw reset
This will disable UFW and delete any rules that were previously defined. Keep in mind that the default policies won't revert to their original settings if you modified them at any point. This should give you a fresh start with UFW.
Your firewall is now configured to allow (at least) SSH connections. Be sure to allow any other incoming connections that your server needs, while limiting any unnecessary connections, so your server will be both functional and secure.
To learn about more common UFW configurations, check out the UFW Essentials: Common Firewall Rules and Commands tutorial.
The Apache HTTP server is the most widely used web server in the world. It provides many powerful features, including dynamically loadable modules, robust media support, and extensive integration with other popular software.
In this guide, we'll explain how to install an Apache web server on your Debian 9 server.
Before you begin this guide, you should have a regular, non-root user with sudo privileges configured on your server. Additionally, you will need to enable a basic firewall to block non-essential ports. You can learn how to configure a regular user account and set up a firewall for your server by following our initial server setup guide for Debian 9.
When you have an account available, log in as your non-root user to begin.
Apache is available within Debian's default software repositories, making it possible to install it using conventional package management tools.
Let's begin by updating the local package index to reflect the latest upstream changes:
- sudo apt update
Then, install the apache2 package:
- sudo apt install apache2
After confirming the installation, apt will install Apache and all required dependencies.
Before testing Apache, it's necessary to modify the firewall settings to allow outside access to the default web ports. Assuming that you followed the instructions in the prerequisites, you should have a UFW firewall configured to restrict access to your server.
During installation, Apache registers itself with UFW to provide a few application profiles that can be used to enable or disable access to Apache through the firewall.
List the ufw application profiles by typing:
- sudo ufw app list
You will see a list of the application profiles:
OutputAvailable applications:
AIM
Bonjour
CIFS
. . .
WWW
WWW Cache
WWW Full
WWW Secure
. . .
The Apache profiles begin with WWW:
It is recommended that you enable the most restrictive profile that will still allow the traffic you've configured. Since we haven't configured SSL for our server yet in this guide, we only need to allow traffic on port 80:
- sudo ufw allow 'WWW'
You can verify the change by typing:
- sudo ufw status
You should see HTTP traffic allowed in the displayed output:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
WWW ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
WWW (v6) ALLOW Anywhere (v6)
As you can see, the profile has been activated to allow access to the web server.
At the end of the installation process, Debian 9 starts Apache. The web server should already be up and running.
Check with the systemd init system to make sure the service is running by typing:
- sudo systemctl status apache2
Output● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 19:21:48 UTC; 13min ago
Main PID: 12849 (apache2)
CGroup: /system.slice/apache2.service
├─12849 /usr/sbin/apache2 -k start
├─12850 /usr/sbin/apache2 -k start
└─12852 /usr/sbin/apache2 -k start
Sep 05 19:21:48 apache systemd[1]: Starting The Apache HTTP Server...
Sep 05 19:21:48 apache systemd[1]: Started The Apache HTTP Server.
As you can see from this output, the service appears to have started successfully. However, the best way to test this is to actually request a page from Apache.
You can access the default Apache landing page to confirm that the software is running properly through your IP address. If you do not know your server's IP address, you can get it a few different ways from the command line.
Try typing this at your server's command prompt:
- hostname -I
You will get back a few addresses separated by spaces. You can try each in your web browser to see if they work.
An alternative is using the curl tool, which should give you your public IP address as seen from another location on the internet.
First, install curl using apt:
- sudo apt install curl
Then, use curl to retrieve icanhazip.com using IPv4:
- curl -4 icanhazip.com
When you have your server's IP address, enter it into your browser's address bar:
http://your_server_ip
You should see the default Debian 9 Apache web page:
This page indicates that Apache is working correctly. It also includes some basic information about important Apache files and directory locations.
Now that you have your web server up and running, let's go over some basic management commands.
To stop your web server, type:
- sudo systemctl stop apache2
To start the web server when it is stopped, type:
- sudo systemctl start apache2
To stop and then start the service again, type:
- sudo systemctl restart apache2
If you are simply making configuration changes, Apache can often reload without dropping connections. To do this, use this command:
- sudo systemctl reload apache2
By default, Apache is configured to start automatically when the server boots. If this is not what you want, disable this behavior by typing:
- sudo systemctl disable apache2
To re-enable the service to start up at boot, type:
- sudo systemctl enable apache2
Apache should now start automatically when the server boots again.
When using the Apache web server, you can use virtual hosts (similar to server blocks in Nginx) to encapsulate configuration details and host more than one domain from a single server. We will set up a domain called example.com, but you should replace this with your own domain name. To learn more about setting up a domain name with DigitalOcean, see our Introduction to DigitalOcean DNS.
Apache on Debian 9 has one server block enabled by default that is configured to serve documents from the /var/www/html directory. While this works well for a single site, it can become unwieldy if you are hosting multiple sites. Instead of modifying /var/www/html, let's create a directory structure within /var/www for our example.com site, leaving /var/www/html in place as the default directory to be served if a client request doesn't match any other sites.
Create the directory for example.com as follows, using the -p flag to create any necessary parent directories:
- sudo mkdir -p /var/www/example.com/html
Next, assign ownership of the directory with the $USER environment variable:
- sudo chown -R $USER:$USER /var/www/example.com/html
The permissions of your web roots should be correct if you haven't modified your umask value, but you can make sure by typing:
- sudo chmod -R 755 /var/www/example.com
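As an aside on the umask mentioned above: it masks permission bits off newly created files and directories, which is why fresh directories normally come out as 755 without any chmod. A small sketch in a throwaway directory, assuming the common Debian default mask of 0022:

```shell
# New directories start from mode 0777 and have the umask bits removed:
# 0777 & ~0022 = 0755.
workdir="$(mktemp -d)"
umask 0022
mkdir "$workdir/demo"
stat -c '%a' "$workdir/demo"   # prints: 755
```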
Next, create a sample index.html page using nano or your favorite editor:
- nano /var/www/example.com/html/index.html
Inside, add the following sample HTML:
<html>
<head>
<title>Welcome to Example.com!</title>
</head>
<body>
<h1>Success! The example.com virtual host is working!</h1>
</body>
</html>
Save and close the file when you are finished.
In order for Apache to serve this content, it's necessary to create a virtual host file with the correct directives. Instead of modifying the default configuration file located at /etc/apache2/sites-available/000-default.conf directly, let's make a new one at /etc/apache2/sites-available/example.com.conf:
- sudo nano /etc/apache2/sites-available/example.com.conf
Paste in the following configuration block, which is similar to the default but updated for our new directory and domain name:
<VirtualHost *:80>
ServerAdmin admin@example.com
ServerName example.com
ServerAlias www.example.com
DocumentRoot /var/www/example.com/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Notice that we've updated the DocumentRoot to our new directory and the ServerAdmin to an email address that the example.com site administrator can access. We've also added two directives: ServerName, which establishes the base domain that should match for this virtual host definition, and ServerAlias, which defines further names that should match as if they were the base name.
Save and close the file when you are finished.
Let's enable the file with the a2ensite tool:
- sudo a2ensite example.com.conf
Disable the default site defined in 000-default.conf:
- sudo a2dissite 000-default.conf
Next, let's test for configuration errors:
- sudo apache2ctl configtest
You should see the following output:
OutputSyntax OK
Restart Apache to implement your changes:
- sudo systemctl restart apache2
Apache should now be serving your domain name. You can test this by navigating to http://example.com, where you should see something like this:
Now that you know how to manage the Apache service itself, you should take a few minutes to familiarize yourself with a few important directories and files.
/var/www/html: the actual web content, which by default only consists of the default Apache page you saw earlier, is served out of the /var/www/html directory. This can be changed by altering Apache configuration files.
/etc/apache2: the Apache configuration directory. All of the Apache configuration files reside here.
/etc/apache2/apache2.conf: the main Apache configuration file. This can be modified to make changes to the Apache global configuration. This file is responsible for loading many of the other files in the configuration directory.
/etc/apache2/ports.conf: this file specifies the ports that Apache will listen on. By default, Apache listens on port 80 and additionally listens on port 443 when a module providing SSL capabilities is enabled.
/etc/apache2/sites-available/: the directory where per-site virtual hosts can be stored. Apache will not use the configuration files found in this directory unless they are linked to the sites-enabled directory. Typically, all server block configuration is done in this directory and then enabled by linking to the other directory with the a2ensite command.
/etc/apache2/sites-enabled/: the directory where enabled per-site virtual hosts are stored. Typically, these are created by linking to configuration files found in the sites-available directory with the a2ensite command. Apache reads the configuration files and links found in this directory when it starts or reloads to compile a complete configuration.
/etc/apache2/conf-available/, /etc/apache2/conf-enabled/: these directories have the same relationship as the sites-available and sites-enabled directories, but are used to store configuration fragments that do not belong in a virtual host. Files in the conf-available directory can be enabled with the a2enconf command and disabled with the a2disconf command.
/etc/apache2/mods-available/, /etc/apache2/mods-enabled/: these directories contain the available and enabled modules, respectively. Files ending in .load contain fragments to load specific modules, while files ending in .conf contain the configuration for those modules. Modules can be enabled and disabled using the a2enmod and a2dismod commands.
/var/log/apache2/access.log: by default, every request to your web server is recorded in this log file unless Apache is configured to do otherwise.
/var/log/apache2/error.log: by default, all errors are recorded in this file. The LogLevel directive in the Apache configuration specifies how much detail the error logs will contain.
Now that you have your web server installed, you have many options for the type of content to serve and the technologies you can use to create a richer experience.
Если вы хотите развернуть более полный комплекс приложений, ознакомьтесь с этой статьей Настройка набора LAMP в Debian 9.
Software version control systems help you track your software at the source level. With versioning tools, you can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Git is one of the most popular version control systems currently available. Many projects' files are maintained in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this guide, we will go over how to install and configure Git on a Debian 9 server. We will cover how to install the software in two different ways, each of which has its own benefits depending on your specific needs.
To complete this tutorial, you should have a non-root user with sudo privileges on a Debian 9 server. This setup is described in our initial server setup guide for Debian 9.
With your server and user set up, you are ready to begin.
One of the fastest ways to install Git is with Debian's default repositories. Note that the version you install via these repositories may be older than the newest version currently available. If you need the latest release, move on to the next section of this tutorial to learn how to install and compile Git from source.
First, use the apt package management tools to update your local package index. With the update complete, you can download and install Git:
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running this command:
- git --version
Outputgit version 2.11.0
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and give you some control over the options you include, if you wish to customize your installation.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages.
- sudo apt update
- sudo apt install make libssl-dev libghc-zlib-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
After you have installed the necessary dependencies, you can go ahead and get the version of Git you want by visiting the Git project's mirror on GitHub, available via the following URL:
https://github.com/git/git
From here, be sure that you are on the master branch. Click on the Tags link and select your desired Git version. Unless you have a reason for downloading a release candidate (marked as rc) version, try to avoid these, as they may be unstable.
Next, on the right side of the page, click on the Clone or download button, then right-click on Download ZIP and copy the link address that ends in .zip.
Back on your Debian 9 server, move into the tmp directory to download temporary files:
- cd /tmp
From there, you can use the wget command to fetch the zip file link that you copied. We'll specify a new name for the file: git.zip.
- wget https://github.com/git/git/archive/v2.18.0.zip -O git.zip
Unzip the file that you downloaded and move into the resulting directory:
- unzip git.zip
- cd git-*
Now, you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
To ensure that the installation was successful, you can type git --version and you should receive relevant output that specifies the currently installed version of Git.
Now that you have Git installed, if you want to upgrade to a later version, you can clone the repository and then build and install it. To find the URL to use for the clone operation, navigate to the branch or tag that you want on the project's GitHub page and then copy the clone URL on the right side of the page:
At the time of writing, the relevant URL is:
https://github.com/git/git.git
Change to your home directory, and use git clone on the URL you just copied:
- cd ~
- git clone https://github.com/git/git.git
This will create a new directory within your current directory where you can rebuild the package and reinstall the newer version, just as you did above. This will overwrite your older version with the new version:
- cd git
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
With this completed, you can be sure that your version of Git is up to date.
Now that you have Git installed, you should configure it so that the generated commit messages will contain your correct information.
This can be achieved by using the git config command. Specifically, we need to provide our name and email address because Git embeds this information into each commit we make. We can go ahead and add this information by typing:
- git config --global user.name "Sammy"
- git config --global user.email "sammy@domain.com"
We can see all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Sammy
user.email=sammy@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor, like this:
- nano ~/.gitconfig
[user]
name = Sammy
email = sammy@domain.com
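Besides listing everything, you can read back a single key with git config --get. Here is a small sketch that uses a throwaway HOME directory so it doesn't touch your real configuration; the Sammy values are just the example data from above:

```shell
# Write the two required keys into an isolated HOME, then read them back.
export HOME="$(mktemp -d)"
git config --global user.name "Sammy"
git config --global user.email "sammy@domain.com"
git config --get user.name    # prints: Sammy
git config --get user.email   # prints: sammy@domain.com
```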
There are many other options that you can set, but these are the two essential ones. If you skip this step, you'll likely see warnings when you commit to Git. This makes more work for you because you will then have to revise the commits you have made with the corrected information.
You have now installed Git and are ready to use it on your system.
To learn more about how to use Git, check out these articles and series:
Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They're similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.
For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.
In this tutorial, you'll install and use Docker Community Edition (CE) on Debian 9. You'll install Docker itself, work with containers and images, and push an image to a Docker repository.
To follow this tutorial, you will need the following:
The Docker installation package available in the official Debian repository may not be the latest version. To ensure we get the latest version, we'll install Docker from the official Docker repository. To do that, we'll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.
First, update your existing list of packages:
- sudo apt update
Next, install a few prerequisite packages which let apt use packages over HTTPS:
- sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common
Then add the GPG key for the official Docker repository to your system:
- curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
Add the Docker repository to APT sources:
- sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
Next, update the package database with the Docker packages from the newly added repo:
- sudo apt update
Make sure you are about to install from the Docker repo instead of the default Debian repo:
- apt-cache policy docker-ce
You'll see output like this, although the version number for Docker may be different:
docker-ce:
Installed: (none)
Candidate: 18.06.1~ce~3-0~debian
Version table:
18.06.1~ce~3-0~debian 500
500 https://download.docker.com/linux/debian stretch/stable amd64 Packages
Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Debian 9 (stretch).
Finally, install Docker:
- sudo apt install docker-ce
Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:
- sudo systemctl status docker
The output should be similar to the following, showing that the service is active and running:
Output● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago
Docs: https://docs.docker.com
Main PID: 21319 (dockerd)
CGroup: /system.slice/docker.service
├─21319 /usr/bin/dockerd -H fd://
└─21326 docker-containerd --config /var/run/docker/containerd/containerd.toml
Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.
By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get output like this:
Outputdocker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.
If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:
- sudo usermod -aG docker ${USER}
To apply the new group membership, log out of the server and back in, or type the following:
- su - ${USER}
You will be prompted to enter your user's password to continue.
Confirm that your user is now added to the docker group by typing:
- id -nG
Outputsammy sudo docker
If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:
- sudo usermod -aG docker username
В дальнейшем в статье подразумевается, что вы запускаете команду docker
от имени пользователя в группе docker. В обратном случае вам необходимо добавлять к командам префикс sudo
.
Давайте перейдем к знакомству с командой docker
.
Использование docker
подразумевает передачу ему цепочки опций и команд, за которыми следуют аргументы. Синтаксис имеет следующую форму:
- docker [option] [command] [arguments]
Чтобы просмотреть все доступные субкоманды, введите:
- docker
Для 18-й версии Docker полный список субкоманд выглядит следующим образом:
Output
attach Attach local standard input, output, and error streams to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save one or more images to a tar archive (streamed to STDOUT by default)
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
version Show the Docker version information
wait Block until one or more containers stop, then print their exit codes
To view the options available to a specific command, type:
- docker docker-subcommand --help
To view system-wide information about Docker, use:
- docker info
Let's explore some of these commands. We'll start by working with images.
Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you'll need will have images hosted there.
To check whether you can access and download images from Docker Hub, type:
- docker run hello-world
The following output indicates that Docker is working correctly:
OutputUnable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9db2ca6ccae0: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
Docker was initially unable to find the hello-world
image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image, and the application within the container executed, displaying the message.
You can search for images available on Docker Hub by using the docker
command with the search
subcommand. For example, to search for the Ubuntu image, type:
- docker search ubuntu
The script will crawl Docker Hub and return a listing of all images whose name matches the search string. In this case, the output will be similar to this:
OutputNAME DESCRIPTION STARS OFFICIAL AUTOMATED
ubuntu Ubuntu is a Debian-based Linux operating sys… 8320 [OK]
dorowu/ubuntu-desktop-lxde-vnc Ubuntu with openssh-server and NoVNC 214 [OK]
rastasheep/ubuntu-sshd Dockerized SSH service, built on top of offi… 170 [OK]
consol/ubuntu-xfce-vnc Ubuntu container with "headless" VNC session… 128 [OK]
ansible/ubuntu14.04-ansible Ubuntu 14.04 LTS with ansible 95 [OK]
ubuntu-upstart Upstart is an event-based replacement for th… 88 [OK]
neurodebian NeuroDebian provides neuroscience research s… 53 [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5 ubuntu-16-nginx-php-phpmyadmin-mysql-5 43 [OK]
ubuntu-debootstrap debootstrap --variant=minbase --components=m… 39 [OK]
nuagebec/ubuntu Simple always updated Ubuntu docker images w… 23 [OK]
tutum/ubuntu Simple Ubuntu docker images with SSH access 18
i386/ubuntu Ubuntu is a Debian-based Linux operating sys… 13
1and1internet/ubuntu-16-apache-php-7.0 ubuntu-16-apache-php-7.0 12 [OK]
ppc64le/ubuntu Ubuntu is a Debian-based Linux operating sys… 12
eclipse/ubuntu_jdk8 Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, … 6 [OK]
darksheer/ubuntu Base Ubuntu Image -- Updated hourly 4 [OK]
codenvy/ubuntu_jdk8 Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, … 4 [OK]
1and1internet/ubuntu-16-nginx-php-5.6-wordpress-4 ubuntu-16-nginx-php-5.6-wordpress-4 3 [OK]
pivotaldata/ubuntu A quick freshening-up of the base Ubuntu doc… 2
1and1internet/ubuntu-16-sshd ubuntu-16-sshd 1 [OK]
ossobv/ubuntu Custom ubuntu image from scratch (based on o… 0
smartentry/ubuntu ubuntu with smartentry 0 [OK]
1and1internet/ubuntu-16-healthcheck ubuntu-16-healthcheck 0 [OK]
pivotaldata/ubuntu-gpdb-dev Ubuntu images for GPDB development 0
paasmule/bosh-tools-ubuntu Ubuntu based bosh-cli 0 [OK]
...
In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you've identified the image that you would like to use, you can download it to your computer using the pull
subcommand.
Execute the following command to download the official ubuntu
image to your computer:
- docker pull ubuntu
You'll see output like this:
OutputUsing default tag: latest
latest: Pulling from library/ubuntu
6b98dfc16071: Pull complete
4001a1209541: Pull complete
6319fc68c576: Pull complete
b24603670dc3: Pull complete
97f170c87c6f: Pull complete
Digest: sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d
Status: Downloaded newer image for ubuntu:latest
After an image has been downloaded, you can then run a container using the downloaded image with the run
subcommand. As you saw with the hello-world
example, if an image has not been downloaded when docker
is executed with the run
subcommand, the Docker client will first download the image, then run a container using it.
To see the images that have been downloaded to your computer, type:
- docker images
The output should look similar to the following:
OutputREPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 16508e5c265d 13 days ago 84.1MB
hello-world latest 2cb0d9787c4d 7 weeks ago 1.85kB
As you'll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which can then be uploaded (pushed, in Docker parlance) to Docker Hub or other Docker registries.
Let's look at how to run containers in more detail.
The hello-world
container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.
As an example, let's run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:
- docker run -it ubuntu
Your command prompt should change to reflect the fact that you're now working inside the container, and should take this form:
Outputroot@d9b100f2f636:/#
Note the container ID in the command prompt. In this example, it is d9b100f2f636
. You'll need that container ID later to identify the container when you want to remove it.
Now you can run any command inside the container. For example, let's update the package database inside the container. You don't need to prefix any command with sudo
, because you're operating inside the container as the root user:
- apt update
Then install any application in it. Let's install Node.js:
- apt install nodejs
This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:
- node -v
You'll see the version number displayed in your terminal:
Outputv8.10.0
Any changes you make inside the container only apply to that container.
To exit the container, type exit
at the prompt.
Let's look at managing containers next.
After using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:
- docker ps
You will see output similar to the following:
OutputCONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
In this tutorial, you started two containers: one from the hello-world
image and another from the ubuntu
image. Both containers are no longer running, but they still exist on your system.
To view all containers — active and inactive, run docker ps
with the -a
switch:
- docker ps -a
You'll see output similar to this:
OutputCONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 8 minutes ago                        sharp_volhard
01c950718166 hello-world "/hello" About an hour ago Exited (0) About an hour ago festive_williams
To view the latest container you created, pass it the -l
switch:
- docker ps -l
OutputCONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 10 minutes ago                       sharp_volhard
To start a stopped container, use docker start
, followed by the container ID or the container's name. Let's start the Ubuntu-based container with the ID of d9b100f2f636
:
- docker start d9b100f2f636
The container will start, and you can use docker ps
to see its status:
OutputCONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
d9b100f2f636 ubuntu "/bin/bash" About an hour ago Up 8 seconds sharp_volhard
To stop a running container, use docker stop
, followed by the container ID or name. This time, we'll use the name that Docker assigned the container, which is sharp_volhard
:
- docker stop sharp_volhard
Once you've decided you no longer need a container, remove it with the docker rm
command, again using either the container ID or the name. Use the docker ps -a
command to find the ID or name of the container associated with the hello-world
image and remove it:
- docker rm festive_williams
You can start a new container and give it a name using the --name
switch. You can also use the --rm
switch to create a container that removes itself when it's stopped. See the docker run --help
command for more information on these and other options.
Containers can be turned into images which you can use to build new containers. Let's look at how that works.
When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm
command, the changes will be lost for good.
This section shows you how to save the state of a container as a new Docker image.
After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it. You may want to reuse this Node.js container as the basis for new images later.
To do that, commit the changes to a new Docker image instance using the following command:
- docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name
The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id
is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository
is usually your Docker Hub username.
For example, for the user sammy, with the container ID of d9b100f2f636
, the command would be:
- docker commit -m "added Node.js" -a "sammy" d9b100f2f636 sammy/ubuntu-nodejs
When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry like Docker Hub so others can access it.
Listing the Docker images again will show the new image, as well as the old one that it was derived from:
- docker images
The output should look like this:
OutputREPOSITORY TAG IMAGE ID CREATED SIZE
sammy/ubuntu-nodejs latest 7c1f35226ca6 7 seconds ago 179MB
ubuntu latest 113a43faa138 4 weeks ago 81.2MB
hello-world latest e38bc07ac18e 2 months ago 1.85kB
In this example, ubuntu-nodejs
is the new image, which was derived from the existing ubuntu
image from Docker Hub. The size difference reflects the changes that were made, and in this example, the change was that NodeJS was installed. So next time you need to run a container using Ubuntu with NodeJS pre-installed, you can just use the new image.
You can also build images from a Dockerfile
, which lets you automate the installation of software in a new image. However, that's outside the scope of this tutorial.
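For comparison only, a minimal Dockerfile that would produce an image similar to the one committed above might look like the sketch below; the base tag and package name are assumptions mirroring the steps in this tutorial, and Dockerfiles are covered in other guides:

```Dockerfile
# Start from the same base image used earlier in this tutorial
FROM ubuntu:latest

# Install Node.js non-interactively, mirroring the apt commands
# that were run inside the interactive container
RUN apt-get update && apt-get install -y nodejs
```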
Now let's share the new image with others so they can create containers from it.
The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.
This section shows you how to push a Docker image to Docker Hub. To learn how to create your own private Docker registry, check out How To Set Up a Private Docker Registry on Ubuntu 14.04.
To push your image, first log into Docker Hub:
- docker login -u docker-registry-username
You'll be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.
Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:
- docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs
Then you may push your own image using:
- docker push docker-registry-username/docker-image-name
To push the ubuntu-nodejs
image to the sammy repository, the command would be:
- docker push sammy/ubuntu-nodejs
The process may take some time to complete as it uploads the images, but when completed, the output will look like this:
OutputThe push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Pushed
5f70bf18a086: Pushed
a3b5c80a4eba: Pushed
7f18b442972b: Pushed
3ce512daaf78: Pushed
7aae4540b42d: Pushed
...
After pushing an image to a registry, it should be listed on your account's dashboard, like in the image below.
If a push attempt results in an error of this sort, then you likely did not log in:
OutputThe push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required
Log in with docker login
and repeat the push attempt. Then verify that the image appears on your Docker Hub repository page.
You can now use docker pull sammy/ubuntu-nodejs
to pull the image to a new machine and use it to run a new container.
In this tutorial you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub. Now that you know the basics, explore the other Docker tutorials in the community.
Virtual Network Computing, or VNC, is a connection system that allows you to use your keyboard and mouse to interact with a graphical desktop environment on a remote server. It makes managing files, software, and settings on a remote server easier for users who aren't yet comfortable with the command line.
In this guide, you'll set up a VNC server on a Debian 9 server and connect to it securely through an SSH tunnel. You'll use TightVNC, a fast and lightweight remote control package. This choice will ensure that the VNC connection is smooth and stable even on slower internet connections.
To complete this tutorial, you'll need one Debian 9 server with a non-root user who has sudo
privileges.
By default, a Debian 9 server does not come with a graphical desktop environment or a VNC server installed, so we'll begin by installing those. Specifically, we'll install packages for the latest Xfce desktop environment and the TightVNC package available from the official Debian repository.
Update your list of packages:
- sudo apt update
Install the Xfce desktop environment on your server:
- sudo apt install xfce4 xfce4-goodies
During the installation, you'll be prompted to select your keyboard layout from a list of options. Choose the one that's appropriate for your language and press ENTER
. The installation will continue.
Once the installation completes, install the TightVNC server:
- sudo apt install tightvncserver
To complete the VNC server's initial configuration after installation, use the vncserver
command to set up a secure password and create the initial configuration files:
- vncserver
You'll be prompted to enter and verify a password to access your machine remotely:
OutputYou will require a password to access your desktops.
Password:
Verify:
The password must be between six and eight characters long; passwords longer than eight characters will be truncated automatically.
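To illustrate what that truncation means in practice, this small shell sketch shows the effective password for an over-long input (the sample password is made up):

```shell
# TightVNC truncates passwords longer than 8 characters;
# only the first 8 characters are significant when logging in.
password="correcthorsebatterystaple"   # hypothetical over-long password
effective=$(printf '%s' "$password" | cut -c1-8)
echo "$effective"
```

Here the stored password is effectively `correcth`, so choose a password that is strong within its first eight characters.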
Once you verify the password, you'll have the option to create a view-only password. Users who log in with the view-only password will not be able to control the VNC instance with their mouse or keyboard. This is a helpful option if you want to demonstrate something to other people using your VNC server, but it's not required.
The process then creates the necessary default configuration files and connection information for the server:
OutputWould you like to enter a view-only password (y/n)? n
xauth: file /home/sammy/.Xauthority does not exist
New 'X' desktop is your_hostname:1
Creating default startup script /home/sammy/.vnc/xstartup
Starting applications specified in /home/sammy/.vnc/xstartup
Log file is /home/sammy/.vnc/your_hostname:1.log
Next, let's configure the VNC server.
The VNC server needs to know which commands to execute when it starts up. Specifically, VNC needs to know which graphical desktop it should connect to.
These commands are located in a configuration file called xstartup
in the .vnc
folder under your home directory. The startup script was created when you ran the vncserver
command in the previous step, but we'll create our own to launch the Xfce desktop.
When VNC is first set up, it launches a default server instance on port 5901
. This port is called a display port, and is referred to by VNC as :1
. VNC can launch multiple instances on other display ports, like :2
, :3
, and so on.
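The mapping between display numbers and TCP ports is simple arithmetic, as this sketch shows:

```shell
# VNC display :N listens on TCP port 5900 + N,
# so the default display :1 corresponds to port 5901.
display=1
port=$((5900 + display))
echo "display :${display} -> port ${port}"
```

This prints `display :1 -> port 5901`; display `:2` would map to `5902`, and so on.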
Because we're going to be changing how the VNC server is configured, first stop the VNC server instance that is running on port 5901
with the following command:
- vncserver -kill :1
The output should look like this, although you'll see a different PID:
OutputKilling Xtightvnc process ID 17648
Before you modify the xstartup
file, back up the original:
- mv ~/.vnc/xstartup ~/.vnc/xstartup.bak
Now create a new xstartup
file and open it in a text editor:
- nano ~/.vnc/xstartup
Commands in this file are executed automatically whenever you start or restart the VNC server. We need VNC to start our desktop environment if it's not already started. Add these commands to the file:
~/.vnc/xstartup#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &
The first command in the file, xrdb $HOME/.Xresources
, tells VNC's GUI framework to read the server user's .Xresources
file. .Xresources
is where a user can make changes to certain settings of the graphical desktop, like terminal colors, cursor themes, and font rendering. The second command tells the server to launch Xfce, which is where you'll find all the graphical software you need to comfortably manage your server.
To ensure that the VNC server can use this new startup file, make it executable:
- sudo chmod +x ~/.vnc/xstartup
Now, restart the VNC server:
- vncserver
You'll see output similar to this:
OutputNew 'X' desktop is your_hostname:1
Starting applications specified in /home/sammy/.vnc/xstartup
Log file is /home/sammy/.vnc/your_hostname:1.log
With the configuration in place, let's connect to the server from our local machine.
VNC itself doesn't use secure protocols when connecting. We'll use an SSH tunnel to connect securely to our server, then tell our VNC client to use that tunnel rather than making a direct connection.
Create an SSH connection on your local computer that securely forwards to the localhost
connection for VNC. You can do this via the terminal on Linux or macOS with the following command:
- ssh -L 5901:127.0.0.1:5901 -C -N -l sammy your_server_ip
The -L
switch specifies the port bindings. In this case we're binding port 5901
of the remote connection to port 5901
on your local machine. The -C
switch enables compression, while the -N
switch tells ssh
that we don't want to execute a remote command. The -l
switch specifies the remote login name.
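To make the structure of the -L
argument concrete, this sketch pulls apart the localport:remotehost:remoteport specification used above:

```shell
# The -L forwarding spec has the form localport:remotehost:remoteport.
spec="5901:127.0.0.1:5901"
localport=${spec%%:*}        # text before the first colon
remoteport=${spec##*:}       # text after the last colon
rest=${spec#*:}
remotehost=${rest%:*}        # the middle field
echo "forward local :${localport} -> ${remotehost}:${remoteport}"
```

This prints `forward local :5901 -> 127.0.0.1:5901`: traffic sent to port 5901 on your machine is carried through the tunnel and delivered to 127.0.0.1:5901 as seen from the server.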
Remember to replace sammy
and your_server_ip
with your non-root sudo username and the IP address of your server.
If you're using a graphical SSH client, like PuTTY, use your_server_ip
as the connection IP, and set localhost:5901
as a new forwarded port in the program's SSH tunnel settings.
Once the tunnel is running, use a VNC client to connect to localhost:5901
. You'll be prompted to authenticate using the password you set in Step 1.
Once you're connected, you'll see the default Xfce desktop.
Select Use default config to configure your desktop quickly.
You can access files in your home directory with the file manager or from the command line, as seen here:
Press CTRL+C
in your local terminal to stop the SSH tunnel and return to your prompt. This will disconnect your VNC session as well.
Next, let's set up the VNC server as a service.
We'll set up the VNC server as a systemd service so we can start, stop, and restart it as needed, like any other service. This will also ensure that VNC starts up when your server reboots.
First, create a new unit file called /etc/systemd/system/vncserver@.service
using your favorite text editor:
- sudo nano /etc/systemd/system/vncserver@.service
The @
symbol at the end of the name will let us pass in an argument we can use in the service configuration. We'll use this to specify the VNC display port we want to use when we manage the service.
Add the following lines to the file. Be sure to change the values of User, Group, and WorkingDirectory, and the username in the value of PIDFile, to match your own username:
/etc/systemd/system/vncserver@.service[Unit]
Description=Start TightVNC server at startup
After=syslog.target network.target
[Service]
Type=forking
User=sammy
Group=sammy
WorkingDirectory=/home/sammy
PIDFile=/home/sammy/.vnc/%H:%i.pid
ExecStartPre=-/usr/bin/vncserver -kill :%i > /dev/null 2>&1
ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :%i
ExecStop=/usr/bin/vncserver -kill :%i
[Install]
WantedBy=multi-user.target
The ExecStartPre
command stops VNC if it's already running. The ExecStart
command starts VNC and sets the color depth to 24-bit color with a resolution of 1280x800. You can modify these startup options to meet your needs.
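systemd expands %i in a template unit to the instance name after the @, so vncserver@1.service runs with %i set to 1. As a sketch, here is the command that ExecStart line produces for that instance:

```shell
# For vncserver@1.service, systemd substitutes %i with "1",
# yielding the vncserver invocation below.
instance=1   # the value systemd would pass as %i
cmd="/usr/bin/vncserver -depth 24 -geometry 1280x800 :${instance}"
echo "$cmd"
```

Starting vncserver@2.service instead would substitute `2`, launching VNC on display `:2` (port 5902).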
Save and close the file.
Next, make the system aware of the new unit file:
- sudo systemctl daemon-reload
Enable the unit file:
- sudo systemctl enable vncserver@1.service
The 1
following the @ sign signifies which display number the service should appear over, in this case the default :1
, as discussed in Step 2.
Stop the current instance of the VNC server if it's still running:
- vncserver -kill :1
Then start it as you would start any other systemd service:
- sudo systemctl start vncserver@1
You can verify that it started with this command:
- sudo systemctl status vncserver@1
If it started correctly, the output should look like this:
Output● vncserver@1.service - Start TightVNC server at startup
Loaded: loaded (/etc/systemd/system/vncserver@.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 16:47:40 UTC; 3s ago
Process: 4977 ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :1 (code=exited, status=0/SUCCESS)
Process: 4971 ExecStartPre=/usr/bin/vncserver -kill :1 > /dev/null 2>&1 (code=exited, status=0/SUCCESS)
Main PID: 4987 (Xtightvnc)
...
Your VNC server will now be available when you reboot the machine.
Start your SSH tunnel again:
- ssh -L 5901:127.0.0.1:5901 -C -N -l sammy your_server_ip
Then make a new connection using your VNC client software to localhost:5901
to connect to your machine.
You now have a secured VNC server up and running on your Debian 9 server. You'll be able to manage your files, software, and settings with an easy-to-use and familiar graphical interface, and you'll be able to run graphical software like web browsers remotely.
myhostname = dropletname.local domain #I tried putting americanpharoah.ca here but then I can no longer relay the emails to root to me
myorigin = /etc/mailname
mydestination = americanpharoah.ca, $myorigin, $myhostname, localhost.americanpharoah.ca, localhost
.cert
file and .xml
file?
Below is my nginx server configuration:
/etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
/etc/nginx/conf.d/default.conf
server {
server_name < hostname >;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
root /var/www/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
I’ve no clue how to configure saml on that. Any ideas?
SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a Debian server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
In this guide, we'll focus on setting up SSH keys for a vanilla Debian 9 installation. SSH keys provide an easy, secure way of logging into your server and are recommended for all users.
The first step is to create a key pair on the client machine (usually your computer):
- ssh-keygen
By default, ssh-keygen
will create a 2048-bit RSA key pair, which is secure enough for most use cases (you may optionally pass in the -b 4096
flag to create a larger 4096-bit key).
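If you want to see what gets generated before touching your real keys, this sketch creates a throwaway key pair non-interactively in a temporary directory. The empty passphrase and temp-dir path here are for illustration only, not a recommendation for real keys:

```shell
# Create a disposable 2048-bit RSA key pair with no passphrase,
# in a temp directory so it doesn't touch ~/.ssh.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$tmpdir/id_rsa" -q
files=$(ls "$tmpdir")
echo "$files"
rm -rf "$tmpdir"   # clean up the throwaway keys
```

The listing shows the two halves of the pair: `id_rsa` (the private key, which never leaves your machine) and `id_rsa.pub` (the public key, which you'll copy to the server).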
After entering the command, you should see the following output:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):
Press ENTER to save the key pair into the .ssh/
subdirectory in your home directory, or specify an alternate path.
If you had previously generated an SSH key pair, you may see the following prompt:
Output/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes, as this is a destructive process that cannot be reversed.
You should then see the following prompt:
OutputEnter passphrase (empty for no passphrase):
Here you can optionally enter a secure passphrase, which is highly recommended. A passphrase adds an additional layer of security to prevent unauthorized users from logging in. To learn more about security, consult our tutorial on How To Configure SSH Key-Based Authentication on a Linux Server.
You should then see the following output:
OutputYour identification has been saved in /your_home/.ssh/id_rsa.
Your public key has been saved in /your_home/.ssh/id_rsa.pub.
The key fingerprint is:
a9:49:2e:2a:5e:33:3e:a9:de:4e:77:11:58:b6:90:26 username@remote_host
The key's randomart image is:
+--[ RSA 2048]----+
| ..o |
| E o= . |
| o. o |
| .. |
| ..S |
| o o. |
| =o.+. |
|. =++.. |
|o=++. |
+-----------------+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH-key-based authentication to log in.
The quickest way to copy your public key to the Debian host is to use a utility called ssh-copy-id
. Due to its simplicity, this method is highly recommended if available. If you do not have ssh-copy-id
available on your client machine, you may use one of the two alternate methods provided in this section (copying via password-based SSH, or manually copying the key).
ssh-copy-id
The ssh-copy-id
tool is included by default in many operating systems, so you may have it available on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you simply need to specify the remote host that you would like to connect to and the user account that you have password-based SSH access to. This is the account to which your public SSH key will be copied.
The syntax is:
- ssh-copy-id username@remote_host
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER to continue.
Next, the utility will scan your local account for the id_rsa.pub key that we created earlier. When it finds the key, it will prompt you for the password of the remote user’s account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Type in the password (your keystrokes will not be displayed, for security purposes) and press ENTER. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub key into a file named authorized_keys in the remote account’s home ~/.ssh directory.
You should see the following output:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub key has been uploaded to the remote account. You can continue on to Step 3.
If you do not have ssh-copy-id available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.
We can do this by using the cat command to read the contents of the public SSH key on our local computer and piping that through an SSH connection to the remote server.
On the other side, we can make sure that the ~/.ssh directory exists and has the correct permissions under the account we’re using.
We can then output the content we piped over into a file called authorized_keys within this directory. We’ll use the >> redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying previously added keys.
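The difference between >> and > can be sketched with a quick example in a throwaway directory (the key strings below are placeholders, not real keys):

```shell
# Demo in a scratch directory: '>>' appends, while '>' would overwrite.
demo_dir=$(mktemp -d)
echo "ssh-rsa AAAA...first-key user@host1" >> "$demo_dir/authorized_keys"
echo "ssh-rsa AAAA...second-key user@host2" >> "$demo_dir/authorized_keys"
# Both keys survive because each write appended instead of replacing:
wc -l < "$demo_dir/authorized_keys"   # → 2
```

Had we used a single `>` for the second write, the first key would have been destroyed and its owner locked out.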
The full command looks like this:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER to continue.
Afterwards, you should be prompted to enter the remote user account’s password:
Outputusername@203.0.113.1's password:
After entering your password, the content of your id_rsa.pub key will be copied to the end of the authorized_keys file of the remote user’s account. Continue on to Step 3 if this was successful.
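If you want to see what the remote side of that pipeline does without touching a real server, here is a local dry run that uses a scratch directory in place of the remote account’s home (no SSH involved; the key string is a placeholder):

```shell
# Simulate the remote half of the pipeline: the piped key arrives on stdin,
# the directory and file are created with locked-down permissions, and the
# key is appended to authorized_keys.
fake_home=$(mktemp -d)
echo "ssh-rsa AAAA...placeholder-key demo@test" | \
  sh -c "mkdir -p $fake_home/.ssh && touch $fake_home/.ssh/authorized_keys && chmod -R go= $fake_home/.ssh && cat >> $fake_home/.ssh/authorized_keys"
ls -ld "$fake_home/.ssh"
cat "$fake_home/.ssh/authorized_keys"
```

On the real server, the only difference is that `ssh username@remote_host` carries the piped content across the network before the same command sequence runs.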
If you do not have password-based SSH access to your server available, you will have to complete the process above manually.
We will manually append the content of your id_rsa.pub file to the ~/.ssh/authorized_keys file on your remote machine.
To display the content of your id_rsa.pub key, type this into your local computer:
- cat ~/.ssh/id_rsa.pub
You will see the key’s content, which should look something like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test
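A public key line has three space-separated fields: the key type, the base64-encoded key body, and a comment. As an illustration, you can split a sample line (a shortened placeholder, not a usable key) with awk:

```shell
# Pull the first and third fields out of a public key line.
key="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6M...truncated demo@test"
key_type=$(echo "$key" | awk '{print $1}')
key_comment=$(echo "$key" | awk '{print $3}')
echo "$key_type"     # → ssh-rsa
echo "$key_comment"  # → demo@test
```

The comment field (here `demo@test`) is cosmetic, but keeping it helps you identify which key belongs to which user and machine once authorized_keys holds several entries.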
Access your remote host using whichever method you have available.
Once you have access to your account on the remote server, you should make sure the ~/.ssh directory exists. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys file within this directory. You can add the contents of your id_rsa.pub file to the end of the authorized_keys file, creating it if necessary, using this command:
- echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute the public_key_string with the output from the cat ~/.ssh/id_rsa.pub command that you executed on your local system. It should start with ssh-rsa AAAA....
Finally, let’s ensure that the ~/.ssh directory and authorized_keys file have the appropriate permissions set:
- chmod -R go= ~/.ssh
This recursively removes all “group” and “other” permissions for the ~/.ssh/ directory.
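To see exactly what this does to the permission bits, here is a small sketch in a scratch directory (assuming GNU stat, as found on Debian):

```shell
# Start from deliberately over-permissive modes, then strip group/other bits.
scratch=$(mktemp -d)
mkdir -p "$scratch/.ssh"
touch "$scratch/.ssh/authorized_keys"
chmod 755 "$scratch/.ssh"                  # too open: group/other can read
chmod 644 "$scratch/.ssh/authorized_keys"  # too open: group/other can read
chmod -R go= "$scratch/.ssh"               # remove all group/other permissions
stat -c '%a %n' "$scratch/.ssh" "$scratch/.ssh/authorized_keys"
# → 700 for the directory, 600 for the file
```

This matters because sshd refuses to use keys from an authorized_keys file that other users could have tampered with.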
If you are using the root account to set up keys for a user account, it is also important that the ~/.ssh directory belongs to the user and not to root:
- chown -R sammy:sammy ~/.ssh
In this tutorial our user is named sammy, but you should substitute the appropriate username into the above command.
We can now attempt passwordless authentication with our Debian server.
If you have successfully completed one of the procedures above, you should be able to log into the remote host without the remote account’s password.
The basic process is the same:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Type “yes” and then press ENTER to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created it, you will be prompted to enter it now (note that your keystrokes will not display in the terminal session, as a security measure). After authenticating, a new shell session should open for you with the configured account on the Debian server.
If key-based authentication was successful, continue on to learn how to further secure your system by disabling password authentication.
If you were able to log into your account using SSH without a password, you have successfully configured SSH-key-based authentication for your account. However, your password-based authentication mechanism is still active, which means your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you have SSH-key-based authentication configured for the root account on this server, or preferably, SSH-key-based authentication configured for a non-root account on this server with sudo privileges. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is crucial.
Once you have confirmed that your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo privileges. Then, open the SSH daemon’s configuration file:
- sudo nano /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication. It may be commented out. Uncomment the line and set the value to “no”. This will disable your ability to log in via SSH using account passwords:
...
PasswordAuthentication no
...
When you are finished, save and close the file by pressing CTRL + X, then Y to confirm saving, and finally ENTER to exit nano. To actually apply these changes, we need to restart the sshd service:
- sudo systemctl restart ssh
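As an extra check, you can confirm the directive is set the way you expect. The sketch below greps a stand-in config file so it can run anywhere; on the real server you would grep /etc/ssh/sshd_config instead (and sudo sshd -t will validate the file’s syntax):

```shell
# Build a two-line stand-in for sshd_config and read back the active value.
sshd_conf=$(mktemp)
printf '%s\n' 'Port 22' 'PasswordAuthentication no' > "$sshd_conf"
grep -E '^PasswordAuthentication' "$sshd_conf"
# → PasswordAuthentication no
```

If the grep prints `PasswordAuthentication yes` (or nothing at all, meaning the compiled-in default applies), password logins are still possible and the edit above needs revisiting.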
As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing this session:
- ssh username@remote_host
Once you have verified your SSH service, you can safely close all current server sessions.
The SSH daemon on your Debian server now only responds to SSH keys. Password-based authentication has been successfully disabled.
You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.
If you’d like to learn more about working with SSH, take a look at our SSH Essentials guide.
Want to access the Internet safely and securely from your smartphone or laptop when connected to an untrusted network, such as the WiFi of a hotel or coffee shop? A Virtual Private Network (VPN) allows you to traverse untrusted networks privately and securely as if you were on a private network. The traffic emerges from the VPN server and continues its journey to the destination.
When combined with HTTPS connections, this setup allows you to secure your wireless logins and transactions. You can circumvent geographical restrictions and censorship, and shield your location and any unencrypted HTTP traffic from the untrusted network.
OpenVPN is a full-featured, open-source Secure Socket Layer (SSL) VPN solution that accommodates a wide range of configurations. In this tutorial, you will set up an OpenVPN server on a Debian 9 server and then configure access to it from Windows, macOS, iOS and/or Android. This tutorial will keep the installation and configuration steps as simple as possible for each of these setups.
Note: If you plan to set up an OpenVPN server on a DigitalOcean Droplet, be aware that we, like many hosting providers, charge for bandwidth overages. For this reason, please be mindful of how much traffic your server is handling.
See this page for more info.
To complete this tutorial, you will need access to a Debian 9 server to host your OpenVPN service. You will need to configure a non-root user with sudo privileges before you start this guide. You can follow our Debian 9 initial server setup guide to set up a user with the appropriate permissions. The linked tutorial will also set up a firewall, which this guide assumes is in place throughout.
Additionally, you will need a separate machine to serve as your certificate authority (CA). While it is technically possible to use your OpenVPN server or your local machine as your CA, this is not recommended, as it opens up your VPN to some security vulnerabilities. Per the official OpenVPN documentation, you should place your CA on a standalone machine dedicated to importing and signing certificate requests. For this reason, this guide assumes that your CA is on a separate Debian 9 server that also has a non-root user with sudo privileges and a basic firewall.
Please note that if you disable password authentication while configuring these servers, you may run into difficulties when transferring files between them later in this guide. To resolve this issue, you could re-enable password authentication on each server. Alternatively, you could generate an SSH keypair for each server, then add the OpenVPN server’s public SSH key to the CA machine’s authorized_keys file and vice versa. See How To Set Up SSH Keys on Debian 9 for instructions on either of these solutions.
When you have these prerequisites in place, move on to Step 1 of this tutorial.
To start, update your VPN server’s package index and install OpenVPN. OpenVPN is available in Debian’s default repositories, so you can use apt for the installation:
- sudo apt update
- sudo apt install openvpn
OpenVPN is a TLS/SSL VPN. This means that it uses certificates to encrypt traffic between the server and clients. To issue trusted certificates, you will set up your own simple certificate authority (CA). To do this, we will download the latest version of EasyRSA, which we will use to build our CA public key infrastructure (PKI), from the project’s official GitHub repository.
As mentioned in the prerequisites, we will build the CA on a standalone server. The reason for this approach is that, if an attacker were able to infiltrate your server, they would be able to access your CA private key and use it to sign new certificates, giving them access to your VPN. Accordingly, managing the CA from a standalone machine helps prevent unauthorized users from accessing your VPN. Note also that it is recommended that you keep the CA server turned off when it is not being used to sign keys, as a further precautionary measure.
To begin building the CA and PKI infrastructure, use wget to download the latest version of EasyRSA on both your CA machine and your OpenVPN server. To get the latest version, go to the Releases page on the official EasyRSA GitHub project, copy the download link for the file ending in .tgz, and then paste it into the following command:
- wget -P ~/ https://github.com/OpenVPN/easy-rsa/releases/download/v3.0.4/EasyRSA-3.0.4.tgz
Then extract the tarball:
- cd ~
- tar xvf EasyRSA-3.0.4.tgz
You have successfully installed all of the required software on your server and CA machine. Continue on to configure the variables used by EasyRSA and to set up a CA directory, from which you will generate the keys and certificates needed for your server and clients to access the VPN.
EasyRSA comes installed with a configuration file which you can edit to define a number of variables for your CA.
On your CA machine, navigate to the EasyRSA directory:
- cd ~/EasyRSA-3.0.4/
Inside this directory is a file named vars.example. Make a copy of this file, and name the copy vars without a file extension:
- cp vars.example vars
Open this new file using your preferred text editor:
- nano vars
Find the settings that set field defaults for new certificates. They will look something like this:
. . .
#set_var EASYRSA_REQ_COUNTRY "US"
#set_var EASYRSA_REQ_PROVINCE "California"
#set_var EASYRSA_REQ_CITY "San Francisco"
#set_var EASYRSA_REQ_ORG "Copyleft Certificate Co"
#set_var EASYRSA_REQ_EMAIL "me@example.net"
#set_var EASYRSA_REQ_OU "My Organizational Unit"
. . .
Uncomment these lines and update the highlighted values to whatever you’d prefer, but do not leave them blank:
. . .
set_var EASYRSA_REQ_COUNTRY "US"
set_var EASYRSA_REQ_PROVINCE "NewYork"
set_var EASYRSA_REQ_CITY "New York City"
set_var EASYRSA_REQ_ORG "DigitalOcean"
set_var EASYRSA_REQ_EMAIL "admin@example.com"
set_var EASYRSA_REQ_OU "Community"
. . .
When you are finished, save and close the file.
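If you prefer to script this edit rather than open an editor, sed can uncomment and set a value in one pass. This is a sketch against a two-line stand-in for vars.example, not the full file:

```shell
# Copy the example file, then uncomment and set one field default with sed.
workdir=$(mktemp -d)
printf '%s\n' '#set_var EASYRSA_REQ_COUNTRY "US"' \
              '#set_var EASYRSA_REQ_CITY "San Francisco"' > "$workdir/vars.example"
cp "$workdir/vars.example" "$workdir/vars"
sed -i 's/^#set_var EASYRSA_REQ_CITY.*/set_var EASYRSA_REQ_CITY "New York City"/' "$workdir/vars"
grep EASYRSA_REQ_CITY "$workdir/vars"
# → set_var EASYRSA_REQ_CITY "New York City"
```

The same pattern applies to each of the EASYRSA_REQ_* lines; only the ones you uncomment override EasyRSA’s built-in defaults.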
Within the EasyRSA directory is a script called easyrsa which is called to perform a variety of tasks involved with building and managing the CA. Run this script with the init-pki option to initiate the public key infrastructure on the CA server:
- ./easyrsa init-pki
Output. . .
init-pki complete; you may now create a CA or requests.
Your newly created PKI dir is: /home/sammy/EasyRSA-3.0.4/pki
After this, call the easyrsa script again, following it with the build-ca option. This will build the CA and create two important files, ca.crt and ca.key, which make up the public and private sides of an SSL certificate.
ca.crt is the CA’s public certificate file which, in the context of OpenVPN, the server and the client use to inform each other that they are part of the same web of trust and not someone performing a man-in-the-middle attack. For this reason, your server and all of your clients will need a copy of the ca.crt file.
ca.key is the private key which the CA machine uses to sign keys and certificates for servers and clients. If an attacker gains access to your CA and, in turn, your ca.key file, they will be able to sign certificate requests and gain access to your VPN, compromising its security. This is why your ca.key file should only be on your CA machine and why, ideally, your CA machine should be kept offline when not signing certificate requests, as an extra security measure.
If you don’t want to be prompted for a password every time you interact with your CA, you can run the build-ca command with the nopass option, like this:
- ./easyrsa build-ca nopass
In the output, you will be asked to confirm the common name for your CA:
Output. . .
Common Name (eg: your user, host, or server name) [Easy-RSA CA]:
The common name is the name used to refer to this machine in the context of the certificate authority. You can enter any string of characters for the CA’s common name but, to keep things simple, press ENTER to accept the default name.
With that, your CA is in place and ready to start signing certificate requests.
Now that you have a CA ready to go, you can generate a private key and certificate request from your server and then transfer the request over to your CA to be signed, creating the required certificate. You will also create some additional files used during the encryption process.
Start by navigating to the EasyRSA directory on your OpenVPN server:
- cd EasyRSA-3.0.4/
From there, run the easyrsa script with the init-pki option. Although you already ran this command on the CA machine, it is necessary to run it here as well, because your server and CA will have separate PKI directories:
- ./easyrsa init-pki
Then call the easyrsa script again, this time with the gen-req option followed by a common name for the machine. Again, this could be any name you like, but it can be helpful to make it something descriptive. Throughout this tutorial, the OpenVPN server’s common name will simply be “server”. Be sure to include the nopass option as well. Failing to do so will password-protect the request file, which could lead to permissions issues later on:
Note: If you choose a name other than “server” here, you will have to adjust some of the instructions below. For instance, when copying the generated files to the /etc/openvpn directory, you will have to substitute the correct names. You will also have to modify the /etc/openvpn/server.conf file later to point to the correct .crt and .key files.
- ./easyrsa gen-req server nopass
This will create a private key for the server and a certificate request file called server.req. Copy the server key to the /etc/openvpn/ directory:
- sudo cp ~/EasyRSA-3.0.4/pki/private/server.key /etc/openvpn/
Using a secure method (like SCP, in our example below), transfer the server.req file to your CA machine:
- scp ~/EasyRSA-3.0.4/pki/reqs/server.req sammy@your_CA_ip:/tmp
Next, on your CA machine, navigate to the EasyRSA directory:
- cd EasyRSA-3.0.4/
Using the easyrsa script again, import the server.req file, following the file path with its common name:
- ./easyrsa import-req /tmp/server.req server
Then sign the request by running the easyrsa script with the sign-req option, followed by the request type and the common name. The request type can be either client or server, so for the OpenVPN server’s certificate request, be sure to use the server request type:
- ./easyrsa sign-req server server
In the output, you will be asked to verify that the request comes from a trusted source. Type yes, then press ENTER to confirm:
You are about to sign the following certificate.
Please check over the details shown below for accuracy. Note that this request
has not been cryptographically verified. Please be sure it came from a trusted
source or that you have verified the request checksum with the sender.
Request subject, to be signed as a server certificate for 3650 days:
subject=
commonName = server
Type the word 'yes' to continue, or any other input to abort.
Confirm request details: yes
If you encrypted your CA key, you will be prompted for your password at this point.
Next, transfer the signed certificate back to your VPN server using a secure method:
- scp pki/issued/server.crt sammy@your_server_ip:/tmp
Before logging out of your CA machine, transfer the ca.crt file to your server as well:
- scp pki/ca.crt sammy@your_server_ip:/tmp
Next, log back into your OpenVPN server and copy the server.crt and ca.crt files into your /etc/openvpn/ directory:
- sudo cp /tmp/{server.crt,ca.crt} /etc/openvpn/
Then navigate to your EasyRSA directory:
- cd EasyRSA-3.0.4/
From there, create a strong Diffie-Hellman key to use during key exchange by typing:
- ./easyrsa gen-dh
This may take a few minutes to complete. Once it does, generate an HMAC signature to strengthen the server’s TLS integrity verification capabilities:
- sudo openvpn --genkey --secret ta.key
When the command finishes, copy the two new files to your /etc/openvpn/ directory:
- sudo cp ~/EasyRSA-3.0.4/ta.key /etc/openvpn/
- sudo cp ~/EasyRSA-3.0.4/pki/dh.pem /etc/openvpn/
With that, all the certificate and key files needed by your server have been generated. You are ready to create the corresponding certificates and keys that your client machine will use to access your OpenVPN server.
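Before moving on, you can optionally confirm that every file the server will need is in place. This sketch loops over a stand-in directory so it can run anywhere; on the real server you would set target_dir to /etc/openvpn (and run the check with sudo):

```shell
# Check that all five files OpenVPN needs are present in the target directory.
# Stand-in setup: create an empty copy of each expected file.
target_dir=$(mktemp -d)
touch "$target_dir/server.key" "$target_dir/server.crt" "$target_dir/ca.crt" \
      "$target_dir/ta.key" "$target_dir/dh.pem"
missing=0
for f in server.key server.crt ca.crt ta.key dh.pem; do
  [ -f "$target_dir/$f" ] || { echo "missing: $f"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all server files present"
```

A missing file here usually means one of the scp or cp steps above was skipped, and is far easier to spot now than from an OpenVPN startup error later.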
Although you could generate a private key and certificate request on your client machine and then send them to the CA to be signed, this guide outlines a process for generating the certificate request on the server. The benefit of this is that we can create a script that will automatically generate client configuration files containing all of the required keys and certificates. This lets you avoid having to transfer keys, certificates, and configuration files to clients, and streamlines the process of joining the VPN.
We will generate a single client key and certificate pair for this guide. If you have more than one client, you can repeat this process for each one. Please note, though, that you will need to pass a unique name value to the script for every client. Throughout this tutorial, the first certificate/key pair is referred to as client1.
Get started by creating a directory structure within your home directory to store the client certificate and key files:
- mkdir -p ~/client-configs/keys
Since you will store your clients’ certificate/key pairs and configuration files in this directory, you should lock down its permissions now, as a security measure:
- chmod -R 700 ~/client-configs
Next, navigate back to the EasyRSA directory and run the easyrsa script with the gen-req and nopass options, along with the common name for the client:
- cd ~/EasyRSA-3.0.4/
- ./easyrsa gen-req client1 nopass
Press ENTER to confirm the common name. Then, copy the client1.key file to the ~/client-configs/keys/ directory you created earlier:
- cp pki/private/client1.key ~/client-configs/keys/
Next, transfer the client1.req file to your CA machine using a secure method:
- scp pki/reqs/client1.req sammy@your_CA_ip:/tmp
Log in to your CA machine, navigate to the EasyRSA directory, and import the certificate request:
- ssh sammy@your_CA_IP
- cd EasyRSA-3.0.4/
- ./easyrsa import-req /tmp/client1.req client1
Then sign the request as you did for the server in the previous step. This time, though, be sure to specify the client request type:
- ./easyrsa sign-req client client1
At the prompt, enter yes to confirm that you intend to sign the certificate request and that it came from a trusted source:
OutputType the word 'yes' to continue, or any other input to abort.
Confirm request details: yes
Again, if you encrypted your CA key, you will be prompted for your password here.
This will create a client certificate file named client1.crt. Transfer this file back to the server:
- scp pki/issued/client1.crt sammy@your_server_ip:/tmp
SSH back into your OpenVPN server and copy the client certificate to the ~/client-configs/keys/ directory:
- cp /tmp/client1.crt ~/client-configs/keys/
Next, copy the ca.crt and ta.key files to the ~/client-configs/keys/ directory as well:
- sudo cp ~/EasyRSA-3.0.4/ta.key ~/client-configs/keys/
- sudo cp /etc/openvpn/ca.crt ~/client-configs/keys/
With that, your server’s and your client’s certificates and keys have all been generated and are stored in the appropriate directories on your server. There are still a few actions that need to be performed with these files, but those will come in a later step. For now, you can move on to configuring OpenVPN on your server.
Now that both your client’s and server’s certificates and keys have been generated, you can begin configuring the OpenVPN service to use these credentials.
Start by copying a sample OpenVPN configuration file into the configuration directory and then extracting it, in order to use it as a basis for your setup:
- sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
- sudo gzip -d /etc/openvpn/server.conf.gz
Open the server configuration in your preferred text editor:
- sudo nano /etc/openvpn/server.conf
Find the HMAC section by searching for the tls-auth directive. This line should already be uncommented, but if it is not, remove the “;” to uncomment it:
tls-auth ta.key 0 # This file is secret
Next, find the section on cryptographic ciphers by looking for the commented-out cipher lines. The AES-256-CBC cipher offers a good level of encryption and is well supported. Again, this line should already be uncommented, but if it is not, just remove the “;” preceding it:
cipher AES-256-CBC
Below that, add an auth directive to select the HMAC message digest algorithm. For this, SHA256 is a good choice:
auth SHA256
Next, find the line containing a dh directive, which defines the Diffie-Hellman parameters. Because of some recent changes made to EasyRSA, the filename for the Diffie-Hellman key may differ from what is listed in the example server configuration file. If necessary, change the filename listed here by removing the 2048 so it aligns with the key you generated in the previous step:
dh dh.pem
Finally, find the user and group settings and remove the “;” at the beginning of each to uncomment these lines:
user nobody
group nogroup
The changes you have made to the sample server.conf file up to this point are necessary for OpenVPN to function. The changes outlined below are optional, though they too are needed for many common use cases.
The settings above will create the VPN connection between the two machines, but will not force any connections to use the tunnel. If you wish to use the VPN to route all of your traffic, you will likely want to push the DNS settings to the client computers.
There are a few directives in the server.conf file that you must change in order to enable this functionality. Find the redirect-gateway section and remove the semicolon “;” from the beginning of the redirect-gateway line to uncomment it:
push "redirect-gateway def1 bypass-dhcp"
Just below this, find the dhcp-option section. Again, remove the “;” from in front of both lines to uncomment them:
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"
This will help clients reconfigure their DNS settings to use the VPN tunnel as the default gateway.
By default, the OpenVPN server uses port 1194 and the UDP protocol to accept client connections. If you need to use a different port because of restrictive network environments your clients might be in, you can change the port option. If you are not hosting web content on your OpenVPN server, port 443 is a popular choice, since it is usually allowed through firewall rules.
# Optional!
port 443
Oftentimes, the protocol is restricted to that port as well. If so, change proto from UDP to TCP:
# Optional!
proto tcp
If you do switch the protocol to TCP, you will need to change the explicit-exit-notify directive’s value from 1 to 0, as this directive is only used by UDP. Failing to do so while using TCP will cause errors when you start the OpenVPN service:
# Optional!
explicit-exit-notify 0
If you have no need to use a different port and protocol, it is best to leave these two settings at their defaults.
If you selected a name other than “server” during the ./easyrsa gen-req command earlier, modify the cert and key lines that you see to point to the appropriate .crt and .key files. If you used the default name, “server”, this is already set correctly:
cert server.crt
key server.key
When you are finished, save and close the file.
After going through and making whatever changes to your server’s OpenVPN configuration are required for your specific use case, you can begin making some changes to your server’s networking.
There are some aspects of the server’s networking configuration that need to be tweaked so that OpenVPN can correctly route traffic through the VPN. The first of these is IP forwarding, a method for determining where IP traffic should be routed. This is essential to the VPN functionality that your server will provide.
Adjust your server’s default IP forwarding setting by modifying the /etc/sysctl.conf file:
- sudo nano /etc/sysctl.conf
Inside, look for the commented line that sets net.ipv4.ip_forward. Remove the “#” character from the beginning of the line to uncomment this setting:
net.ipv4.ip_forward=1
Save and close the file when you are finished.
To read the file and apply the values for the current session, type:
- sudo sysctl -p
Outputnet.ipv4.ip_forward = 1
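The uncommenting step above can also be scripted. This sketch applies the same edit with sed to a one-line stand-in file; on the real server the file is /etc/sysctl.conf and the command needs sudo:

```shell
# Uncomment the ip_forward line with sed instead of an editor.
sysctl_file=$(mktemp)
echo '#net.ipv4.ip_forward=1' > "$sysctl_file"
sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' "$sysctl_file"
grep ip_forward "$sysctl_file"
# → net.ipv4.ip_forward=1
```

Remember that editing the file only changes the boot-time default; the sysctl -p call above is what applies the value to the running kernel.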
If you followed the Debian 9 initial server setup guide listed in the prerequisites, you should have a UFW firewall in place. Regardless of whether you use the firewall to block unwanted traffic (which you almost always should do), for this guide you need a firewall to manipulate some of the traffic coming into the server. Some of the firewall rules must be modified to enable masquerading, an iptables concept that provides on-the-fly, dynamic network address translation (NAT) to correctly route client connections.
Before opening the firewall configuration file to add the masquerading rules, you must first find the public network interface of your machine. To do this, type:
- ip route | grep default
Your public interface is the string found within this command’s output that follows the word “dev”. For example, this result shows the interface named eth0, which is highlighted below:
Outputdefault via 203.0.113.1 dev eth0 onlink
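If you want to capture the interface name in a variable rather than read it by eye, awk can pull out the word following “dev”. This sketch parses the sample line above; on the real server you would pipe ip route | grep default into the same awk program:

```shell
# Extract the interface name: scan the fields for "dev" and print the next one.
route_line='default via 203.0.113.1 dev eth0 onlink'
iface=$(echo "$route_line" | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i+1)}')
echo "$iface"   # → eth0
```

Scanning for the "dev" keyword is more robust than printing a fixed field number, since the position of "dev" in ip route output can shift between systems.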
When you have the interface associated with your default route, open the /etc/ufw/before.rules file to add the relevant configuration:
- sudo nano /etc/ufw/before.rules
UFW rules are typically added using the ufw command. Rules listed in the before.rules file, however, are read and put into place before the conventional UFW rules are loaded. Towards the top of the file, add the highlighted lines below. This will set the default policy for the POSTROUTING chain in the nat table and masquerade any traffic coming in from the VPN. Remember to replace eth0 in the -A POSTROUTING line below with the interface you found in the command above:
#
# rules.before
#
# Rules that should be run before the ufw command line added rules. Custom
# rules should be added to one of these chains:
# ufw-before-input
# ufw-before-output
# ufw-before-forward
#
# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to eth0 (change to the interface you discovered!)
-A POSTROUTING -s 10.8.0.0/8 -o eth0 -j MASQUERADE
COMMIT
# END OPENVPN RULES
# Don't delete these required lines, otherwise there will be errors
*filter
. . .
Save and close the file when you are finished.
Next, you need to tell UFW to allow forwarded packets by default as well. To do this, open the /etc/default/ufw file:
- sudo nano /etc/default/ufw
Inside, find the DEFAULT_FORWARD_POLICY directive and change the value from DROP to ACCEPT:
DEFAULT_FORWARD_POLICY="ACCEPT"
Save and close the file when you are finished.
Next, adjust the firewall itself to allow traffic to OpenVPN. If you did not change the port and protocol in the /etc/openvpn/server.conf file, you will need to open up UDP traffic to port 1194. If you modified the port and/or protocol, substitute the values you selected here.
In case you forgot to add the SSH port when following the prerequisite tutorial, add it here as well:
- sudo ufw allow 1194/udp
- sudo ufw allow OpenSSH
After adding those rules, disable and re-enable UFW to restart it and load the changes from all of the files you've modified:
- sudo ufw disable
- sudo ufw enable
Your server is now configured to correctly handle OpenVPN traffic.
You're finally ready to start the OpenVPN service on your server. This is done using the systemd utility systemctl.
Start the OpenVPN server by specifying your configuration file name as an instance variable after the systemd unit file name. The configuration file for your server is called /etc/openvpn/server.conf, so add @server to the end of your unit file when calling it:
- sudo systemctl start openvpn@server
Double-check that the service has started successfully by typing:
- sudo systemctl status openvpn@server
If everything went well, your output will look something like this:
Output● openvpn@server.service - OpenVPN connection to server
Loaded: loaded (/lib/systemd/system/openvpn@.service; disabled; vendor preset: enabled)
Active: active (running) since Tue 2016-05-03 15:30:05 EDT; 47s ago
Docs: man:openvpn(8)
https://community.openvpn.net/openvpn/wiki/Openvpn23ManPage
https://community.openvpn.net/openvpn/wiki/HOWTO
Process: 5852 ExecStart=/usr/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/%i.conf --writepid /run/openvpn/%i.pid (code=exited, sta
Main PID: 5856 (openvpn)
Tasks: 1 (limit: 512)
CGroup: /system.slice/system-openvpn.slice/openvpn@server.service
└─5856 /usr/sbin/openvpn --daemon ovpn-server --status /run/openvpn/server.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/server.conf --writepid /run/openvpn/server.pid
You can also check that the OpenVPN tun0 interface is available by typing:
- ip addr show tun0
This will output a configured interface:
Output4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 100
link/none
inet 10.8.0.1 peer 10.8.0.2/32 scope global tun0
valid_lft forever preferred_lft forever
After starting the service, enable it so that it starts automatically at boot:
- sudo systemctl enable openvpn@server
Your OpenVPN service is now up and running. Before you can start using it, though, you must first create a configuration file for the client machine. This tutorial has already gone over how to create certificate/key pairs for clients, and in the next step we will demonstrate how to create an infrastructure that will generate client configuration files easily.
Creating configuration files for OpenVPN clients can be somewhat involved, as every client must have its own config and each must align with the settings outlined in the server's configuration file. Rather than writing a single configuration file that can only be used on one client, this step outlines a process for building a client configuration infrastructure which you can use to generate config files on the fly. You will first create a "base" configuration file, then build a script which will allow you to generate unique client config files, certificates, and keys as needed.
Get started by creating a new directory where you will store client configuration files within the client-configs directory you created earlier:
- mkdir -p ~/client-configs/files
Next, copy an example client configuration file into the client-configs directory to use as your base configuration:
- cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client-configs/base.conf
Open this new file in your text editor:
- nano ~/client-configs/base.conf
Inside, locate the remote directive. This points the client to your OpenVPN server address, which should be your OpenVPN server's public IP address. If you decided to change the port that the OpenVPN server is listening on, you will also need to change 1194 to the port you selected:
. . .
# The hostname/IP and port of the server.
# You can have multiple remote entries
# to load balance between the servers.
remote your_server_ip 1194
. . .
Be sure that the protocol matches the value you are using in the server configuration:
proto udp
Next, uncomment the user and group directives by removing the ";" at the beginning of each line:
# Downgrade privileges after initialization (non-Windows only)
user nobody
group nogroup
Find the directives that set ca, cert, and key. Comment out these directives, since you will add the certs and keys within the file itself shortly:
# SSL/TLS parms.
# See the server config file for more
# description. It's best to use
# a separate .crt/.key file pair
# for each client. A single ca
# file can be used for all clients.
#ca ca.crt
#cert client.crt
#key client.key
Similarly, comment out the tls-auth directive, as you will add ta.key directly into the client configuration file:
# If a tls-auth key is used on the server
# then every client must also have the key.
#tls-auth ta.key 1
Mirror the cipher and auth settings that you set in the /etc/openvpn/server.conf file:
cipher AES-256-CBC
auth SHA256
Next, add the key-direction directive somewhere in the file. You must set this to "1" for the VPN to function correctly on the client machine:
key-direction 1
Finally, add a few commented-out lines. While you could include these directives in every client configuration file, you only need to enable them for Linux clients that ship with an /etc/openvpn/update-resolv-conf file. This script uses the resolvconf utility to update DNS information for Linux clients.
# script-security 2
# up /etc/openvpn/update-resolv-conf
# down /etc/openvpn/update-resolv-conf
If your client is running Linux and has an /etc/openvpn/update-resolv-conf file, uncomment these lines from the client's configuration file after it has been generated.
Save and close the file when you are finished.
Next, create a simple script that will compile your base configuration with the relevant certificate, key, and encryption files, and then place the generated configuration in the ~/client-configs/files directory. Open a new file called make_config.sh within the ~/client-configs directory:
- nano ~/client-configs/make_config.sh
Inside, add the following content, making sure to change sammy to your server's non-root user account:
#!/bin/bash
# First argument: Client identifier
KEY_DIR=/home/sammy/client-configs/keys
OUTPUT_DIR=/home/sammy/client-configs/files
BASE_CONFIG=/home/sammy/client-configs/base.conf
cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn
Save and close the file when you are finished.
Before moving on, be sure to mark this file as executable by typing:
- chmod 700 ~/client-configs/make_config.sh
This script will make a copy of the base.conf file you made, collect all the certificate and key files you've created for your client, extract their contents, append that content to the copy of the base configuration file, and export all of this into a new client configuration file. This means that, rather than having to manage the client's configuration, certificate, and key files separately, all the required information is stored in one place. The benefit of this is that if you ever need to add a client in the future, you can run this script to quickly create the config file and ensure that all the important information is stored in a single, easy-to-access location.
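To see concretely what that concatenation produces, you can reproduce the inlining step with dummy stand-in files. Everything below is placeholder data under an illustrative /tmp path, not real key material, and only the <ca> block is shown; the script does the same for <cert>, <key>, and <tls-auth>.

```shell
# Build throwaway stand-ins for the base config and the CA certificate
mkdir -p /tmp/democfg
printf 'remote your_server_ip 1194\n' > /tmp/democfg/base.conf
printf 'FAKE-CA-CERT\n' > /tmp/democfg/ca.crt

# Same concatenation the script performs, shown for the <ca> block only
{
  cat /tmp/democfg/base.conf
  echo '<ca>'
  cat /tmp/democfg/ca.crt
  echo '</ca>'
} > /tmp/democfg/client-demo.ovpn

cat /tmp/democfg/client-demo.ovpn
```

The result is a single .ovpn file whose first lines are the base directives, followed by the certificate material wrapped in XML-style tags, which is exactly the layout OpenVPN expects for inline credentials.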
Note that any time you add a new client, you will need to generate new keys and certificates for it before you can run this script and generate its configuration file. You will get some practice using this script in the next step.
If you followed along with the guide, you created a client certificate and key named client1.crt and client1.key, respectively, in Step 4. You can generate a config file for these credentials by moving into your ~/client-configs directory and running the script you made at the end of the previous step:
- cd ~/client-configs
- sudo ./make_config.sh client1
This will create a file named client1.ovpn in your ~/client-configs/files directory:
- ls ~/client-configs/files
Outputclient1.ovpn
You need to transfer this file to the device you plan to use as the client. For instance, this could be your local computer or a mobile device.
While the exact applications used to accomplish this transfer will depend on your device's operating system and your personal preferences, a dependable and secure method is to use SFTP (SSH file transfer protocol) or SCP (Secure Copy) on the backend. This will transport your client's VPN authentication files over an encrypted connection.
Here is an example SFTP command using the client1.ovpn example which you can run from your local computer (macOS or Linux). It places the .ovpn file in your home directory:
- sftp sammy@your_server_ip:client-configs/files/client1.ovpn ~/
There are a number of tools and tutorials available for securely transferring files from the server to a local computer.
This section covers how to install a client VPN profile on Windows, macOS, Linux, iOS, and Android. None of these client instructions depend on one another, so feel free to skip to whichever is applicable to your device.
The OpenVPN connection will have the same name as whatever you called the .ovpn file. In regards to this tutorial, this means that the connection is named client1.ovpn, in line with the first client file you generated.
Installing
Download the OpenVPN client application for Windows from OpenVPN's Downloads page. Choose the appropriate installer version for your version of Windows.
Note: OpenVPN needs administrative privileges to install.
After installing OpenVPN, copy the .ovpn file to:
C:\Program Files\OpenVPN\config
When you launch OpenVPN, it will automatically see the profile and make it available.
You must run OpenVPN as an administrator each time it's used, even by administrative accounts. To do this without having to right-click and select Run as administrator every time you use the VPN, you must preset this from an administrative account. This also means that standard users will need to enter the administrator's password to use OpenVPN. On the other hand, standard users can't properly connect to the server unless the OpenVPN application on the client has admin rights, so elevated privileges are necessary.
To set the OpenVPN application to always run as an administrator, right-click on its shortcut icon and go to Properties. At the bottom of the Compatibility tab, click the button to Change settings for all users. In the new window, check Run this program as an administrator.
Connecting
Each time you launch the OpenVPN GUI, Windows will ask if you want to allow the program to make changes to your computer. Click Yes. Launching the OpenVPN client application only puts the applet in the system tray so that you can connect and disconnect the VPN as needed; it does not actually make the VPN connection.
Once OpenVPN is started, initiate a connection by going into the system tray applet and right-clicking on the OpenVPN applet icon. This opens the context menu. Select client1 at the top of the menu (that's your client1.ovpn profile) and choose Connect.
A status window will open showing the log output while the connection is established, and a message will show once the client is connected.
Disconnect from the VPN the same way: go into the system tray applet, right-click the OpenVPN applet icon, select the client profile, and click Disconnect.
Installing
Tunnelblick is a free, open source OpenVPN client for macOS. You can download the latest disk image from the Tunnelblick Downloads page. Double-click the downloaded .dmg file and follow the prompts to install.
Towards the end of the installation process, Tunnelblick will ask if you have any configuration files. For simplicity, answer No and let Tunnelblick finish. Open a Finder window and double-click client1.ovpn. Tunnelblick will install the client profile. Administrative privileges are required.
Connecting
Launch Tunnelblick by double-clicking the Tunnelblick icon in the Applications folder. Once Tunnelblick has launched, there will be a Tunnelblick icon in the menu bar at the top right of the screen for controlling connections. Click on the icon, and then the Connect menu item to initiate the VPN connection. Select the client1 connection.
If you are using Linux, there is a variety of tools you can use depending on your distribution. Your desktop environment or window manager might also include connection utilities.
The most universal way of connecting, however, is to just use the OpenVPN software.
On Ubuntu or Debian, you can install it just as you did on the server by typing:
- sudo apt update
- sudo apt install openvpn
On CentOS, you can enable the EPEL repositories and then install it by typing:
- sudo yum install epel-release
- sudo yum install openvpn
Check to see if your distribution includes an /etc/openvpn/update-resolv-conf script:
- ls /etc/openvpn
Outputupdate-resolv-conf
Next, edit the OpenVPN client configuration file you transferred:
- nano client1.ovpn
If you were able to find an update-resolv-conf file, uncomment the three lines you added to adjust the DNS settings:
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
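If you prefer to make that edit non-interactively, the three lines can be uncommented with sed. The sketch below runs against a throwaway demo file under /tmp (an illustrative path; on a real client you would target your transferred .ovpn file instead):

```shell
# Create a demo copy containing the three commented directives
printf '%s\n' \
  '# script-security 2' \
  '# up /etc/openvpn/update-resolv-conf' \
  '# down /etc/openvpn/update-resolv-conf' > /tmp/client-demo.conf

# Drop the leading "# " from each of the three DNS-related directives
sed -i \
  -e 's/^# script-security/script-security/' \
  -e 's|^# up /etc/openvpn|up /etc/openvpn|' \
  -e 's|^# down /etc/openvpn|down /etc/openvpn|' \
  /tmp/client-demo.conf

head -n 1 /tmp/client-demo.conf   # prints: script-security 2
```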
If you are using CentOS, change the group directive from nogroup to nobody to match the distribution's available groups:
group nobody
Save and close the file.
Now, you can connect to the VPN by just pointing the openvpn command to the client configuration file:
- sudo openvpn --config client1.ovpn
This should connect you to your VPN.
Installing
From the iTunes App Store, search for and install OpenVPN Connect, the official iOS OpenVPN client application. To transfer your iOS client configuration onto the device, connect it directly to a computer.
The process of completing the transfer with iTunes is outlined here. Open iTunes on the computer and click on iPhone > apps. Scroll down to the bottom of the File Sharing section and click the OpenVPN app. The blank window to the right, OpenVPN Documents, is for sharing files. Drag the .ovpn file to the OpenVPN Documents window.
Now launch the OpenVPN app on the iPhone. You will receive a notification that a new profile is ready to import. Tap the green plus sign to import it.
Connecting
OpenVPN is now ready to use with the new profile. Start the connection by sliding the Connect button to the On position. Disconnect by sliding the same button to Off.
Note: The VPN switch under Settings cannot be used to connect to the VPN. If you try, you will receive a notice to only connect using the OpenVPN app.
Installing
Open the Google Play Store. Search for and install Android OpenVPN Connect, the official Android OpenVPN client application.
You can transfer the .ovpn profile by connecting the Android device to your computer by USB and copying the file over. Alternatively, if you have an SD card reader, you can remove the device's SD card, copy the profile onto it, and then insert the card back into the Android device.
Start the OpenVPN app and tap the menu to import the profile.
Then navigate to the location of the saved profile (the screenshot uses /sdcard/Download/) and select the file. The app will make a note that the profile was imported.
Connecting
To connect, simply tap the Connect button. You'll be asked if you trust the OpenVPN application. Choose OK to initiate the connection. To disconnect from the VPN, go back to the OpenVPN app and choose Disconnect.
Note: This method for testing your VPN connection will only work if you opted to route all your traffic through the VPN in Step 5.
Once everything is installed, a simple check confirms that everything is working properly. Without having a VPN connection enabled, open a browser and go to DNSLeakTest.
The site will return the IP address assigned by your internet service provider, which is how you appear to the rest of the world. To check your DNS settings through the same website, click on Extended Test and it will tell you which DNS servers you are using.
Now connect the OpenVPN client to your server's VPN and refresh the browser. A completely different IP address (that of your VPN server) should now appear, and this is how you appear to the world. Again, DNSLeakTest's Extended Test will check your DNS settings and confirm that you are now using the DNS resolvers pushed by your VPN.
Occasionally, you may need to revoke a client certificate to prevent further access to the OpenVPN server.
To do so, navigate to the EasyRSA directory on your CA machine:
- cd EasyRSA-3.0.4/
Next, run the easyrsa script with the revoke option, followed by the name of the client you wish to revoke:
- ./easyrsa revoke client2
This will ask you to confirm the revocation by entering yes:
OutputPlease confirm you wish to revoke the certificate with the following subject:
subject=
commonName = client2
Type the word 'yes' to continue, or any other input to abort.
Continue with revocation: yes
After confirming the action, the CA will fully revoke the client's certificate. However, your OpenVPN server currently has no way to check whether any clients' certificates have been revoked, and the client would still have access to the VPN. To correct this, create a certificate revocation list (CRL) on your CA machine:
- ./easyrsa gen-crl
This will generate a file called crl.pem. Securely transfer this file to your OpenVPN server:
- scp ~/EasyRSA-3.0.4/pki/crl.pem sammy@your_server_ip:/tmp
On your OpenVPN server, copy this file into your /etc/openvpn/ directory:
- sudo cp /tmp/crl.pem /etc/openvpn
Next, open the OpenVPN server configuration file:
- sudo nano /etc/openvpn/server.conf
At the bottom of the file, add the crl-verify option, which will instruct the OpenVPN server to check the certificate revocation list we created each time a connection attempt is made:
crl-verify crl.pem
Save and close the file.
Finally, restart OpenVPN to implement the certificate revocation:
- sudo systemctl restart openvpn@server
The client should no longer be able to successfully connect to the server using the old credential.
To revoke additional clients, follow this process:
1. Revoke the certificate with the ./easyrsa revoke client_name command.
2. Generate a new CRL with ./easyrsa gen-crl.
3. Transfer the new crl.pem file to your OpenVPN server and copy it to the /etc/openvpn directory to overwrite the old list.
4. Restart the OpenVPN service with sudo systemctl restart openvpn@server.
You can use this process to revoke any certificates that you've previously issued for your server.
You are now securely traversing the internet, protecting your identity, location, and traffic from snoopers and censors. If you no longer need to issue certificates, it's recommended that you power off your CA machine or disconnect it from the internet until you need to add or revoke certificates. This will help prevent attackers from gaining access to your VPN.
To configure more clients, you only need to follow Steps 4 and 9-11 for each additional device. To revoke access for a client, follow Step 12.
UFW, or Uncomplicated Firewall, is an interface to iptables that is geared towards simplifying the process of configuring a firewall. While iptables is a solid and flexible tool, it can be difficult for beginners to learn how to use it to properly configure a firewall. If you're looking to get started securing your network, and you're not sure which tool to use, UFW may be the right choice for you.
This tutorial will show you how to set up a firewall with UFW on Debian 9.
To follow this tutorial, you will need a Debian 9 server with a non-root sudo user, which you can set up by following Steps 1-3 in the Initial Server Setup with Debian 9 tutorial.
Debian does not install UFW by default. If you followed the entire Initial Server Setup tutorial, you will have already installed and enabled UFW. If not, install it now using apt:
- sudo apt install ufw
We will configure UFW and enable it in the following steps.
This tutorial is written with IPv4 in mind, but will work for IPv6 as well, as long as you enable it. If your Debian server has IPv6 enabled, ensure that UFW is configured to support IPv6 so that it will manage firewall rules for IPv6 in addition to IPv4. To do this, open the UFW configuration with nano or your favorite editor.
- sudo nano /etc/default/ufw
Then make sure the value of IPV6 is yes. It should look like this:
IPV6=yes
Save and close the file. Now, when UFW is enabled, it will be configured to write both IPv4 and IPv6 firewall rules. However, before enabling UFW, we will want to ensure that your firewall is configured to allow you to connect via SSH. Let's start with setting the default policies.
If you're just getting started with your firewall, the first rules to define are your default policies. These rules control how to handle traffic that does not explicitly match any other rules. By default, UFW is set to deny all incoming connections and allow all outgoing connections. This means anyone trying to reach your server would not be able to connect, while any application within the server would be able to reach the outside world.
Let's set your UFW rules back to the defaults so we can be sure that you'll be able to follow along with this tutorial. To set the defaults used by UFW, use these commands:
- sudo ufw default deny incoming
- sudo ufw default allow outgoing
These commands set the defaults to deny incoming and allow outgoing connections. These firewall defaults alone might suffice for a personal computer, but servers typically need to respond to incoming requests from outside users. We'll look into that next.
If we enabled our UFW firewall now, it would deny all incoming connections. This means that we will need to create rules that explicitly allow legitimate incoming connections, SSH or HTTP connections, for example, if we want our server to respond to those types of requests. If you're using a cloud server, you will probably want to allow incoming SSH connections so you can connect to and manage your server.
To configure your server to allow incoming SSH connections, you can use this command:
- sudo ufw allow ssh
This will create firewall rules that allow all connections on port 22, which is the port that the SSH daemon listens on by default. UFW knows what allow ssh means because it's listed as a service in the /etc/services file.
However, we can actually write the equivalent rule by specifying the port instead of the service name. For example, this command works the same as the one above:
- sudo ufw allow 22
If you configured your SSH daemon to use a different port, you will have to specify the appropriate port. For example, if your SSH server is listening on port 2222, you can use this command to allow connections on that port:
- sudo ufw allow 2222
Now that your firewall is configured to allow incoming SSH connections, we can enable it.
To enable UFW, use this command:
- sudo ufw enable
You will receive a warning that says the command may disrupt existing SSH connections. We already set up a firewall rule that allows SSH connections, so it should be fine to continue. Respond to the prompt with y and hit ENTER.
The firewall is now active. Run the sudo ufw status verbose command to see the rules that are set. The rest of this tutorial covers how to use UFW in more detail, such as allowing or denying different kinds of connections.
At this point, you should allow all of the other connections that your server needs to respond to. The connections you should allow depend on your specific needs. Fortunately, you already know how to write rules that allow connections based on a service name or port; we already did this for SSH on port 22. You can also do this for:
HTTP on port 80: sudo ufw allow http or sudo ufw allow 80
HTTPS on port 443: sudo ufw allow https or sudo ufw allow 443
There are several other ways to allow other connections, aside from specifying a port or known service.
You can specify port ranges with UFW. Some applications use multiple ports, instead of a single port.
For example, to allow X11 connections, which use ports 6000-6007, use these commands:
- sudo ufw allow 6000:6007/tcp
- sudo ufw allow 6000:6007/udp
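To make the range syntax concrete, the single 6000:6007/tcp rule stands in for eight individual per-port rules. The loop below only echoes the equivalent commands rather than executing them, so it is safe to run anywhere:

```shell
# Echo (not execute) the eight per-port rules that the single range rule replaces
for port in $(seq 6000 6007); do
  echo "sudo ufw allow ${port}/tcp"
done
```

This prints eight `sudo ufw allow .../tcp` lines, one for each port from 6000 through 6007.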
When specifying port ranges with UFW, you must specify the protocol (tcp or udp) that the rules should apply to. We haven't mentioned this before because not specifying the protocol automatically allows both protocols, which is OK in most cases.
When working with UFW, you can also specify IP addresses. For example, if you want to allow connections from a specific IP address, such as a work or home IP address of 203.0.113.4, you need to specify from, then the IP address:
- sudo ufw allow from 203.0.113.4
You can also specify a specific port that the IP address is allowed to connect to by adding to any port followed by the port number. For example, if you want to allow 203.0.113.4 to connect to port 22 (SSH), use this command:
- sudo ufw allow from 203.0.113.4 to any port 22
If you want to allow a subnet of IP addresses, you can do so using CIDR notation to specify a netmask. For example, if you want to allow all of the IP addresses ranging from 203.0.113.1 to 203.0.113.254, you could use this command:
- sudo ufw allow from 203.0.113.0/24
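The /24 suffix is just counting bits: it fixes the first 24 bits of the address as the network part and leaves the remaining bits for hosts. A quick bit of shell arithmetic shows why that covers the whole 203.0.113.x block:

```shell
# A /24 prefix leaves 32 - 24 = 8 host bits, so the subnet spans 2^8 addresses
prefix=24
host_bits=$((32 - prefix))
echo $((1 << host_bits))   # prints: 256
```

Those 256 addresses run from 203.0.113.0 (the network address) through 203.0.113.255 (broadcast), which is why the rule covers hosts .1 through .254.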
Likewise, you may also specify the destination port that the subnet 203.0.113.0/24 is allowed to connect to. Again, we'll use port 22 (SSH) as an example:
- sudo ufw allow from 203.0.113.0/24 to any port 22
If you want to create a firewall rule that only applies to a specific network interface, you can do so by specifying "allow in on" followed by the name of the network interface.
You may want to look up your network interfaces before continuing. To do so, use this command:
- ip addr
Output Excerpt2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
. . .
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
. . .
The highlighted output indicates the network interface names. They are typically named something like eth0 or enp3s2.
So, if your server has a public network interface called eth0, you could allow HTTP traffic (port 80) to it with this command:
- sudo ufw allow in on eth0 to any port 80
Doing so would allow your server to receive HTTP requests from the public internet.
Or, if you want your MySQL database server (port 3306) to listen for connections on the private network interface eth1, for example, you could use this command:
- sudo ufw allow in on eth1 to any port 3306
This would allow other servers on your private network to connect to your MySQL database.
If you haven't changed the default policy for incoming connections, UFW is configured to deny all incoming connections. Generally, this simplifies the process of creating a secure firewall policy by requiring you to create rules that explicitly allow specific ports and IP addresses through.
However, sometimes you will want to deny specific connections based on the source IP address or subnet, perhaps because you know that your server is being attacked from there. Also, if you wanted to change your default incoming policy to allow (which is not recommended), you would need to create deny rules for any services or IP addresses that you don't want to allow connections for.
To write deny rules, you can use the commands described above, replacing allow with deny.
For example, to deny HTTP connections, you could use this command:
- sudo ufw deny http
Or if you want to deny all connections from 203.0.113.4, you could use this command:
- sudo ufw deny from 203.0.113.4
Now let's take a look at how to delete rules.
Knowing how to delete firewall rules is just as important as knowing how to create them. There are two different ways to specify which rules to delete: by rule number or by the actual rule (similar to how the rules were specified when they were created). We'll start with the delete by rule number method because it is easier.
If you're using the rule number to delete firewall rules, the first thing you'll want to do is get a list of your firewall rules. The UFW status command has an option to display numbers next to each rule, as demonstrated here:
- sudo ufw status numbered
Numbered Output:Status: active
To Action From
-- ------ ----
[ 1] 22 ALLOW IN 15.15.15.0/24
[ 2] 80 ALLOW IN Anywhere
If we decide that we want to delete rule 2, the one that allows port 80 (HTTP) connections, we can specify it in a UFW delete command like this:
- sudo ufw delete 2
This would show a confirmation prompt and then delete rule 2, which allows HTTP connections. Note that if you have IPv6 enabled, you would want to delete the corresponding IPv6 rule as well.
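If you ever script this kind of cleanup, the rule number can be extracted from the numbered listing instead of being read by eye. A minimal sketch, where the hard-coded sample stands in for the real output of sudo ufw status numbered:

```shell
# Find the rule number that matches a given port in "ufw status numbered"
# output, so it can be passed to "sudo ufw delete". The sample string
# below stands in for the real command's output.
status_output='[ 1] 22                         ALLOW IN    15.15.15.0/24
[ 2] 80                         ALLOW IN    Anywhere'

port=80
rule_number=$(printf '%s\n' "$status_output" \
  | grep -E "^\[ *[0-9]+\] +$port " \
  | sed -E 's/^\[ *([0-9]+)\].*/\1/')

echo "$rule_number"
```

Keep in mind that rule numbers shift after every deletion, so re-list the rules before deleting another one.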
The alternative to rule numbers is to specify the actual rule to delete. For example, if you want to remove the allow http rule, you could write it like this:
- sudo ufw delete allow http
You could also specify the rule by allow 80, instead of by the service name:
- sudo ufw delete allow 80
This method will delete both IPv4 and IPv6 rules, if they exist.
At any time, you can check the status of UFW with this command:
- sudo ufw status verbose
If UFW is disabled, which it is by default, you'll see something like this:
OutputStatus: inactive
If UFW is active, which it should be if you followed Step 3, the output will say that it's active and it will list any rules that are set. For example, if the firewall is set to allow SSH (port 22) connections from anywhere, the output might look something like this:
OutputStatus: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp ALLOW IN Anywhere
Use the status command if you need to check how UFW has configured the firewall.
If you decide you don't want to use UFW, you can disable it with this command:
- sudo ufw disable
Any rules that you created with UFW will no longer be active. You can always run sudo ufw enable if you need to activate it later.
If you already have UFW rules configured but you decide that you want to start over, you can use the reset command:
- sudo ufw reset
This will disable UFW and delete any rules that were previously defined. Keep in mind that the default policies won't change to their original settings if you modified them at any point. This should give you a fresh start with UFW.
Your firewall is now configured to allow (at least) SSH connections. Be sure to allow any other incoming connections that your server needs, while limiting any unnecessary connections, so your server will be functional and secure.
To learn about more common UFW configurations, check out the UFW Essentials: Common Firewall Rules and Commands tutorial.
The Apache HTTP server is the most widely-used web server in the world. It provides many powerful features including dynamically loadable modules, robust media support, and extensive integration with other popular software.
In this guide, we'll explain how to install an Apache web server on your Debian 9 server.
Before you begin this guide, you should have a regular, non-root user with sudo privileges configured on your server. Additionally, you will need to enable a basic firewall to block non-essential ports. You can learn how to configure a regular user account and set up a firewall for your server by following our initial server setup guide for Debian 9.
When you have an account available, log in as your non-root user to begin.
Apache is available within Debian's default software repositories, making it possible to install it using conventional package management tools.
Let's begin by updating the local package index to reflect the latest upstream changes:
- sudo apt update
Then, install the apache2 package:
- sudo apt install apache2
After confirming the installation, apt will install Apache and all required dependencies.
Before testing Apache, it's necessary to modify the firewall settings to allow outside access to the default web ports. Assuming that you followed the instructions in the prerequisites, you should have a UFW firewall configured to restrict access to your server.
During installation, Apache registers itself with UFW to provide a few application profiles that can be used to enable or disable access to Apache through the firewall.
List the ufw application profiles by typing:
- sudo ufw app list
You will see a list of the application profiles:
OutputAvailable applications:
AIM
Bonjour
CIFS
. . .
WWW
WWW Cache
WWW Full
WWW Secure
. . .
The Apache profiles begin with WWW.
It is recommended that you enable the most restrictive profile that will still allow the traffic you've configured. Since you haven't configured SSL for your server yet in this guide, you only need to allow traffic on port 80:
- sudo ufw allow 'WWW'
You can verify the change by typing:
- sudo ufw status
You should see HTTP traffic allowed in the displayed output:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
WWW ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
WWW (v6) ALLOW Anywhere (v6)
As you can see, the profile has been activated to allow access to the web server.
At the end of the installation process, Debian 9 starts Apache. The web server should already be up and running.
Check with the systemd init system to make sure the service is running by typing:
- sudo systemctl status apache2
Output● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 19:21:48 UTC; 13min ago
Main PID: 12849 (apache2)
CGroup: /system.slice/apache2.service
├─12849 /usr/sbin/apache2 -k start
├─12850 /usr/sbin/apache2 -k start
└─12852 /usr/sbin/apache2 -k start
Sep 05 19:21:48 apache systemd[1]: Starting The Apache HTTP Server...
Sep 05 19:21:48 apache systemd[1]: Started The Apache HTTP Server.
As you can see from this output, the service appears to have started successfully. However, the best way to test this is to request a page from Apache.
You can access the default Apache landing page to confirm that the software is running properly through your IP address. If you do not know your server's IP address, you can get it a few different ways from the command line.
Try typing this at your server's command prompt:
- hostname -I
You will get back a few addresses separated by spaces. You can try each in your web browser to see if they work.
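If you want to grab one address automatically instead of trying each by hand, you can filter the list for the first IPv4 entry. A minimal sketch, with a sample string standing in for real hostname -I output:

```shell
# Pick the first IPv4 address from the space-separated list that
# "hostname -I" prints. The sample stands in for the real output.
addresses='203.0.113.10 10.0.0.5 fe80::1'

first_ipv4=$(printf '%s\n' $addresses \
  | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' \
  | head -n 1)

echo "http://$first_ipv4"
```

Note that this simply picks the first IPv4-shaped entry; on a server with several interfaces you may still prefer a different address from the list.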
An alternative is to use the curl tool, which should give you your public IP address as seen from another location on the internet.
First, install curl using apt:
- sudo apt install curl
Then, use curl to retrieve icanhazip.com using IPv4:
- curl -4 icanhazip.com
When you have your server's IP address, enter it into your browser's address bar:
http://your_server_ip
You should see the default Debian 9 Apache web page:
This page indicates that Apache is working correctly. It also includes some basic information about important Apache files and directory locations.
Now that you have your web server up and running, let's go over some basic management commands.
To stop your web server, type:
- sudo systemctl stop apache2
To start the web server when it is stopped, type:
- sudo systemctl start apache2
To stop and then start the service again, type:
- sudo systemctl restart apache2
If you are simply making configuration changes, Apache can often reload without dropping connections. To do this, use this command:
- sudo systemctl reload apache2
By default, Apache is configured to start automatically when the server boots. If this is not what you want, disable this behavior by typing:
- sudo systemctl disable apache2
To re-enable the service to start up at boot, type:
- sudo systemctl enable apache2
Apache should start automatically when the server boots again.
When using the Apache web server, you can use _virtual hosts_ (similar to server blocks in Nginx) to encapsulate configuration details and host more than one domain from a single server. We will set up a domain called example.com, but you should replace this with your own domain name. To learn more about setting up a domain name with DigitalOcean, see our Introduction to DigitalOcean DNS.
Apache on Debian 9 has one server block enabled by default that is configured to serve documents from the /var/www/html directory. While this works well for a single site, it can become unwieldy if you are hosting multiple sites. Instead of modifying /var/www/html, let's create a directory structure within /var/www for our example.com site, leaving /var/www/html in place as the default directory to be served if a client request doesn't match any other sites.
Create the directory for example.com as follows, using the -p flag to create any necessary parent directories:
- sudo mkdir -p /var/www/example.com/html
Next, assign ownership of the directory with the $USER environment variable:
- sudo chown -R $USER:$USER /var/www/example.com/html
The permissions of your web roots should be correct if you haven't modified your umask value, but you can make sure by typing:
- sudo chmod -R 755 /var/www/example.com
Next, create a sample index.html page using nano or your favorite editor:
- nano /var/www/example.com/html/index.html
Inside, add the following sample HTML:
<html>
<head>
<title>Welcome to Example.com!</title>
</head>
<body>
<h1>Success! The example.com virtual host is working!</h1>
</body>
</html>
Save and close the file when you are finished.
In order for Apache to serve this content, it's necessary to create a virtual host file with the correct directives. Instead of modifying the default configuration file located at /etc/apache2/sites-available/000-default.conf directly, let's make a new one at /etc/apache2/sites-available/example.com.conf:
- sudo nano /etc/apache2/sites-available/example.com.conf
Paste in the following configuration block, which is similar to the default, but updated for our new directory and domain name:
<VirtualHost *:80>
ServerAdmin admin@example.com
ServerName example.com
ServerAlias www.example.com
DocumentRoot /var/www/example.com/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Notice that we've updated the DocumentRoot to our new directory and the ServerAdmin to an email that the example.com site administrator can access. We've also added two directives: ServerName, which establishes the base domain that should match for this virtual host definition, and ServerAlias, which defines further names that should match as if they were the base name.
Save and close the file when you are finished.
Let's enable the file with the a2ensite tool:
- sudo a2ensite example.com.conf
Disable the default site defined in 000-default.conf:
- sudo a2dissite 000-default.conf
Next, let's test for configuration errors:
- sudo apache2ctl configtest
You should see the following output:
OutputSyntax OK
Restart Apache to implement your changes:
- sudo systemctl restart apache2
Apache should now be serving your domain name. You can test this by navigating to http://example.com, where you should see something like this:
Now that you know how to manage the Apache service, you should take a few minutes to familiarize yourself with a few important directories and files.
- /var/www/html: The actual web content, which by default only consists of the default Apache page you saw earlier, is served out of the /var/www/html directory. This can be changed by altering Apache configuration files.
- /etc/apache2: The Apache configuration directory. All of the Apache configuration files reside here.
- /etc/apache2/apache2.conf: The main Apache configuration file. This can be modified to make changes to the Apache global configuration. This file is responsible for loading many of the other files in the configuration directory.
- /etc/apache2/ports.conf: This file specifies the ports that Apache will listen on. By default, Apache listens on port 80 and additionally listens on port 443 when a module providing SSL capabilities is enabled.
- /etc/apache2/sites-available/: The directory where per-site virtual hosts can be stored. Apache will not use the configuration files found in this directory unless they are linked to the sites-enabled directory. Typically, all server block configuration is done in this directory and then enabled by linking to the other directory with the a2ensite command.
- /etc/apache2/sites-enabled/: The directory where enabled per-site virtual hosts are stored. Typically, these are created by linking to configuration files found in the sites-available directory with a2ensite. Apache reads the configuration files and links found in this directory when it starts or reloads to compile a complete configuration.
- /etc/apache2/conf-available/, /etc/apache2/conf-enabled/: These directories have the same relationship as the sites-available and sites-enabled directories, but are used to store configuration fragments that do not belong in a virtual host. Files in the conf-available directory can be enabled with the a2enconf command and disabled with the a2disconf command.
- /etc/apache2/mods-available/, /etc/apache2/mods-enabled/: These directories contain the available and enabled modules, respectively. Files ending in .load contain fragments to load specific modules, while files ending in .conf contain the configuration for those modules. Modules can be enabled and disabled using the a2enmod and a2dismod commands.
- /var/log/apache2/access.log: By default, every request to your web server is recorded in this log file unless Apache is configured to do otherwise.
- /var/log/apache2/error.log: By default, all errors are recorded in this file. The LogLevel directive in the Apache configuration specifies how much detail the error logs will contain.

Now that you have your web server installed, you have many options for the type of content you can serve and the technologies you can use to create a richer experience.
If you'd like to build out a more complete application stack, check out this article on how to configure a LAMP stack on Debian 9.
Node.js is a JavaScript platform for general-purpose programming that allows users to build network applications quickly. By leveraging JavaScript on both the front end and the back end, Node.js makes development more consistent and integrated.
In this guide, we'll show you how to get started with Node.js on a Debian 9 server.
This guide assumes that you are using Debian 9. Before you begin, you should have a non-root user account with sudo privileges set up on your system. You can learn how to do this by following the initial server setup for Debian 9 tutorial.
Debian contains a version of Node.js in its default repositories. At the time of writing, this version is 4.8.2, which will reach end-of-life at the end of April 2018. If you just want to experiment with the language using a stable and capable option, installing from the repositories may make sense. However, for development and production use cases, it is recommended that you install a more recent version from a PPA. We'll discuss how to install from a PPA in the next step.
To get the distro-stable version of Node.js, use the apt package manager. First, refresh your local package index:
- sudo apt update
Then, install the Node.js package from the repositories:
- sudo apt install nodejs
If the package in the repositories suits your needs, this is all you need to do to get set up with Node.js.
To check which version of Node.js you have installed after these initial steps, type:
- nodejs -v
Because of a conflict with another package, the executable from the Debian repositories is called nodejs instead of node. Keep this in mind as you are running software.
Once you have established which version of Node.js you have installed from the Debian repositories, you can decide whether or not you would like to work with different versions, package archives, or version managers. Next, we'll discuss these elements, along with more flexible and robust methods of installation.
To work with a more recent version of Node.js, add the PPA (personal package archive) maintained by NodeSource. This archive will have more up-to-date versions of Node.js than the official Debian repositories, and will let you choose between Node.js v4.x (the older long-term support version, supported until the end of April 2018), Node.js v6.x (supported until April 2019), Node.js v8.x (the current LTS version, supported until December 2019), and Node.js v10.x (the most recent version, supported until April 2021).
Let's refresh the local package index and install curl, which you will use to access the PPA:
- sudo apt update
- sudo apt install curl
Next, install the PPA in order to get access to its contents. From your home directory, use curl to retrieve the installation script for your preferred version, making sure to replace 10.x with your preferred version string (if different):
- cd ~
- curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
You can inspect the contents of this script with nano or your preferred text editor:
- nano nodesource_setup.sh
Run the script under sudo:
- sudo bash nodesource_setup.sh
The PPA will be added to your configuration and your local package cache will be updated automatically. After running the setup script, install the Node.js package in the same way you did above:
- sudo apt install nodejs
To check which version of Node.js you have installed after these initial steps, type:
- nodejs -v
Outputv10.9.0
The nodejs package contains the nodejs binary as well as npm, so you don't need to install npm separately.
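If a script needs to confirm that the installed version is recent enough, the major version can be peeled out of the v10.9.0-style string that nodejs -v prints. A minimal sketch with the version hard-coded in place of the real command's output:

```shell
# Extract the major version from a "v10.9.0"-style string, as printed
# by "nodejs -v", and compare it against a minimum requirement.
version='v10.9.0'
major=${version#v}    # strip the leading "v"
major=${major%%.*}    # keep only the part before the first dot

if [ "$major" -ge 8 ]; then
  echo "Node.js major version $major is recent enough"
fi
```

The parameter expansions used here are plain POSIX shell, so the check works the same under bash or dash.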
npm uses a configuration file in your home directory to keep track of updates. It will be created the first time you run npm. Run this command to verify that npm is installed and to create the configuration file:
- npm -v
Output6.2.0
In order for some npm packages to work (those that require compiling code from source, for example), you will need to install the build-essential package:
- sudo apt install build-essential
You now have the necessary tools to work with npm packages that require compiling code from source.
An alternative to installing Node.js through apt is to use a tool called nvm, which stands for "Node.js Version Manager". Rather than working at the operating system level, nvm works at the level of an independent directory within your home directory. This means that you can install multiple self-contained versions of Node.js without affecting the entire system.
Controlling your environment with nvm allows you to access the newest versions of Node.js and retain and manage previous releases. It is a different utility from apt, however, and the versions of Node.js that you manage with it are distinct from those you manage with apt.
To download the nvm installation script from the project's GitHub page, use curl. Note that the version number may differ from what is highlighted here:
- curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh -o install_nvm.sh
Inspect the installation script with nano:
- nano install_nvm.sh
Run the script with bash:
- bash install_nvm.sh
It will install the software into a subdirectory of your home directory at ~/.nvm. It will also add the necessary lines to your ~/.profile file to make nvm available.
To gain access to the nvm functionality, you'll need to either log out and log back in again, or source the ~/.profile file so that your current session knows about the changes:
- source ~/.profile
With nvm installed, you can install isolated Node.js versions. For information about the versions of Node.js that are available, type:
- nvm ls-remote
Output...
v8.11.1 (Latest LTS: Carbon)
v9.0.0
v9.1.0
v9.2.0
v9.2.1
v9.3.0
v9.4.0
v9.5.0
v9.6.0
v9.6.1
v9.7.0
v9.7.1
v9.8.0
v9.9.0
v9.10.0
v9.10.1
v9.11.0
v9.11.1
v10.0.0
v10.1.0
v10.2.0
v10.2.1
v10.3.0
v10.4.0
v10.4.1
v10.5.0
v10.6.0
v10.7.0
v10.8.0
v10.9.0
As you can see, the current LTS version at the time of this writing is v8.11.1. You can install that by typing:
- nvm install 8.11.1
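Rather than reading the LTS version out of the listing by eye, it can also be extracted from the line that nvm marks as Latest LTS. A minimal sketch, with a shortened sample standing in for real nvm ls-remote output:

```shell
# Pull the version marked "Latest LTS" out of sample "nvm ls-remote"
# output; the result is what you would pass to "nvm install".
ls_remote='        v9.11.2
->      v8.11.1   (Latest LTS: Carbon)
        v10.9.0'

lts_version=$(printf '%s\n' "$ls_remote" \
  | grep 'Latest LTS' \
  | grep -oE 'v[0-9]+\.[0-9]+\.[0-9]+' \
  | head -n 1)

echo "$lts_version"
```

The exact layout of the marker line can vary between nvm releases, so treat the pattern as an assumption to verify against your own listing.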
Usually, nvm will switch to use the most recently installed version. You can tell nvm to use the version you just downloaded by typing:
- nvm use 8.11.1
When you install Node.js using nvm, the executable is called node. You can see the version currently being used by the shell by typing:
- node -v
Outputv8.11.1
If you have multiple Node.js versions, you can see what is installed by typing:
- nvm ls
If you wish to default to one of the versions, type:
- nvm alias default 8.11.1
This version will be automatically selected when a new session spawns. You can also reference it by the alias like this:
- nvm use default
Each version of Node.js will keep track of its own packages and has npm available to manage these.
You can also have npm install packages to the Node.js project's ./node_modules directory. Use the following syntax to install the express module:
- npm install express
If you'd like to install the module globally, making it available to other projects using the same version of Node.js, you can add the -g flag:
- npm install -g express
This will install the package in:
~/.nvm/versions/node/node_version/lib/node_modules/express
Installing the module globally will let you run commands from the command line, but you'll have to link the package into your local sphere to require it from within a program:
- npm link express
You can learn more about the options available to you with nvm by typing:
- nvm help
You can uninstall Node.js using apt or nvm, depending on the version you want to target. To remove versions installed from the repositories or from the PPA, you will need to work with the apt utility at the system level.
To remove either of these versions, type the following:
- sudo apt remove nodejs
This command will remove the package and the configuration files.
To uninstall a version of Node.js that you have enabled using nvm, first determine whether or not the version you would like to remove is the current active version:
- nvm current
If the version you are targeting is not the currently active version, you can run:
- nvm uninstall node_version
This command will uninstall the selected version of Node.js.
If the version you would like to remove **is** the current active version, you must first deactivate nvm to enable your changes:
- nvm deactivate
You can now uninstall the current version using the uninstall command above, which will remove all files associated with the targeted version of Node.js except the cached files that can be used for reinstallation.
There are a number of ways to get up and running with Node.js on your Debian 9 server. Your circumstances will dictate which of the above methods is best for your needs. While using the packaged version in Debian's repository is an option for experimentation, installing from a PPA and working with npm or nvm offers additional flexibility.
Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They're similar to virtual machines, but containers are more portable, easier to use, and more dependent on the host operating system.
For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.
In this tutorial, you'll install and use Docker Community Edition (CE) on Debian 9. You'll install Docker itself, work with containers and images, and push an image to a Docker Repository.
To follow this tutorial, you will need the following:
The Docker installation package available in the official Debian repository may not be the latest version. To ensure we get the latest version, we'll install Docker from the official Docker repository. To do that, we'll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.
First, update your existing list of packages:
- sudo apt update
Next, install a few prerequisite packages which let apt use packages over HTTPS:
- sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common
Then add the GPG key for the official Docker repository to your system:
- curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
Add the Docker repository to APT sources:
- sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
Next, update the package database with the Docker packages from the newly added repo:
- sudo apt update
Make sure you are about to install from the Docker repo instead of the default Debian repo:
- apt-cache policy docker-ce
You'll see output like this, although the version number for Docker may be different:
docker-ce:
Installed: (none)
Candidate: 18.06.1~ce~3-0~debian
Version table:
18.06.1~ce~3-0~debian 500
500 https://download.docker.com/linux/debian stretch/stable amd64 Packages
Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Debian 9 (stretch).
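The same check can be automated by parsing the Candidate line. A minimal sketch, with a trimmed sample standing in for the real apt-cache policy docker-ce output:

```shell
# Extract the candidate version from "apt-cache policy docker-ce"-style
# output to confirm the package would come from the Docker repository.
policy='docker-ce:
  Installed: (none)
  Candidate: 18.06.1~ce~3-0~debian'

candidate=$(printf '%s\n' "$policy" | awk '/Candidate:/ { print $2 }')

echo "$candidate"
```

A candidate string containing "debian" and a Docker-style version (rather than "(none)") is the signal you are looking for before running the install.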
Finally, install Docker:
- sudo apt install docker-ce
Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it's running:
- sudo systemctl status docker
The output should be similar to the following, showing that the service is active and running:
Output● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago
Docs: https://docs.docker.com
Main PID: 21319 (dockerd)
CGroup: /system.slice/docker.service
├─21319 /usr/bin/dockerd -H fd://
└─21326 docker-containerd --config /var/run/docker/containerd/containerd.toml
Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We'll explore how to use the docker command later in this tutorial.
By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you'll get output like this:
Outputdocker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.
If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:
- sudo usermod -aG docker ${USER}
To apply the new group membership, log out of the server and back in, or type the following:
- su - ${USER}
You will be prompted to enter your user's password to continue.
Confirm that your user is now added to the docker group by typing:
- id -nG
Outputsammy sudo docker
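This membership check is easy to script by scanning the group list for the word docker. A minimal sketch, with a sample string standing in for real id -nG output:

```shell
# Check whether "docker" appears in a space-separated group list, as
# printed by "id -nG". The sample stands in for the real output.
groups_output='sammy sudo docker'

case " $groups_output " in
  *" docker "*) in_docker_group=yes ;;
  *)            in_docker_group=no ;;
esac

echo "$in_docker_group"
```

Padding the string with spaces before matching avoids false positives from group names that merely contain "docker".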
If you need to add a user to the docker group that you're not logged in as, declare that username explicitly using:
The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.
Let's explore the docker command next.
Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:
- docker [option] [command] [arguments]
To view all available subcommands, type:
- docker
As of Docker 18, the complete list of available subcommands includes:
Output
attach Attach local standard input, output, and error streams to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save one or more images to a tar archive (streamed to STDOUT by default)
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
version Show the Docker version information
wait Block until one or more containers stop, then print their exit codes
To view the options available to a specific command, type:
- docker docker-subcommand --help
To view system-wide information about Docker, use:
- docker info
Let's explore some of these commands. We'll start by working with images.
Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you'll need will have images hosted there.
To check whether you can access and download images from Docker Hub, type:
- docker run hello-world
The output will indicate that Docker is working correctly:
OutputUnable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9db2ca6ccae0: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
Docker was initially unable to find the hello-world image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image was downloaded, Docker created a container from the image and ran the application within the container, displaying the message.
You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:
- docker search ubuntu
The script will crawl Docker Hub and return a listing of all images whose name matches the search string. In this case, the output will be similar to this:
OutputNAME DESCRIPTION STARS OFFICIAL AUTOMATED
ubuntu Ubuntu is a Debian-based Linux operating sys… 8320 [OK]
dorowu/ubuntu-desktop-lxde-vnc Ubuntu with openssh-server and NoVNC 214 [OK]
rastasheep/ubuntu-sshd Dockerized SSH service, built on top of offi… 170 [OK]
consol/ubuntu-xfce-vnc Ubuntu container with "headless" VNC session… 128 [OK]
ansible/ubuntu14.04-ansible Ubuntu 14.04 LTS with ansible 95 [OK]
ubuntu-upstart Upstart is an event-based replacement for th… 88 [OK]
neurodebian NeuroDebian provides neuroscience research s… 53 [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5 ubuntu-16-nginx-php-phpmyadmin-mysql-5 43 [OK]
ubuntu-debootstrap debootstrap --variant=minbase --components=m… 39 [OK]
nuagebec/ubuntu Simple always updated Ubuntu docker images w… 23 [OK]
tutum/ubuntu Simple Ubuntu docker images with SSH access 18
i386/ubuntu Ubuntu is a Debian-based Linux operating sys… 13
1and1internet/ubuntu-16-apache-php-7.0 ubuntu-16-apache-php-7.0 12 [OK]
ppc64le/ubuntu Ubuntu is a Debian-based Linux operating sys… 12
eclipse/ubuntu_jdk8 Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, … 6 [OK]
darksheer/ubuntu Base Ubuntu Image -- Updated hourly 4 [OK]
codenvy/ubuntu_jdk8 Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, … 4 [OK]
1and1internet/ubuntu-16-nginx-php-5.6-wordpress-4 ubuntu-16-nginx-php-5.6-wordpress-4 3 [OK]
pivotaldata/ubuntu A quick freshening-up of the base Ubuntu doc… 2
1and1internet/ubuntu-16-sshd ubuntu-16-sshd 1 [OK]
ossobv/ubuntu Custom ubuntu image from scratch (based on o… 0
smartentry/ubuntu ubuntu with smartentry 0 [OK]
1and1internet/ubuntu-16-healthcheck ubuntu-16-healthcheck 0 [OK]
pivotaldata/ubuntu-gpdb-dev Ubuntu images for GPDB development 0
paasmule/bosh-tools-ubuntu Ubuntu based bosh-cli 0 [OK]
...
In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you have identified the image that you would like to use, you can download it to your computer using the pull subcommand.
Execute the following command to download the official ubuntu image to your computer:
- docker pull ubuntu
Você verá o seguinte resultado:
OutputUsing default tag: latest
latest: Pulling from library/ubuntu
6b98dfc16071: Pull complete
4001a1209541: Pull complete
6319fc68c576: Pull complete
b24603670dc3: Pull complete
97f170c87c6f: Pull complete
Digest: sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d
Status: Downloaded newer image for ubuntu:latest
After an image has been downloaded, you can then run a container using the downloaded image with the run subcommand. As you saw with the hello-world example, if an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it.
To see the images that have been downloaded to your computer, type:
- docker images
The output should look similar to the following:
OutputREPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 16508e5c265d 13 days ago 84.1MB
hello-world latest 2cb0d9787c4d 7 weeks ago 1.85kB
As you will see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (*pushed* is the technical term) to Docker Hub or other Docker registries.
Let's look at how to run containers in more detail.
The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, just easier to use.
As an example, let's run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:
- docker run -it ubuntu
Your command prompt should change to reflect the fact that you are now working inside the container, and it should take this form:
Outputroot@d9b100f2f636:/#
Note the container id in the command prompt. In this example, it is d9b100f2f636. You'll need that container ID later to identify the container when you want to remove it.
Now you can run any command inside the container. For example, let's update the package database inside the container. You don't need to prefix any command with sudo, because you are operating inside the container as the root user:
- apt update
Then install any application in it. Let's install Node.js:
- apt install nodejs
This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:
- node -v
You'll see the version number displayed in your terminal:
Outputv8.10.0
Any changes you make inside the container only apply to that container.
To exit the container, type exit at the prompt.
Let's look at managing the containers on our system next.
After using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:
- docker ps
You will see output similar to the following:
OutputCONTAINER ID IMAGE COMMAND CREATED
In this tutorial, you started two containers: one from the hello-world image and another from the ubuntu image. Both containers are no longer running, but they still exist on your system.
To view all containers, active and inactive, run docker ps with the -a switch:
- docker ps -a
You'll see output similar to this:
d9b100f2f636 ubuntu "/bin/bash" About an hour ago Exited (0) 8 minutes ago sharp_volhard
01c950718166 hello-world "/hello" About an hour ago Exited (0) About an hour ago festive_williams
To view the last container you created, pass the -l switch:
- docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9b100f2f636 ubuntu "/bin/bash" About an hour ago Exited (0) 10 minutes ago sharp_volhard
To start a stopped container, use docker start, followed by the container ID or the container's name. Let's start the Ubuntu-based container with the ID of d9b100f2f636:
- docker start d9b100f2f636
The container will start, and you can use docker ps to see its status:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9b100f2f636 ubuntu "/bin/bash" About an hour ago Up 8 seconds sharp_volhard
To stop a running container, use docker stop, followed by the container ID or name. This time, we'll use the name that Docker assigned the container, which is sharp_volhard:
- docker stop sharp_volhard
Once you've decided you no longer need a container, remove it with the docker rm command, again using either the container ID or the name. Use the docker ps -a command to find the container ID or name for the container associated with the hello-world image, and remove it:
- docker rm festive_williams
You can start a new container and give it a name using the --name switch. You can also use the --rm switch to create a container that removes itself when it's stopped. See the docker run help command for more information on these options and others.
Containers can be turned into images which you can use to build new containers. Let's look at how that works.
When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.
This section shows you how to save the state of a container as a new Docker image.
After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it. You might want to reuse this Node.js container as the basis for new images later.
To do so, commit the changes to a new Docker image instance using the following command:
- docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name
The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username.
For example, for the user sammy, with the container ID of d9b100f2f636, the command would be:
- docker commit -m "added Node.js" -a "sammy" d9b100f2f636 sammy/ubuntu-nodejs
When you *commit* an image, the new image is saved locally on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry so others can access it.
Listing the Docker images again will show the new image, as well as the old one that it was derived from:
- docker images
You'll see output like this:
OutputREPOSITORY TAG IMAGE ID CREATED SIZE
sammy/ubuntu-nodejs latest 7c1f35226ca6 7 seconds ago 179MB
ubuntu latest 113a43faa138 4 weeks ago 81.2MB
hello-world latest e38bc07ac18e 2 months ago 1.85kB
In this example, ubuntu-nodejs is the new image, which was derived from the existing ubuntu image from Docker Hub. The size difference reflects the changes that were made, and in this example the change was that NodeJS was installed. So the next time you need to run a container using Ubuntu with NodeJS pre-installed, you can just use the new image.
You can also build images from a Dockerfile, which lets you automate the installation of software in a new image. However, that's outside the scope of this tutorial.
Now let's share the new image with others so they can create containers from it.
The next logical step after creating a new image from an existing image is to share it, whether with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.
This section shows you how to push a Docker image to Docker Hub. To learn how to create your own private Docker registry, check out How To Set Up a Private Docker Registry on Ubuntu 14.04.
To push your image, first log into Docker Hub:
- docker login -u docker-registry-username
You'll be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.
Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:
- docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs
Then you may push your own image using:
- docker push docker-registry-username/docker-image-name
To push the ubuntu-nodejs image to the sammy repository, the command would be:
- docker push sammy/ubuntu-nodejs
The process may take some time to complete as it uploads the images, but when completed, the output will look like this:
OutputThe push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Pushed
5f70bf18a086: Pushed
a3b5c80a4eba: Pushed
7f18b442972b: Pushed
3ce512daaf78: Pushed
7aae4540b42d: Pushed
...
After pushing an image to a registry, it should be listed on your account's dashboard, like that shown in the image below.
If a push attempt results in an error of this sort, then you likely did not log in:
OutputThe push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required
Log in with docker login and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.
You can now use docker pull sammy/ubuntu-nodejs to pull the image to a new machine and use it to run a new container.
In this tutorial you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub. Now that you know the basics, explore the other Docker tutorials in the DigitalOcean Community.
]]>SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a Debian server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
In this guide, we'll focus on setting up SSH keys for a vanilla Debian 9 installation. SSH keys provide an easy, secure way of logging into your server and are recommended for all users.
The first step is to create a key pair on the client machine (usually your computer):
- ssh-keygen
By default, ssh-keygen will create a 2048-bit RSA key pair, which is secure enough for most use cases (you may optionally pass in the -b 4096 flag to create a larger 4096-bit key).
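If you ever need to script this step, the same key pair can be generated without any prompts. The sketch below is illustrative only: the temporary directory and the empty passphrase (-N '') are demonstration choices, and an empty passphrase is not recommended for real keys.

```shell
# Generate a 4096-bit RSA key pair non-interactively into a scratch directory.
# -q suppresses output, -N '' sets an empty passphrase (demo only),
# -f names the private key file; the public key gets a .pub suffix.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -b 4096 -N '' -f "$tmp/id_rsa"

# Both halves of the pair now exist side by side.
ls "$tmp"
grep -c '^ssh-rsa ' "$tmp/id_rsa.pub"   # public key lines start with "ssh-rsa"
```

This is handy for provisioning scripts, but for an interactive workstation the plain `ssh-keygen` prompt flow above is the safer habit.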
After entering the command, you should see the following output:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):
Press ENTER to save the key pair into the .ssh/ subdirectory of your home directory, or specify an alternate path.
If you had previously generated an SSH key pair, you may see the following prompt:
Output/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will **no** longer be able to authenticate using the previous key. Be very careful when confirming the overwrite, as this is a destructive process that cannot be reversed.
You should then see the following prompt:
OutputEnter passphrase (empty for no passphrase):
Here, you can optionally enter a secure passphrase, which is highly recommended. A passphrase adds an additional layer of security to prevent unauthorized users from logging in. To learn more about security, consult our tutorial on How To Configure SSH Key-Based Authentication on a Linux Server.
You should then see the following output:
OutputYour identification has been saved in /your_home/.ssh/id_rsa.
Your public key has been saved in /your_home/.ssh/id_rsa.pub.
The key fingerprint is:
a9:49:2e:2a:5e:33:3e:a9:de:4e:77:11:58:b6:90:26 username@remote_host
The key's randomart image is:
+--[ RSA 2048]----+
| ..o |
| E o= . |
| o. o |
| .. |
| ..S |
| o o. |
| =o.+. |
|. =++.. |
|o=++. |
+-----------------+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH-key-based authentication to log in.
The quickest way to copy your public key to the Debian host is to use a utility called ssh-copy-id. Due to its simplicity, this method is highly recommended if available. If you do not have ssh-copy-id available on your client machine, you may use one of the two alternative methods provided in this section (copying via password-based SSH, or manually copying the key).
ssh-copy-id
The ssh-copy-id tool is included by default in many operating systems, so you may have it available on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you simply need to specify the remote host that you would like to connect to, and the user account that you have password-based SSH access to. This is the account to which your public SSH key will be copied.
The syntax is:
- ssh-copy-id username@remote_host
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER to continue.
Next, the utility will scan your local account for the id_rsa.pub key that we created earlier. When it finds the key, it will prompt you for the password of the remote user's account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Type in the password (your typing will not be displayed, for security purposes) and press ENTER. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub key into a file called authorized_keys in the remote account's ~/.ssh directory.
You should see the following output:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub key has been uploaded to the remote account. You can continue on to Step 3.
If you do not have ssh-copy-id available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.
We can do this by using the cat command to read the contents of the public SSH key on our local computer and piping that through an SSH connection to the remote server.
On the other side, we can make sure that the ~/.ssh directory exists and has the correct permissions under the account we are using.
We can then output the content we piped over into a file called authorized_keys within this directory. We'll use the >> redirect symbol to append the content instead of overwriting it, which lets us add keys without destroying previously added keys.
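The difference between > (overwrite) and >> (append) is easy to verify locally. Here is a quick sketch using a throwaway file, which mirrors why >> is the safe choice for authorized_keys:

```shell
tmp=$(mktemp)

# ">" truncates the file on every write, so only the last line survives.
echo "key-one" >  "$tmp"
echo "key-two" >  "$tmp"
cat "$tmp"            # prints only: key-two

# ">>" appends, so existing content is preserved and the new
# line lands at the end, exactly what we want for stored keys.
echo "key-one" >> "$tmp"
cat "$tmp"            # prints: key-two, then key-one
```

Had the command below used > instead of >>, a second upload would silently wipe any keys already authorized on the server.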
The full command looks like this:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER to continue.
Afterwards, you should be prompted to enter the remote user account's password:
Outputusername@203.0.113.1's password:
After entering your password, the content of your id_rsa.pub key will be copied to the end of the authorized_keys file of the remote user's account. Continue on to Step 3 if this was successful.
If you do not have password-based SSH access to your server available, you will have to complete the process above manually.
We will manually append the content of your id_rsa.pub file to the ~/.ssh/authorized_keys file on your remote machine.
To display the content of your id_rsa.pub key, type this into your local computer:
- cat ~/.ssh/id_rsa.pub
You will see the key's content, which should look something like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test
Access your remote host using whichever method you have available.
Once you have access to your account on the remote server, you should make sure the ~/.ssh directory exists. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys file within this directory. You can add the contents of your id_rsa.pub file to the end of the authorized_keys file, creating it if necessary, using this command:
- echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute public_key_string with the output from the cat ~/.ssh/id_rsa.pub command that you executed on your local system. It should start with ssh-rsa AAAA....
Finally, we'll ensure that the ~/.ssh directory and the authorized_keys file have the appropriate permissions set:
- chmod -R go= ~/.ssh
This recursively removes all “group” and “other” permissions for the ~/.ssh/ directory.
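You can see what the go= symbolic mode does on a scratch directory; the stat calls below are the GNU coreutils version shipped with Debian, and the paths are demonstration choices only:

```shell
# Build a mock .ssh layout with deliberately loose permissions.
tmp=$(mktemp -d)
mkdir "$tmp/.ssh"
touch "$tmp/.ssh/authorized_keys"
chmod 755 "$tmp/.ssh"
chmod 644 "$tmp/.ssh/authorized_keys"

# "go=" assigns an *empty* permission set to group and other,
# recursively, while leaving the owner's bits untouched.
chmod -R go= "$tmp/.ssh"

stat -c '%a' "$tmp/.ssh"                   # 700
stat -c '%a' "$tmp/.ssh/authorized_keys"   # 600
```

The resulting 700/600 modes are exactly what sshd expects: by default it refuses to use an authorized_keys file that group or other users can write to.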
If you're using the root account to set up keys for a user account, it's also important that the ~/.ssh directory belongs to the user and not to root:
- chown -R sammy:sammy ~/.ssh
In this tutorial our user is named sammy, but you should substitute the appropriate username into the above command.
We can now attempt passwordless authentication with our Debian server.
If you have successfully completed one of the procedures above, you should be able to log into the remote host without the remote account's password.
The basic process is the same:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Type “yes” and then press ENTER to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created the key, you will be prompted to enter it now (note that your keystrokes will not display in the terminal session, for security). After authenticating, a new shell session should open for you with the configured account on the Debian server.
If key-based authentication was successful, continue on to learn how to further secure your system by disabling password authentication.
If you were able to log into your account using SSH without a password, you have successfully configured SSH-key-based authentication for your account. However, your password-based authentication mechanism is still active, which means that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH-key-based authentication configured for the root account on this server or, preferably, for a non-root account on this server with sudo privileges. This step will lock down password-based logins, so it is crucial to ensure that you will still be able to get administrative access.
Once you've confirmed that your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo privileges. Then, open up the SSH daemon's configuration file:
- sudo nano /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication. It may be commented out. Uncomment the line and set the value to “no”. This will disable your ability to log in via SSH using account passwords:
...
PasswordAuthentication no
...
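If you'd rather make this edit non-interactively, a sed one-liner can uncomment the directive and force its value. The sketch below runs against a temporary copy so you can see the transformation safely; to apply it for real you would point it at /etc/ssh/sshd_config (with sudo, and after backing the file up first):

```shell
# Work on a scratch copy that mimics the stock commented-out directive.
cfg=$(mktemp)
echo '#PasswordAuthentication yes' > "$cfg"

# Strip the leading "#" if present and set the value to "no".
sed -i -E 's/^#?PasswordAuthentication .*/PasswordAuthentication no/' "$cfg"

grep '^PasswordAuthentication' "$cfg"   # prints: PasswordAuthentication no
```

The -i flag edits in place, which is why a backup is worth having before running this against the live sshd_config.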
Save and close the file when you are finished by pressing CTRL + X, then Y to confirm saving the file, and finally ENTER to exit nano. To actually implement these changes, you need to restart the sshd service:
- sudo systemctl restart ssh
As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing this session:
- ssh username@remote_host
Once you have verified your SSH service, you can safely close all current server sessions.
The SSH daemon on your Debian server now only responds to SSH keys. Password-based authentication has been successfully disabled.
You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.
If you'd like to learn more about working with SSH, take a look at our SSH Essentials Guide.
]]>Do you want to access the Internet safely and securely from your smartphone or laptop when connected to an untrusted network, such as the WiFi of a hotel or coffee shop? A Virtual Private Network (VPN) allows you to traverse untrusted networks privately and securely, as if you were on a private network. The traffic emerges from the VPN server and continues its journey to the destination.
When combined with HTTPS connections, this setup allows you to secure your wireless logins and transactions. You can circumvent geographical restrictions and censorship, and shield your location and any unencrypted HTTP traffic from the untrusted network.
OpenVPN is a full-featured, open-source Secure Socket Layer (SSL) VPN solution that accommodates a wide range of configurations. In this tutorial, you will set up an OpenVPN server on a Debian 9 server and then configure access to it from Windows, macOS, iOS, or Android. This tutorial will keep the installation and configuration steps as simple as possible for each of these setups.
Note: If you plan to set up an OpenVPN server on a DigitalOcean Droplet, be aware that we, like many hosting providers, charge for bandwidth overages. For this reason, please be mindful of how much traffic your server is handling.
See this page for more info.
To complete this tutorial, you will need access to a Debian 9 server to host your OpenVPN service. You will need to configure a non-root user with sudo privileges before you start this guide. You can follow our Debian 9 initial server setup guide to set up a user with the appropriate permissions. The linked tutorial will also set up a firewall, which this guide assumes is in place.
Additionally, you will need a separate machine to serve as your certificate authority (CA). While it's technically possible to use your OpenVPN server or your local machine as your CA, this is not recommended, as it opens up your VPN to some security vulnerabilities. Per the official OpenVPN documentation, you should place your CA on a standalone machine that's dedicated to importing and signing certificate requests. For this reason, this guide assumes that your CA is on a separate Debian 9 server that also has a non-root user with sudo privileges and a basic firewall.
Please note that if you disable password authentication while configuring these servers, you may run into difficulties when transferring files between them later on in this guide. To resolve this issue, you could re-enable password authentication on each server. Alternatively, you could generate an SSH key pair for each server, then add the OpenVPN server's public SSH key to the authorized_keys file, and vice versa. See How to Set Up SSH Keys on Debian 9 for instructions on how to implement either of these solutions.
Once you have these prerequisites in place, you can move on to Step 1 of this tutorial.
To start off, update your VPN server's package index and install OpenVPN. OpenVPN is available in Debian's default repositories, so you can use apt for the installation:
- sudo apt update
- sudo apt install openvpn
OpenVPN is a TLS/SSL VPN. This means that it utilizes certificates in order to encrypt traffic between the server and clients. To issue trusted certificates, you will set up your own simple certificate authority (CA). To do this, we will download the latest version of EasyRSA, which we will use to build our CA public key infrastructure (PKI), from the project's official GitHub repository.
As mentioned in the prerequisites, we will build the CA on a standalone server. The reasoning behind this approach is that, if an attacker were able to infiltrate your server, they would be able to access your CA's private key and use it to sign new certificates, giving them access to your VPN. Accordingly, managing the CA from a standalone machine helps to prevent unauthorized users from accessing your VPN. Note, also, that it's recommended that you keep the CA server turned off when it's not being used to sign keys, as an additional precautionary measure.
To begin building the CA and PKI infrastructure, use wget to download the latest version of EasyRSA on both your CA machine and your OpenVPN server. To get the latest version, go to the Releases page on the official EasyRSA GitHub project, copy the download link for the file ending in .tgz, and then paste it into the following command:
- wget -P ~/ https://github.com/OpenVPN/easy-rsa/releases/download/v3.0.4/EasyRSA-3.0.4.tgz
Then extract the tarball:
- cd ~
- tar xvf EasyRSA-3.0.4.tgz
You have now successfully installed the required software on your server and CA machine. Continue on to configure the variables used by EasyRSA and to set up a CA directory, from which you will generate the keys and certificates needed for your server and clients to access the VPN.
EasyRSA comes with a configuration file which you can edit to define a number of variables for your CA.
On your CA machine, navigate to the EasyRSA directory:
- cd ~/EasyRSA-3.0.4/
Dentro de este directorio, hay un archivo llamado vars.example
. Haga una copia de este archivo y asigne a esta el nombre vars
sin agregar una extensión:
- cp vars.example vars
Abra este archivo nuevo con su editor de texto preferido:
- nano vars
Encuentre los ajustes que establecen los valores de campos predeterminados para nuevos certificados. El aspecto será similar a este:
. . .
#set_var EASYRSA_REQ_COUNTRY "US"
#set_var EASYRSA_REQ_PROVINCE "California"
#set_var EASYRSA_REQ_CITY "San Francisco"
#set_var EASYRSA_REQ_ORG "Copyleft Certificate Co"
#set_var EASYRSA_REQ_EMAIL "me@example.net"
#set_var EASYRSA_REQ_OU "My Organizational Unit"
. . .
Uncomment these lines and update the highlighted values to whatever you'd prefer, but do not leave them blank:
. . .
set_var EASYRSA_REQ_COUNTRY "US"
set_var EASYRSA_REQ_PROVINCE "NewYork"
set_var EASYRSA_REQ_CITY "New York City"
set_var EASYRSA_REQ_ORG "DigitalOcean"
set_var EASYRSA_REQ_EMAIL "admin@example.com"
set_var EASYRSA_REQ_OU "Community"
. . .
When you are finished, save and close the file.
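If you prefer to make these edits non-interactively, a small sed loop can uncomment and set each field in your copy of vars. This is a sketch under an assumption: it expects the stock vars.example formatting shown above, where each default line starts with "#set_var":

```shell
# Copy the template, then uncomment and override one field at a time.
# Extend the list with the remaining EASYRSA_REQ_* fields as needed.
cp vars.example vars
for kv in 'EASYRSA_REQ_COUNTRY=US' 'EASYRSA_REQ_PROVINCE=NewYork'; do
  key=${kv%%=*}
  val=${kv#*=}
  # Turn '#set_var KEY "old"' into 'set_var KEY "new"'
  sed -i "s|^#*set_var[[:space:]]*${key}.*|set_var ${key} \"${val}\"|" vars
done
grep '^set_var' vars
```

The final grep simply shows which lines are now active so you can verify the result before proceeding.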
Within the EasyRSA directory is a script called easyrsa, which is used to perform a variety of tasks involved with building and managing the CA. Run this script with the init-pki option to initiate the public key infrastructure on the CA server:
- ./easyrsa init-pki
Output. . .
init-pki complete; you may now create a CA or requests.
Your newly created PKI dir is: /home/sammy/EasyRSA-3.0.4/pki
Then, run the easyrsa script again, following it with the build-ca option. This will build the CA and create two important files, ca.crt and ca.key, which make up the public and private sides of an SSL certificate.
ca.crt is the CA's public certificate file which, in the context of OpenVPN, the server and the client use to inform each other that they are part of the same web of trust and not an unknown attacker. For this reason, your server and all of your clients will need a copy of the ca.crt file.
ca.key is the private key which the CA machine uses to sign keys and certificates for servers and clients. If an attacker gains access to your CA, and in turn your ca.key file, they will be able to sign certificate requests and gain access to your VPN, impeding its security. This is why your ca.key file should be on your CA machine only and why, ideally, your CA machine should be kept offline when not signing certificate requests, as an extra security measure.
If you don't want to be prompted for a password every time you interact with your CA, you can run the build-ca command with the nopass option, like this:
- ./easyrsa build-ca nopass
In the output, you will be asked to confirm the common name of your CA:
Output. . .
Common Name (eg: your user, host, or server name) [Easy-RSA CA]:
The common name is the name used to refer to this machine in the context of the Certificate Authority. You can enter any string of characters for the CA's common name but, for simplicity's sake, press ENTER to accept the default name.
With that, your CA is in place and ready to start signing certificate requests.
Now that you have a CA ready to go, you can generate a private key and certificate request from your server, then transfer the request over to your CA to be signed, creating the required certificate. You will also create some additional files used during the encryption process.
Start by navigating to the EasyRSA directory on your OpenVPN server:
- cd EasyRSA-3.0.4/
From there, run the easyrsa script with the init-pki option. Although you already ran this command on the CA machine, it's necessary to run it here because your server and CA will have separate PKI directories:
- ./easyrsa init-pki
Then call the easyrsa script again, this time with the gen-req option followed by a common name for the machine. Again, this could be anything you like, but it can be helpful to make it something descriptive. Throughout this tutorial, the OpenVPN server's common name will simply be "server". Be sure to include the nopass option as well. Failing to do so will password-protect the request file, which could lead to permissions issues later on:
Note: If you choose a name other than "server" here, you will have to adjust some of the instructions below. For instance, when copying the generated files to the /etc/openvpn directory, you will have to substitute the correct names. You will also have to modify the /etc/openvpn/server.conf file later to point to the correct .crt and .key files.
- ./easyrsa gen-req server nopass
This will create a private key for the server and a certificate request file called server.req. Copy the server key to the /etc/openvpn/ directory:
- sudo cp ~/EasyRSA-3.0.4/pki/private/server.key /etc/openvpn/
Using a secure method (like SCP, in the example below), transfer the server.req file to your CA machine:
- scp ~/EasyRSA-3.0.4/pki/reqs/server.req sammy@your_CA_ip:/tmp
Next, on your CA machine, navigate to the EasyRSA directory:
- cd EasyRSA-3.0.4/
Using the easyrsa script again, import the server.req file, following the file path with its common name:
- ./easyrsa import-req /tmp/server.req server
Then sign the request by running the easyrsa script with the sign-req option, followed by the request type and the common name. The request type can either be client or server, so for the OpenVPN server's certificate request, be sure to use the server request type:
- ./easyrsa sign-req server server
In the output, you'll be asked to verify that the request comes from a trusted source. Type yes and then press ENTER to confirm this:
You are about to sign the following certificate.
Please check over the details shown below for accuracy. Note that this request
has not been cryptographically verified. Please be sure it came from a trusted
source or that you have verified the request checksum with the sender.
Request subject, to be signed as a server certificate for 3650 days:
subject=
commonName = server
Type the word 'yes' to continue, or any other input to abort.
Confirm request details: yes
If you encrypted your CA key, you'll be prompted for your password at this point.
Next, transfer the signed certificate back to your VPN server using a secure method:
- scp pki/issued/server.crt sammy@your_server_ip:/tmp
Before logging out of your CA machine, transfer the ca.crt file to your server as well:
- scp pki/ca.crt sammy@your_server_ip:/tmp
Next, log back in to your OpenVPN server and copy the server.crt and ca.crt files into your /etc/openvpn/ directory:
- sudo cp /tmp/{server.crt,ca.crt} /etc/openvpn/
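The {server.crt,ca.crt} portion of that command uses shell brace expansion, which expands one path prefix into multiple arguments before cp ever runs. A quick, generic way to see what the shell will actually pass along (echo prints the expansion without touching any files):

```shell
# Brace expansion is purely textual; the files need not exist yet
echo /tmp/{server.crt,ca.crt}
# → /tmp/server.crt /tmp/ca.crt
echo backup-{mon,tue,wed}.tar.gz
# → backup-mon.tar.gz backup-tue.tar.gz backup-wed.tar.gz
```

Note that this is a bash feature; a minimal POSIX sh may not expand braces.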
Next, navigate to your EasyRSA directory:
- cd EasyRSA-3.0.4/
From there, create a strong Diffie-Hellman key to use during key exchange by typing:
- ./easyrsa gen-dh
This may take a few minutes to complete. Once it does, generate an HMAC signature to strengthen the server's TLS integrity verification capabilities:
- sudo openvpn --genkey --secret ta.key
When the command finishes, copy the two new files to your /etc/openvpn/ directory:
- sudo cp ~/EasyRSA-3.0.4/ta.key /etc/openvpn/
- sudo cp ~/EasyRSA-3.0.4/pki/dh.pem /etc/openvpn/
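At this point /etc/openvpn/ should hold the server key, both certificates, the HMAC key, and the Diffie-Hellman parameters. A short loop can confirm nothing was missed before moving on (a sketch; the file list matches the names used throughout this tutorial):

```shell
# Verify that every credential OpenVPN will need is in place
for f in ca.crt server.crt server.key ta.key dh.pem; do
  if [ -e "/etc/openvpn/$f" ]; then
    echo "ok: $f"
  else
    echo "MISSING: $f"
  fi
done
```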
With that, all the certificate and key files needed by your server have been generated. You're ready to create the corresponding certificates and keys which your client machine will use to access your OpenVPN server.
Although you can generate a private key and certificate request on your client machine and then send it to the CA to be signed, this guide outlines a process for generating the certificate request on the server. The benefit of this is that we can create a script which will automatically generate client configuration files that contain all of the required keys and certificates. This lets you avoid having to transfer keys, certificates, and configuration files to clients and streamlines the process of joining the VPN.
We will generate a single client key and certificate pair for this guide. If you have more than one client, you can repeat this process for each one. Please note, though, that you will need to pass a unique name value to the script for every client. Throughout this tutorial, the first certificate/key pair is referred to as "client1".
Get started by creating a directory structure within your home directory to store the client certificate and key files:
- mkdir -p ~/client-configs/keys
Since your clients' certificate/key pairs and configuration files will be stored in this directory, you should lock down its permissions now as a security measure:
- chmod -R 700 ~/client-configs
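The -R flag applies the mode recursively, so the keys directory inside inherits the same restriction. You can confirm the result with stat (GNU coreutils on Debian; %a prints the octal mode):

```shell
# Mode 700: the owner has full access; group and others have none
stat -c '%a %n' ~/client-configs ~/client-configs/keys
```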
Next, navigate back to the EasyRSA directory and run the easyrsa script with the gen-req and nopass options, along with the common name for the client:
- cd ~/EasyRSA-3.0.4/
- ./easyrsa gen-req client1 nopass
Press ENTER to confirm the common name. Then, copy the client1.key file to the ~/client-configs/keys/ directory you created earlier:
- cp pki/private/client1.key ~/client-configs/keys/
Next, transfer the client1.req file to your CA machine using a secure method:
- scp pki/reqs/client1.req sammy@your_CA_ip:/tmp
Log in to your CA machine, navigate to the EasyRSA directory, and import the certificate request:
- ssh sammy@your_CA_IP
- cd EasyRSA-3.0.4/
- ./easyrsa import-req /tmp/client1.req client1
Then sign the request as you did for the server in the previous step. This time, though, be sure to specify the client request type:
- ./easyrsa sign-req client client1
At the prompt, enter yes to confirm that you intend to sign the certificate request and that it came from a trusted source:
OutputType the word 'yes' to continue, or any other input to abort.
Confirm request details: yes
Again, if you encrypted your CA key, you'll be prompted for your password here.
This will create a client certificate file named client1.crt. Transfer this file back to the server:
- scp pki/issued/client1.crt sammy@your_server_ip:/tmp
SSH back to your OpenVPN server and copy the client certificate to the ~/client-configs/keys/ directory:
- cp /tmp/client1.crt ~/client-configs/keys/
Next, copy the ca.crt and ta.key files to the ~/client-configs/keys/ directory as well:
- sudo cp ~/EasyRSA-3.0.4/ta.key ~/client-configs/keys/
- sudo cp /etc/openvpn/ca.crt ~/client-configs/keys/
With that, your server's and client's certificates and keys have all been generated and are stored in the appropriate directories on your server. There are still a few actions that need to be performed with these files, but those will come in a later step. For now, you can move on to configuring OpenVPN on your server.
Now that both your client's and server's certificates and keys have been generated, you can begin configuring the OpenVPN service to use these credentials.
Start by copying a sample OpenVPN configuration file into the configuration directory and then extract it in order to use it as a basis for your setup:
- sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
- sudo gzip -d /etc/openvpn/server.conf.gz
Open the server configuration file in your preferred text editor:
- sudo nano /etc/openvpn/server.conf
Find the HMAC section by looking for the tls-auth directive. This line should already be uncommented, but if it isn't, remove the ";" to uncomment it:
tls-auth ta.key 0 # This file is secret
Next, find the section on cryptographic ciphers by looking for the commented-out cipher lines. The AES-256-CBC cipher offers a good level of encryption and is well supported. Again, this line should already be uncommented, but if it isn't, just remove the ";" preceding it:
cipher AES-256-CBC
Below this, add an auth directive to select the HMAC message digest algorithm. For this, SHA256 is a good choice:
auth SHA256
Next, find the line containing a dh directive, which defines the Diffie-Hellman parameters. Because of some recent changes made to EasyRSA, the filename for the Diffie-Hellman key may be different than what is listed in the example server configuration file. If necessary, change the filename listed here by removing the 2048 so it aligns with the key you generated in the previous step:
dh dh.pem
Finally, find the user and group settings and remove the ";" at the beginning of each line to uncomment them:
user nobody
group nogroup
The changes you've made to the sample server.conf file up to this point are necessary in order for OpenVPN to function. The changes outlined below are optional, though they too are needed for many common use cases.
The settings above will create the VPN connection between the two machines, but will not force any connections to use the tunnel. If you wish to use the VPN to route all of your traffic, you will likely want to push the DNS settings to the client computers.
To enable this functionality, you will need to change a few directives in the server.conf file. First, find the redirect-gateway section and remove the semicolon ";" from the beginning of the redirect-gateway line to uncomment it:
push "redirect-gateway def1 bypass-dhcp"
Just below this, find the dhcp-option section. Again, remove the ";" from in front of both lines to uncomment them:
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"
This will assist clients in reconfiguring their DNS settings to use the VPN tunnel as the default gateway.
By default, the OpenVPN server uses port 1194 and the UDP protocol to accept client connections. If you need to use a different port because of restrictive network environments that your clients might be in, you can change the port option. If you are not hosting web content on your OpenVPN server, port 443 is a popular choice since it is usually allowed through firewall rules.
# Optional!
port 443
Oftentimes, the protocol is restricted to that port as well. If so, change proto from UDP to TCP:
# Optional!
proto tcp
If you switch the protocol to TCP, you will need to change the explicit-exit-notify directive's value from 1 to 0, as this directive is only used by UDP. Failing to do so while using TCP will cause errors when you start the OpenVPN service:
# Optional!
explicit-exit-notify 0
If you have no need to use a different port and protocol, it is best to leave these two settings as their defaults.
If you selected a different name during the ./easyrsa gen-req command earlier, modify the cert and key lines that you see to point to the appropriate .crt and .key files. If you used the default name, "server", this is already set correctly:
cert server.crt
key server.key
When you are finished, save and close the file.
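For reference, the required (non-optional) edits from this step leave the following directives active in /etc/openvpn/server.conf. This is a condensed view assuming the tutorial's default names; your file will contain many other default lines around these:

```
tls-auth ta.key 0
cipher AES-256-CBC
auth SHA256
dh dh.pem
user nobody
group nogroup
cert server.crt
key server.key
```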
After going through and making whatever changes to your OpenVPN server's configuration are required for your specific use case, you can begin making some changes to your server's networking.
There are some aspects of the server's networking configuration that need to be tweaked so that OpenVPN can correctly route traffic through the VPN. The first of these is IP forwarding, a method for determining where IP traffic should be routed. This is essential to the VPN functionality that your server will provide.
To adjust your server's default IP forwarding setting, modify the /etc/sysctl.conf file:
- sudo nano /etc/sysctl.conf
Inside, look for the commented line that sets net.ipv4.ip_forward. Remove the "#" character from the beginning of the line to uncomment this setting:
net.ipv4.ip_forward=1
Save and close the file when you are finished.
To read the file and adjust the values for the current session, type:
- sudo sysctl -p
Outputnet.ipv4.ip_forward = 1
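If you'd rather not open an editor for this one-character change, the same uncomment can be done with a single sed command. This is a sketch under an assumption: it expects the stock Debian sysctl.conf, where the line reads exactly "#net.ipv4.ip_forward=1":

```shell
# Remove the leading '#' from the ip_forward line in place
sudo sed -i 's|^#\(net.ipv4.ip_forward=1\)|\1|' /etc/sysctl.conf
# Re-read the file so the change applies to the running system
sudo sysctl -p
```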
If you followed the Debian 9 initial server setup guide listed in the prerequisites, you should have a UFW firewall in place. Regardless of whether you use the firewall to block unwanted traffic (which you almost always should do), for this guide you need a firewall to manipulate some of the traffic coming into the server. Some of the firewall rules need to be modified to enable masquerading, an iptables concept that provides on-the-fly dynamic network address translation (NAT) to correctly route client connections.
Before opening the firewall configuration file to add the masquerading rules, you must first find the public network interface of your machine. To do this, type:
- ip route | grep default
Your public interface is the string found within this command's output that follows the word "dev". For example, this result shows the interface named eth0, which is highlighted below:
Outputdefault via 203.0.113.1 dev eth0 onlink
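If you want to capture that interface name in a variable for later use (for example, when editing the firewall rules below), you can extract the token after "dev" with awk. This is an optional convenience, not part of the original steps:

```shell
# Print the word that follows "dev" in the default route
iface=$(ip route | grep default | awk '{for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1)}')
echo "$iface"
```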
Once you have the interface associated with your default route, open the /etc/ufw/before.rules file to add the relevant configuration:
- sudo nano /etc/ufw/before.rules
UFW rules are typically added using the ufw command. Rules listed in the before.rules file, though, are read and put into place before the conventional UFW rules are loaded. Towards the top of the file, add the highlighted lines below. This will set the default policy for the POSTROUTING chain in the nat table and masquerade any traffic coming from the VPN. Remember to replace eth0 in the -A POSTROUTING line below with the interface you found in the above command:
#
# rules.before
#
# Rules that should be run before the ufw command line added rules. Custom
# rules should be added to one of these chains:
# ufw-before-input
# ufw-before-output
# ufw-before-forward
#
# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to eth0 (change to the interface you discovered!)
-A POSTROUTING -s 10.8.0.0/8 -o eth0 -j MASQUERADE
COMMIT
# END OPENVPN RULES
# Don't delete these required lines, otherwise there will be errors
*filter
. . .
Save and close the file when you are finished.
Next, you need to tell UFW to allow forwarded packets by default as well. To do this, open the /etc/default/ufw file:
- sudo nano /etc/default/ufw
Inside, find the DEFAULT_FORWARD_POLICY directive and change the value from DROP to ACCEPT:
DEFAULT_FORWARD_POLICY="ACCEPT"
Save and close the file when you are finished.
Next, adjust the firewall itself to allow traffic to OpenVPN. If you did not change the port and protocol in the /etc/openvpn/server.conf file, you will need to open up UDP traffic to port 1194. If you modified the port and/or protocol, substitute the values you selected here.
In case you forgot to add the SSH port when following the prerequisite tutorial, add it here as well:
- sudo ufw allow 1194/udp
- sudo ufw allow OpenSSH
After adding those rules, disable and re-enable UFW to restart it and load the changes from all of the files you've modified:
- sudo ufw disable
- sudo ufw enable
Your server is now configured to correctly handle OpenVPN traffic.
Finally, you're ready to start the OpenVPN service on your server. This is done using the systemd utility systemctl.
Start the OpenVPN server by specifying your configuration file name as an instance variable after the systemd unit file name. The configuration file for your server is called /etc/openvpn/server.conf, so add @server to the end of your unit file when calling it:
- sudo systemctl start openvpn@server
Double-check that the service has started successfully by typing:
- sudo systemctl status openvpn@server
If everything went well, your output will look something like this:
Output● openvpn@server.service - OpenVPN connection to server
Loaded: loaded (/lib/systemd/system/openvpn@.service; disabled; vendor preset: enabled)
Active: active (running) since Tue 2016-05-03 15:30:05 EDT; 47s ago
Docs: man:openvpn(8)
https://community.openvpn.net/openvpn/wiki/Openvpn23ManPage
https://community.openvpn.net/openvpn/wiki/HOWTO
Process: 5852 ExecStart=/usr/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/%i.conf --writepid /run/openvpn/%i.pid (code=exited, sta
Main PID: 5856 (openvpn)
Tasks: 1 (limit: 512)
CGroup: /system.slice/system-openvpn.slice/openvpn@server.service
└─5856 /usr/sbin/openvpn --daemon ovpn-server --status /run/openvpn/server.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/server.conf --writepid /run/openvpn/server.pid
You can also check that the OpenVPN tun0 interface is available by typing:
- ip addr show tun0
This will output a configured interface:
Output4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 100
link/none
inet 10.8.0.1 peer 10.8.0.2/32 scope global tun0
valid_lft forever preferred_lft forever
After starting the service, enable it so that it starts automatically at boot:
- sudo systemctl enable openvpn@server
Your OpenVPN service is now up and running. Before you can start using it, though, you must first create a configuration file for the client machine. This tutorial has already gone over how to create certificate/key pairs for clients, and in the next step we will demonstrate how to create an infrastructure that will generate client configuration files easily.
Creating configuration files for OpenVPN clients can be somewhat involved, as every client must have its own config and each must align with the settings outlined in the server's configuration file. Rather than walking through the process of writing a single configuration file that can only be used on one client, this step outlines a process for building a client configuration infrastructure which you can use to generate config files on the fly. You will first create a "base" configuration file, then build a script which will allow you to generate unique client config files, certificates, and keys as needed.
Get started by creating a new directory where you will store client configuration files within the client-configs directory you created earlier:
- mkdir -p ~/client-configs/files
Next, copy an example client configuration file into the client-configs directory to use as your base configuration:
- cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client-configs/base.conf
Open this new file in your preferred text editor:
- nano ~/client-configs/base.conf
Inside, locate the remote directive. This points the client to your OpenVPN server's address, i.e., the public IP address of your OpenVPN server. If you decided to change the port that the OpenVPN server is listening on, you will also need to change 1194 to the port you selected:
. . .
# The hostname/IP and port of the server.
# You can have multiple remote entries
# to load balance between the servers.
remote your_server_ip 1194
. . .
Be sure that the protocol matches the value you are using in the server configuration:
proto udp
Next, uncomment the user and group directives by removing the ";" at the beginning of each line:
# Downgrade privileges after initialization (non-Windows only)
user nobody
group nogroup
Find the directives that set the ca, cert, and key. Comment out these directives, since you will add the certs and keys within the file itself shortly:
# SSL/TLS parms.
# See the server config file for more
# description. It's best to use
# a separate .crt/.key file pair
# for each client. A single ca
# file can be used for all clients.
#ca ca.crt
#cert client.crt
#key client.key
Similarly, comment out the tls-auth directive, as you will add ta.key directly into the client configuration file:
# If a tls-auth key is used on the server
# then every client must also have the key.
#tls-auth ta.key 1
Mirror the cipher and auth settings that you set in the /etc/openvpn/server.conf file:
cipher AES-256-CBC
auth SHA256
Next, add the key-direction directive somewhere in the file. You must set this to "1" for the VPN to function correctly on the client machine:
key-direction 1
Finally, add a few commented-out lines. Although you can include these directives in every client configuration file, you only need to enable them for Linux clients that ship with an /etc/openvpn/update-resolv-conf file. This script uses the resolvconf utility to update DNS information for Linux clients.
# script-security 2
# up /etc/openvpn/update-resolv-conf
# down /etc/openvpn/update-resolv-conf
If your client is running Linux and has an /etc/openvpn/update-resolv-conf file, uncomment these lines from the client's configuration file after it has been generated.
Save and close the file when you are finished.
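For reference, after these edits the relevant portion of base.conf should look roughly like the fragment below. This is a condensed view assuming the tutorial's defaults, where your_server_ip stands in for your server's public IP address:

```
remote your_server_ip 1194
proto udp
user nobody
group nogroup
#ca ca.crt
#cert client.crt
#key client.key
#tls-auth ta.key 1
cipher AES-256-CBC
auth SHA256
key-direction 1
# script-security 2
# up /etc/openvpn/update-resolv-conf
# down /etc/openvpn/update-resolv-conf
```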
Next, create a simple script that will compile your base configuration with the relevant certificate, key, and encryption files, and then place the generated configuration in the ~/client-configs/files directory. Open a new file called make_config.sh within the ~/client-configs directory:
- nano ~/client-configs/make_config.sh
Inside, add the following content, making sure to change sammy to that of your server's non-root user account:
#!/bin/bash
# First argument: Client identifier
KEY_DIR=/home/sammy/client-configs/keys
OUTPUT_DIR=/home/sammy/client-configs/files
BASE_CONFIG=/home/sammy/client-configs/base.conf
cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn
Save and close the file when you are finished.
Before moving on, be sure to mark this file as executable by typing:
- chmod 700 ~/client-configs/make_config.sh
This script will make a copy of the base.conf file you made, collect all the certificate and key files you've created for your client, extract their contents, append them to the copy of the base configuration file, and export all of this content into a new client configuration file. This means that, rather than having to manage the client's configuration, certificate, and key files separately, all the required information is stored in one place. The benefit of this is that if you ever need to add a client in the future, you can just run this script to quickly create the config file and ensure that all the important information is stored in a single, easy-to-access location.
Keep in mind that any time you add a new client in the future, you will need to generate new keys and certificates for it before you can run this script and generate its configuration file. You will get some practice using this script in the next step.
If you followed along with the guide, you created a client certificate and key named client1.crt and client1.key, respectively, in Step 4. You can generate a config file for these credentials by moving into your ~/client-configs directory and running the script you made at the end of the previous step:
- cd ~/client-configs
- sudo ./make_config.sh client1
This will create a file named client1.ovpn in your ~/client-configs/files directory:
- ls ~/client-configs/files
Outputclient1.ovpn
You need to transfer this file to the device you plan to use as the client. For instance, this could be your local computer or a mobile device.
While the exact applications used to accomplish this transfer will depend on your device's operating system and your personal preferences, a dependable and secure method is to use SFTP (SSH file transfer protocol) or SCP (Secure Copy) on the backend. This will transport your client's VPN authentication files over an encrypted connection.
Here is an example SFTP command using the client1.ovpn example which you can run from your local computer (macOS or Linux). It places the .ovpn file in your home directory:
- sftp sammy@your_server_ip:client-configs/files/client1.ovpn ~/
Here are several tools and tutorials for securely transferring files from the server to a local computer:
This section covers how to install a client VPN profile on Windows, macOS, Linux, iOS, and Android. None of these client instructions are dependent on one another, so feel free to skip to whichever is applicable to your device.
The OpenVPN connection will have the same name as whatever you called the .ovpn file. In regards to this tutorial, this means that the connection is named client1.ovpn, aligning with the first client file you generated.
Installation
Download the OpenVPN client application for Windows from OpenVPN's Downloads page. Choose the appropriate installer version for your version of Windows.
Note: OpenVPN needs administrative privileges to install.
After installing OpenVPN, copy the .ovpn file to:
C:\Program Files\OpenVPN\config
When you launch OpenVPN, it will automatically see the profile and make it available.
You must run OpenVPN as an administrator each time it's used, even by administrative accounts. To do this without having to right-click and select Run as administrator every time you use the VPN, you must preset this from an administrative account. This also means that standard users will need to enter the administrator's password to use OpenVPN. On the other hand, standard users can't properly connect to the server unless the OpenVPN application on the client has admin rights, so the elevated privileges are necessary.
To set the OpenVPN application to always run as an administrator, right-click on its shortcut icon and go to Properties. At the bottom of the Compatibility tab, click the button to Change settings for all users. In the new window, check Run this program as an administrator.
Connection
Each time you launch the OpenVPN GUI, Windows will ask if you want to allow the program to make changes to your computer. Click Yes. Launching the OpenVPN client application only puts the applet in the system tray so that you can connect and disconnect the VPN as needed; it does not actually make the VPN connection.
Once OpenVPN is started, initiate a connection by going into the system tray applet and right-clicking on the OpenVPN applet icon. This opens the context menu. Select client1 at the top of the menu (that's your client1.ovpn profile) and choose Connect.
A status window will open showing the log output while the connection is established, and a message will show once the client is connected.
Disconnect from the VPN the same way: go into the system tray applet, right-click the OpenVPN applet icon, select the client profile, and click Disconnect.
Installation
Tunnelblick is a free, open source OpenVPN client for macOS. You can download the latest disk image from the Tunnelblick Downloads page. Double-click the downloaded .dmg file and follow the prompts to install.
Towards the end of the installation process, Tunnelblick will ask if you have any configuration files. To keep things simple, answer No and let Tunnelblick finish. Open a Finder window, locate client1.ovpn, and double-click it. Tunnelblick will install the client profile; administrator privileges are required.
Connection
Launch Tunnelblick by double-clicking it in the Applications folder. Once Tunnelblick has launched, there will be a Tunnelblick icon in the menu bar at the top right of the screen for controlling connections. Click the icon, and then the **Connect** menu item to initiate the VPN connection. Select the **client1** connection.
If you use Linux, there are a variety of tools available depending on your distribution. Your desktop environment or window manager may also include connection utilities.
The most universal way of connecting, however, is to simply use the OpenVPN software.
On Ubuntu or Debian, you can install it just as you did on the server by typing:
- sudo apt update
- sudo apt install openvpn
On CentOS, you can enable the EPEL repositories and then install it by typing:
- sudo yum install epel-release
- sudo yum install openvpn
Check to see if your distribution includes an /etc/openvpn/update-resolv-conf script:
- ls /etc/openvpn
Outputupdate-resolv-conf
Next, edit the OpenVPN client configuration file you transferred:
- nano client1.ovpn
If you were able to find an update-resolv-conf file, uncomment the three lines we added to adjust the DNS settings:
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
If you are using CentOS, change the group directive from nogroup to nobody to match the distribution's available groups:
group nobody
Save and close the file.
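After these edits, the DNS-related tail of a Linux client1.ovpn might look like the sketch below. This is not the complete file; the surrounding directives come from the profile generated in the earlier steps, and whether the user/group lines are present depends on your base configuration:

```
# excerpt from client1.ovpn; a sketch, not the complete file
user nobody
group nogroup            # on CentOS, change this to: group nobody

# these three lines are uncommented so the distribution's helper
# script can update /etc/resolv.conf with the VPN-pushed DNS servers
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
```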
Now you can connect to the VPN by simply pointing the openvpn command to the client configuration file:
- sudo openvpn --config client1.ovpn
This should connect you to your VPN.
Installation
From the iTunes App Store, search for and install OpenVPN Connect, the official iOS OpenVPN client application. To transfer your iOS client configuration onto the device, connect it directly to a computer.
The process of completing the transfer with iTunes is outlined here. Open iTunes on the computer and click on iPhone > apps. Scroll down to the File Sharing section and click the OpenVPN app. The blank window to the right, OpenVPN Documents, is for sharing files. Drag the .ovpn file to the OpenVPN Documents window.
Now launch the OpenVPN app on the iPhone. You will receive a notification that a new profile is ready to import. Tap the green plus sign to import it.
Connection
OpenVPN is now ready to use with the new profile. Start the connection by sliding the Connect button to the On position. Disconnect by sliding the same button to Off.
Note: The VPN switch under Settings cannot be used to connect to the VPN. If you try, you will receive a notice to only connect using the OpenVPN app.
Installation
Open the Google Play Store. Search for and install Android OpenVPN Connect, the official Android OpenVPN client application.
You can transfer the .ovpn profile by connecting the Android device to your computer via USB and copying the file over. Alternatively, if you have an SD card reader, you can remove the device's SD card, copy the profile onto it, and then insert the card back into the Android device.
Launch the OpenVPN app and tap the menu to import the profile.
Then navigate to the location of the saved profile (the screenshot uses /sdcard/Download/) and select the file. The app will note that the profile was imported.
Connection
To connect, simply tap the Connect button. You'll be asked if you trust the OpenVPN application. Choose OK to initiate the connection. To disconnect from the VPN, go back to the OpenVPN app and choose Disconnect.
Note: This method for testing your VPN connection will only work if you opted to route all of your traffic through the VPN in step 5.
Once everything is installed, a simple check will confirm everything is working properly. Without a VPN connection enabled, open a browser and go to DNSLeakTest.
The site will return the IP address assigned by your internet service provider, and as you appear to the rest of the world. To check your DNS settings through the same website, click Extended Test. This will tell you which DNS servers you are using.
Now connect the OpenVPN client to your server's VPN and refresh the browser. A completely different IP address (that of your VPN server) should now appear, and this is how you appear to the world. Again, DNSLeakTest's Extended Test will check your DNS settings and confirm you are now using the DNS resolvers pushed by your VPN.
Occasionally, you may need to revoke a client certificate to prevent further access to the OpenVPN server.
To do so, on your CA machine, navigate to the EasyRSA directory:
- cd EasyRSA-3.0.4/
Next, run the easyrsa script with the revoke option, followed by the name of the client you wish to revoke:
- ./easyrsa revoke client2
This will ask you to confirm the revocation by entering yes:
OutputPlease confirm you wish to revoke the certificate with the following subject:
subject=
commonName = client2
Type the word 'yes' to continue, or any other input to abort.
Continue with revocation: yes
After confirming the action, the CA will fully revoke the client's certificate. However, your OpenVPN server currently has no way of checking whether any client certificates have been revoked, and the client would still have access to the VPN. To correct this, create a certificate revocation list (CRL) on your CA machine:
- ./easyrsa gen-crl
This will generate a file called crl.pem. Securely transfer this file to your OpenVPN server:
- scp ~/EasyRSA-3.0.4/pki/crl.pem sammy@your_server_ip:/tmp
On your OpenVPN server, copy this file into your /etc/openvpn/ directory:
- sudo cp /tmp/crl.pem /etc/openvpn
Then, open the OpenVPN server configuration file:
- sudo nano /etc/openvpn/server.conf
At the bottom of the file, add the crl-verify option, which instructs the OpenVPN server to check the certificate revocation list we've created each time a connection attempt is made:
crl-verify crl.pem
Save and close the file.
Finally, restart OpenVPN to implement the certificate revocation:
- sudo systemctl restart openvpn@server
The client should no longer be able to successfully connect to the server using the old credential.
To revoke additional clients, follow this process:
1. Revoke the certificate with the ./easyrsa revoke client_name command.
2. Generate a new CRL with ./easyrsa gen-crl.
3. Transfer the new crl.pem file to your OpenVPN server and copy it to the /etc/openvpn directory to overwrite the old list.
You can use this process to revoke any certificates that you've previously issued for your server.
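Since the revoke-and-redistribute cycle is always the same three steps, it lends itself to a small wrapper. The sketch below is not part of the official EasyRSA tooling; it assumes the EasyRSA-3.0.4 layout used above and, for safety, only prints the commands it would run (drop the echo prefixes to execute them for real):

```shell
#!/bin/sh
# Sketch: print the commands needed to revoke a client and refresh the CRL.
# Dry run by design: each step is echoed rather than executed.
revoke_client() {
    name="$1"      # client certificate name, e.g. client2
    server="$2"    # OpenVPN server login, e.g. sammy@your_server_ip
    echo "cd ~/EasyRSA-3.0.4"
    echo "./easyrsa revoke $name"
    echo "./easyrsa gen-crl"
    echo "scp ~/EasyRSA-3.0.4/pki/crl.pem $server:/tmp"
    echo "ssh $server 'sudo cp /tmp/crl.pem /etc/openvpn && sudo systemctl restart openvpn@server'"
}

revoke_client client2 sammy@your_server_ip
```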
You can now safely traverse the internet, protecting your identity, location, and traffic from snoopers and censors. If you no longer need to issue certificates at this point, we recommend powering down your CA machine, or disconnecting it from the internet until you need to add or revoke certificates. This will help prevent attackers from gaining access to your VPN.
To configure more clients, you only need to follow step 4 and steps 9 through 11 for each additional device. To revoke access for clients, follow step 12.
UFW, or Uncomplicated Firewall, is a frontend for iptables that aims to simplify the process of configuring a firewall. While iptables is a solid and flexible tool, it can be difficult for beginners to learn how to use it to properly configure a firewall. If you're looking to get started securing your network and you're not sure which tool to use, UFW may be the right choice for you.
This tutorial will show you how to set up a firewall with UFW on Debian 9.
To complete this tutorial, you will need a Debian 9 server with a non-root sudo user, which you can set up by following steps 1 through 3 of the Initial Server Setup with Debian 9 tutorial.
Debian does not install UFW by default. If you completed the entire initial server setup tutorial, you will have already installed and enabled UFW. If not, install it now using apt:
- sudo apt install ufw
We will set up UFW and enable it in the following steps.
This tutorial is written with IPv4 in mind, but will work for IPv6 as well as long as you enable it. If your Debian server has IPv6 enabled, ensure that UFW is configured to support IPv6 so that it will manage firewall rules for IPv6 in addition to IPv4. To do this, open the UFW configuration with nano or your favorite editor.
- sudo nano /etc/default/ufw
Then make sure the value of IPV6 is yes. It should look like this:
IPV6=yes
Save and close the file. Now, when UFW is enabled, it will be configured to write both IPv4 and IPv6 firewall rules. However, before enabling UFW, we will want to ensure that your firewall is configured to allow you to connect via SSH. Let's start with setting the default policies.
If you're just getting started with your firewall, the first rules to define are your default policies. These rules control how to handle traffic that does not explicitly match any other rules. By default, UFW is set to deny all incoming connections and allow all outgoing connections. This means anyone trying to reach your server would not be able to connect, while any application within the server would be able to reach the outside world.
Let's set your UFW rules back to the defaults so we can be sure that you'll be able to follow along with this tutorial. To set the defaults used by UFW, use these commands:
- sudo ufw default deny incoming
- sudo ufw default allow outgoing
These commands set the defaults to deny incoming and allow outgoing connections. These firewall defaults alone might suffice for a personal computer, but servers typically need to respond to incoming requests from outside users. We'll look into that next.
If we enabled our UFW firewall now, it would deny all incoming connections. This means that we will need to create rules that explicitly allow legitimate incoming connections (SSH or HTTP connections, for example) if we want our server to respond to those types of requests. If you're using a cloud server, you will probably want to allow incoming SSH connections so you can connect to and manage your server.
To configure your server to allow incoming SSH connections, you can use this command:
- sudo ufw allow ssh
This will create firewall rules that allow all connections on port 22, which is the port the SSH daemon listens on by default. UFW knows what port allow ssh means because it's listed as a service in the /etc/services file.
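UFW resolves the service name the same way other network tools do: by looking it up in /etc/services. The sketch below imitates that lookup with awk over a few lines in the file's format (a here-document stands in for the real file, whose exact contents vary by system; on your server you can simply run grep -w '^ssh' /etc/services):

```shell
# Look up a service name the way UFW does, using /etc/services-style lines.
# The here-document mimics the file's "name  port/protocol" format.
lookup_port() {
    awk -v svc="$1" '$1 == svc { print $2 }' <<'EOF'
ssh             22/tcp
http            80/tcp
https           443/tcp
EOF
}

lookup_port ssh     # prints 22/tcp
```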
However, we can actually write the equivalent rule by specifying the port instead of the service name. For example, this command works the same as the one above:
- sudo ufw allow 22
If you configured your SSH daemon to use a different port, you will have to specify the appropriate port. For example, if your SSH server is listening on port 2222, you can use this command to allow connections on that port:
- sudo ufw allow 2222
Now that your firewall is configured to allow incoming SSH connections, we can enable it.
To enable UFW, use this command:
- sudo ufw enable
You will receive a warning that says the command may disrupt existing SSH connections. We already set up a firewall rule that allows SSH connections, so it should be fine to continue. Respond to the prompt with y and hit ENTER.
The firewall is now active. Run the sudo ufw status verbose command to see the rules that are set. The rest of this tutorial covers how to use UFW in more detail, such as allowing or denying different kinds of connections.
At this point, you should allow all of the other connections that your server needs to respond to. The connections you should allow depend on your specific needs. Fortunately, you already know how to write rules that allow connections based on a service name or port; we already did this for SSH on port 22. You can also do this for:
HTTP on port 80: sudo ufw allow http or sudo ufw allow 80
HTTPS on port 443: sudo ufw allow https or sudo ufw allow 443
There are several other ways to allow connections, aside from specifying a port or known service.
You can specify port ranges with UFW. Some applications use multiple ports instead of a single port.
For example, to allow X11 connections, which use ports 6000-6007, use these commands:
- sudo ufw allow 6000:6007/tcp
- sudo ufw allow 6000:6007/udp
When specifying port ranges with UFW, you must specify the protocol (tcp or udp) that the rules should apply to. We haven't mentioned this before because not specifying the protocol automatically allows both, which is fine in most cases.
When working with UFW, you can also specify IP addresses. For example, if you want to allow connections from a specific IP address, such as a work or home IP address of 203.0.113.4, you need to specify from, then the IP address:
- sudo ufw allow from 203.0.113.4
You can also specify a particular port that the IP address is allowed to connect to by adding to any port followed by the port number. For example, if you want to allow 203.0.113.4 to connect to port 22 (SSH), use this command:
- sudo ufw allow from 203.0.113.4 to any port 22
If you want to allow a subnet of IP addresses, you can do so using CIDR notation to specify a netmask. For example, if you want to allow all of the IP addresses ranging from 203.0.113.1 to 203.0.113.254, you could use this command:
- sudo ufw allow from 203.0.113.0/24
Likewise, you may also specify the destination port that the subnet 203.0.113.0/24 is allowed to connect to. Again, we'll use port 22 (SSH) as an example:
- sudo ufw allow from 203.0.113.0/24 to any port 22
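CIDR notation is shorthand for a prefix length: /24 means the first 24 bits identify the network, leaving 8 host bits. The arithmetic is easy to check in the shell:

```shell
# IPv4 addresses covered by a CIDR prefix: 2^(32 - prefix_length).
# A /24 therefore spans 256 addresses (network and broadcast included,
# which is why the usable hosts above run from .1 to .254).
addresses_in_prefix() {
    echo $(( 1 << (32 - $1) ))
}

addresses_in_prefix 24   # → 256
addresses_in_prefix 22   # → 1024
```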
If you want to create a firewall rule that only applies to a specific network interface, you can do so by specifying "allow in on" followed by the name of the network interface.
You may want to look up your network interfaces before continuing. To do so, use this command:
Output Excerpt2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
. . .
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
. . .
The highlighted output indicates the network interface names. They are typically named something like eth0 or enp3s2.
So, if your server has a public network interface called eth0, you could allow HTTP traffic (port 80) to it with this command:
- sudo ufw allow in on eth0 to any port 80
Doing so would allow your server to receive HTTP requests from the public internet.
Or, if you want your MySQL database server (port 3306) to listen for connections on the private network interface eth1, for example, you could use this command:
- sudo ufw allow in on eth1 to any port 3306
This would allow other servers on your private network to connect to your MySQL database.
If you haven't changed the default policy for incoming connections, UFW is configured to deny all incoming connections. Generally, this simplifies the process of creating a secure firewall policy by requiring you to create rules that explicitly allow specific ports and IP addresses through.
However, sometimes you will want to deny specific connections based on the source IP address or subnet, perhaps because you know that your server is being attacked from there. Also, if you want to change your default incoming policy to allow (which is not recommended), you would need to create deny rules for any services or IP addresses that you don't want to allow connections for.
To write deny rules, you can use the commands described above, replacing allow with deny.
For example, to deny HTTP connections, you could use this command:
- sudo ufw deny http
Or, if you want to deny all connections from 203.0.113.4, you could use this command:
- sudo ufw deny from 203.0.113.4
Now let's take a look at how to delete rules.
Knowing how to delete firewall rules is just as important as knowing how to create them. There are two different ways to specify which rules to delete: by rule number or by the actual rule (similar to how the rules were specified when they were created). We'll start with the delete-by-rule-number method because it is easier.
If you're using the rule number to delete firewall rules, the first thing you'll want to do is get a list of your firewall rules. The ufw status command has an option to display numbers next to each rule, as demonstrated here:
- sudo ufw status numbered
Numbered Output:Status: active
To Action From
-- ------ ----
[ 1] 22 ALLOW IN 15.15.15.0/24
[ 2] 80 ALLOW IN Anywhere
If we decide that we want to delete rule 2, the one that allows port 80 (HTTP) connections, we can specify it in a ufw delete command like this:
- sudo ufw delete 2
This would show a confirmation prompt, then delete rule 2, which allows HTTP connections. Note that if you have IPv6 enabled, you would want to delete the corresponding IPv6 rule as well.
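One caveat with numbered deletion: removing a rule renumbers everything after it. If you ever script the deletion of several rules, delete from the highest number down so earlier deletions don't shift the numbers of later ones. A dry-run sketch (echo shows the commands instead of running them, since ufw needs root; for real unattended use you would likely run sudo ufw --force delete to skip the confirmation prompt):

```shell
# Delete several UFW rules by number, highest first, so each deletion
# leaves the remaining rule numbers intact. Dry run: echo, not sudo ufw.
delete_rules() {
    for n in $(printf '%s\n' "$@" | sort -rn); do
        echo "ufw delete $n"
    done
}

delete_rules 2 5 3
# → ufw delete 5
# → ufw delete 3
# → ufw delete 2
```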
The alternative to rule numbers is to specify the actual rule to delete. For example, if you want to remove the allow http rule, you could write it like this:
- sudo ufw delete allow http
You could also specify the rule by allow 80, instead of by service name:
- sudo ufw delete allow 80
This method will delete both IPv4 and IPv6 rules, if they exist.
At any time, you can check the status of UFW with this command:
- sudo ufw status verbose
If UFW is disabled, which it is by default, you'll see something like this:
OutputStatus: inactive
If UFW is active, which it should be if you followed step 3, the output will say that it's active and it will list any rules that are set. For example, if the firewall is set to allow SSH (port 22) connections from anywhere, the output might look something like this:
OutputStatus: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp ALLOW IN Anywhere
Use the status command if you want to check how UFW has configured the firewall.
If you decide you don't want to use UFW, you can disable it with this command:
- sudo ufw disable
Any rules that you created with UFW will no longer be active. You can always run sudo ufw enable if you need to activate it later.
If you already have UFW rules configured but you decide that you want to start over, you can use the reset command:
- sudo ufw reset
This will disable UFW and delete any rules that were previously defined. Keep in mind that the default policies won't change to their original settings if you modified them at any point. This should give you a fresh start with UFW.
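After a reset, you would rebuild your policy from scratch. The steps covered in this tutorial can be sketched as a small baseline script. This is a dry run by design: RUN is set to echo so the commands are printed rather than executed; on a real server you would set RUN=sudo (the --force flag skips ufw enable's interactive prompt):

```shell
# Rebuild a minimal firewall baseline after "ufw reset".
# RUN=echo makes this a dry run; set RUN=sudo to actually apply the rules.
RUN="echo"

baseline() {
    $RUN ufw default deny incoming
    $RUN ufw default allow outgoing
    $RUN ufw allow ssh
    $RUN ufw --force enable
}

baseline
```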
Your firewall is now configured to allow (at least) SSH connections. Be sure to allow any other incoming connections your server needs, while limiting any unnecessary connections, so your server will be functional and secure.
To learn about more common UFW configurations, check out the UFW Essentials: Common Firewall Rules and Commands tutorial.
The Apache HTTP server is the most widely used web server in the world. It provides many powerful features, including dynamically loadable modules, robust media support, and extensive integration with other popular software.
In this guide, we'll explain how to install an Apache web server on your Debian 9 server.
Before you begin this guide, you should have a regular, non-root user with sudo privileges configured on your server. Additionally, you will need to enable a basic firewall to block non-essential ports. You can learn how to configure a regular user account and set up a firewall for your server by following our Initial Server Setup with Debian 9 guide.
When you have an account available, log in as your non-root user to begin.
Apache is available within Debian's default software repositories, making it possible to install it using conventional package management tools.
Let's begin by updating the local package index to reflect the latest upstream changes:
- sudo apt update
Next, install the apache2 package:
- sudo apt install apache2
After confirming the installation, apt will install Apache and all required dependencies.
Before testing Apache, it's necessary to modify the firewall settings to allow outside access to the default web ports. Assuming that you followed the instructions in the prerequisites, you should have a UFW firewall configured to restrict access to your server.
During installation, Apache registers itself with UFW to provide a few application profiles that can be used to enable or disable access to Apache through the firewall.
List the ufw application profiles by typing:
- sudo ufw app list
You will see a list of the application profiles:
OutputAvailable applications:
AIM
Bonjour
CIFS
. . .
WWW
WWW Cache
WWW Full
WWW Secure
. . .
The Apache profiles begin with WWW.
It is recommended that you enable the most restrictive profile that will still allow the traffic you've configured. Since we haven't configured SSL for our server yet in this guide, we only need to allow traffic on port 80:
- sudo ufw allow 'WWW'
You can verify the change by typing:
- sudo ufw status
You should see HTTP traffic allowed in the displayed output:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
WWW ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
WWW (v6) ALLOW Anywhere (v6)
As you can see, the profile has been activated to allow access to the web server.
At the end of the installation process, Debian 9 starts Apache. The web server should already be up and running.
Check with the systemd init system to make sure the service is running by typing:
- sudo systemctl status apache2
Output● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 19:21:48 UTC; 13min ago
Main PID: 12849 (apache2)
CGroup: /system.slice/apache2.service
├─12849 /usr/sbin/apache2 -k start
├─12850 /usr/sbin/apache2 -k start
└─12852 /usr/sbin/apache2 -k start
Sep 05 19:21:48 apache systemd[1]: Starting The Apache HTTP Server...
Sep 05 19:21:48 apache systemd[1]: Started The Apache HTTP Server.
As you can see from this output, the service appears to have started successfully. However, the best way to test this is to request a page from Apache.
You can access the default Apache landing page to confirm that the software is running properly through your IP address. If you do not know your server's IP address, you can get it a few different ways from the command line.
Try typing this at your server's command prompt:
- hostname -I
You will get back a few addresses separated by spaces. You can try each in your web browser to see if they work.
An alternative is using the curl tool, which should give you your public IP address as seen from another location on the internet.
First, install curl using apt:
- sudo apt install curl
Then, use curl to retrieve icanhazip.com using IPv4:
- curl -4 icanhazip.com
When you have your server's IP address, enter it into your browser's address bar:
http://your_server_ip
You should see the default Debian 9 Apache web page:
This page indicates that Apache is working correctly. It also includes some basic information about important Apache files and directory locations.
Now that we have the web server up and running, let's go over some basic management commands.
To stop your web server, type:
- sudo systemctl stop apache2
To start the web server when it is stopped, type:
- sudo systemctl start apache2
To stop and then start the service again, type:
- sudo systemctl restart apache2
If you are simply making configuration changes, Apache can often reload without dropping connections. To do this, use this command:
- sudo systemctl reload apache2
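Before reloading, it's worth validating the configuration, since a reload against a broken config simply leaves the old configuration running. A small guard function sketching this (apache2ctl configtest is the same check used later in this tutorial; the optional argument substitutes the check command, which is handy for trying the function out, and the echo stands in for the privileged systemctl call):

```shell
# Reload Apache only if the configuration parses cleanly.
# Dry run: the echo stands in for "sudo systemctl reload apache2".
safe_reload() {
    checker="${1:-apache2ctl configtest}"   # default: Apache's own syntax check
    if $checker > /dev/null 2>&1; then
        echo "systemctl reload apache2"
    else
        echo "configtest failed; not reloading" >&2
        return 1
    fi
}
```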
By default, Apache is configured to start automatically when the server boots. If this is not what you want, disable this behavior by typing:
- sudo systemctl disable apache2
To re-enable the service to start up at boot, type:
- sudo systemctl enable apache2
Apache should now start automatically when the server boots again.
When using the Apache web server, you can use virtual hosts (similar to server blocks in Nginx) to encapsulate configuration details and host more than one domain from a single server. We will set up a domain called example.com, but you should replace this with your own domain name. To learn more about setting up a domain name with DigitalOcean, see our Introduction to DigitalOcean DNS.
Apache on Debian 9 has one server block enabled by default that is configured to serve documents from the /var/www/html directory. While this works well for a single site, it can become unwieldy if you are hosting multiple sites. Instead of modifying /var/www/html, we will create a directory structure within /var/www for our example.com site, leaving /var/www/html in place as the default directory to be served if a client request doesn't match any other sites.
Create the directory for example.com, using the -p flag to create any necessary parent directories:
- sudo mkdir -p /var/www/example.com/html
Next, assign ownership of the directory with the $USER environment variable:
- sudo chown -R $USER:$USER /var/www/example.com/html
The permissions of your web roots should be correct if you haven't modified your umask value, but you can make sure by typing:
- sudo chmod -R 755 /var/www/example.com
Next, create a sample index.html page using nano or your favorite editor:
- nano /var/www/example.com/html/index.html
Inside, add the following sample HTML:
<html>
<head>
<title>Welcome to Example.com!</title>
</head>
<body>
<h1>Success! The example.com virtual host is working!</h1>
</body>
</html>
Save and close the file when you are finished.
In order for Apache to serve this content, it's necessary to create a virtual host file with the correct directives. Instead of modifying the default configuration file located at /etc/apache2/sites-available/000-default.conf directly, let's make a new one at /etc/apache2/sites-available/example.com.conf:
- sudo nano /etc/apache2/sites-available/example.com.conf
Paste in the following configuration block, which is similar to the default, but updated for our new directory and domain name:
<VirtualHost *:80>
ServerAdmin admin@example.com
ServerName example.com
ServerAlias www.example.com
DocumentRoot /var/www/example.com/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Notice that we've updated the DocumentRoot to our new directory and ServerAdmin to an email that the example.com site administrator can access. We've also added two directives: ServerName, which establishes the base domain that should match for this virtual host definition, and ServerAlias, which defines further names that should match as if they were the base name.
Save and close the file when you are finished.
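The same pattern extends to hosting additional domains: each one gets its own file in /etc/apache2/sites-available/ pointing at its own document root. As a sketch, a hypothetical second site (the test.com name and paths below are illustrative, not part of this tutorial) would look like:

```apacheconf
<VirtualHost *:80>
    ServerAdmin admin@test.com
    ServerName test.com
    ServerAlias www.test.com
    DocumentRoot /var/www/test.com/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```

You would then enable it with sudo a2ensite test.com.conf and reload Apache, just as for example.com below.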
Let's enable the file with the a2ensite tool:
- sudo a2ensite example.com.conf
Disable the default site defined in 000-default.conf:
- sudo a2dissite 000-default.conf
Next, let's test for configuration errors:
- sudo apache2ctl configtest
You should see the following output:
OutputSyntax OK
Restart Apache to implement your changes:
- sudo systemctl restart apache2
Apache should now be serving your domain name. You can test this by navigating to http://example.com, where you should see something like this:
Now that you know how to manage the Apache service itself, you should take a few minutes to familiarize yourself with a few important directories and files.
/var/www/html
: the actual web content, which by default only consists of the default Apache page you saw earlier, is served out of the /var/www/html
directory. This can be changed by altering Apache configuration files.
/etc/apache2
: the Apache configuration directory. All of the Apache configuration files reside here.
/etc/apache2/apache2.conf
: the main Apache configuration file. This can be modified to make changes to the Apache global configuration. This file is responsible for loading many of the other files in the configuration directory.
/etc/apache2/ports.conf
: this file specifies the ports that Apache will listen on. By default, Apache listens on port 80, and additionally listens on port 443 when a module providing SSL capabilities is enabled.
/etc/apache2/sites-available/
: the directory where per-site virtual hosts can be stored. Apache will not use the configuration files found in this directory unless they are linked to the sites-enabled
directory. Typically, all server block configuration is done in this directory and then enabled by linking to the other directory with the a2ensite command.
/etc/apache2/sites-enabled/
: the directory where enabled per-site virtual hosts are stored. Typically, these are created by linking to configuration files found in the sites-available
directory with a2ensite
. Apache reads the configuration files and links found in this directory when it starts or reloads to compile a complete configuration.
/etc/apache2/conf-available/
and /etc/apache2/conf-enabled/
: these directories have the same relationship as the sites-available
and sites-enabled
directories, but are used to store configuration fragments that do not belong in a virtual host. Files in the conf-available
directory can be enabled with the a2enconf
command and disabled with the a2disconf
command.
/etc/apache2/mods-available/
and /etc/apache2/mods-enabled/
: these directories contain the available and enabled modules, respectively. Files ending in .load
contain fragments to load specific modules, while files ending in .conf
contain the configuration for those modules. Modules can be enabled and disabled using the a2enmod
and a2dismod
commands.
/var/log/apache2/access.log
: by default, every request to your web server is recorded in this log file unless Apache is configured to do otherwise.
/var/log/apache2/error.log
: by default, all errors are recorded in this file. The LogLevel
directive in the Apache configuration specifies how much detail the error logs will contain.
Now that you have your web server installed, you have many options for the type of content you can serve and the technologies you can use to create a richer experience.
If you'd like to build out a more complete application stack, you can look at this article on how to configure a LAMP stack on Debian 9.
Node.js is a JavaScript platform for general-purpose programming that allows users to build network applications quickly. By leveraging JavaScript on both the front end and the back end, Node.js makes development more consistent and integrated.
In this guide, we'll show you how to get started with Node.js on a Debian 9 server.
This guide assumes that you are using Debian 9. Before you begin, you should have a non-root user account with sudo privileges set up on your system. You can learn how to do this by following the initial server setup for Debian 9 tutorial.
Debian contains a version of Node.js in its default repositories. At the time of writing, this version was 4.8.2, which will reach its end of life at the end of April 2018. If you just want to experiment with the language using a stable and sufficient option, installing from the repositories may work for you. For development and production use cases, however, it's recommended that you install a more recent version with a PPA. We'll discuss how to install from a PPA in the next step.
To get the distro-stable version of Node.js, you can use the apt
package manager. First, refresh your local package index:
- sudo apt update
Then, install the Node.js package from the repositories:
- sudo apt install nodejs
If the package in the repositories suits your needs, this is all you need to do to get set up with Node.js.
To check which version of Node.js you have installed after these initial steps, type:
- nodejs -v
Because of a conflict with another package, the executable from the Debian repositories is called nodejs
instead of node
. Keep this in mind as you run the software.
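When scripting around this version check, it can help to normalize the output of nodejs -v. The sketch below strips the leading v from a hard-coded example string, so no Node.js installation is needed to run it:

```shell
# Example output of "nodejs -v", hard-coded for illustration.
raw_version="v10.9.0"

# Strip the leading "v" using shell parameter expansion.
version="${raw_version#v}"
echo "$version"
```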
Once you determine which version of Node.js you have installed from the Debian repositories, you can decide whether you would like to work with different versions, package archives, or version managers. Next, we'll go over these elements, along with more flexible and robust methods of installation.
To work with a more recent version of Node.js, you can add the PPA (personal package archive) maintained by NodeSource. It will have more up-to-date versions of Node.js than the official Debian repositories, and it will let you choose between Node.js v4.x (the older long-term support version, supported until the end of April 2018), Node.js v6.x (supported until April 2019), Node.js v8.x (the current LTS version, supported until December 2019), and Node.js v10.x (the latest version, supported until April 2021).
First, let's update the local package index and install curl
, which you will use to access the PPA:
- sudo apt update
- sudo apt install curl
Next, let's install the PPA in order to get access to its contents. From your home directory, use curl
to retrieve the installation script for your preferred version, making sure to replace 10.x
with your preferred version string (if different):
- cd ~
- curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
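Since the only part of the URL that changes between releases is the version string, you can parameterize it. In this sketch, node_branch is an illustrative variable name, not part of the official NodeSource script:

```shell
# Build the NodeSource setup URL from a version string, so switching
# releases only means changing one variable.
node_branch="10.x"
setup_url="https://deb.nodesource.com/setup_${node_branch}"
echo "$setup_url"
```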
You can inspect the contents of this script with nano
or your preferred text editor:
- nano nodesource_setup.sh
Run the script with sudo
:
- sudo bash nodesource_setup.sh
The PPA will be added to your configuration and your local package cache will be updated automatically. After running the setup script, you can install the Node.js package in the same way you did above:
- sudo apt install nodejs
To check which version of Node.js you have installed after these initial steps, type:
- nodejs -v
Outputv10.9.0
The nodejs
package contains the nodejs
binary as well as npm
, so you don't need to install npm
separately.
npm
uses a configuration file in your home directory to keep track of updates. It will be created the first time you run npm
. Execute this command to verify that npm
is installed and to create the configuration file:
- npm -v
Output6.2.0
In order for some npm
packages to work (those that require compiling code from source, for example), you will need to install the build-essential
package:
- sudo apt install build-essential
You now have the tools you need to work with npm
packages that require compiling code from source.
An alternative to installing Node.js through apt
is to use a tool called nvm
, which stands for "Node.js Version Manager". Rather than working at the operating system level, nvm
works at the level of an independent directory within your home directory. This means that you can install multiple self-contained versions of Node.js without affecting the entire system.
Controlling your environment with nvm
lets you access the newest versions of Node.js while also retaining and managing previous releases. It is a different utility from apt
, however, and the versions of Node.js that you manage with it are distinct from those you manage with apt
.
To download the nvm
installation script from the project's GitHub page, you can use curl
. Note that the version number may differ from what is highlighted here:
- curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh -o install_nvm.sh
Inspect the installation script with nano
:
- nano install_nvm.sh
Run the script with bash
:
- bash install_nvm.sh
It will install the software into a subdirectory of your home directory at ~/.nvm
. It will also add the lines necessary to your ~/.profile
file in order to use it.
To gain access to the nvm
functionality, you'll need to log out and log back in again, or source the ~/.profile
file so that your current session knows about the changes:
- source ~/.profile
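For reference, the lines the nvm installer typically appends to ~/.profile look roughly like the sketch below; the exact contents may vary between nvm versions:

```shell
# NVM_DIR points at the per-user nvm installation directory.
export NVM_DIR="$HOME/.nvm"

# Load nvm into the current shell if the script is present.
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

echo "$NVM_DIR"  # confirm the variable is set
```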
With nvm
installed, you can install isolated Node.js versions. For information about the versions of Node.js that are available, type:
- nvm ls-remote
Output...
v8.11.1 (Latest LTS: Carbon)
v9.0.0
v9.1.0
v9.2.0
v9.2.1
v9.3.0
v9.4.0
v9.5.0
v9.6.0
v9.6.1
v9.7.0
v9.7.1
v9.8.0
v9.9.0
v9.10.0
v9.10.1
v9.11.0
v9.11.1
v10.0.0
v10.1.0
v10.2.0
v10.2.1
v10.3.0
v10.4.0
v10.4.1
v10.5.0
v10.6.0
v10.7.0
v10.8.0
v10.9.0
As you can see, the current LTS version at the time of this writing is v8.11.1. You can install it by typing:
- nvm install 8.11.1
Usually, nvm
will switch to use the most recently installed version. You can tell nvm
to use the version you just downloaded by typing:
- nvm use 8.11.1
When you install Node.js using nvm
, the executable is called node
. You can see the version currently being used by the shell by typing:
- node -v
Outputv8.11.1
If you have multiple Node.js versions, you can see which ones are installed by typing:
- nvm ls
If you would like to set one of the versions as the default, type:
- nvm alias default 8.11.1
This version will be automatically selected when a new session spawns. You can also reference it by the alias like this:
- nvm use default
Each version of Node.js keeps track of its own packages and has npm
available to manage them.
You can also have npm
install packages to the Node.js project's /node_modules
directory. Use the following syntax to install the express
module:
- npm install express
If you'd like to install the module globally, making it available to other programs using the same version of Node.js, you can add the -g
flag:
- npm install -g express
This will install the package in:
~/.nvm/versions/node/node_version/lib/node_modules/express
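You can compose that location in a script. In this sketch, node_version is an illustrative variable; substitute whichever version is active under nvm:

```shell
# Path where nvm keeps globally installed modules for a given Node.js version.
node_version="v8.11.1"
module_path="$HOME/.nvm/versions/node/${node_version}/lib/node_modules/express"
echo "$module_path"
```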
Installing the module globally lets you run commands from the command line, but you'll have to link the package into your local sphere to require it from within a program:
- npm link express
You can learn more about the options available to you with nvm by typing:
- nvm help
You can uninstall Node.js using apt
or nvm
, depending on the version you want to target. To remove versions installed from the repositories or from the PPA, you will need to work with the apt
utility at the system level.
To remove either of these versions, type:
- sudo apt remove nodejs
This command will remove the package and the configuration files.
To uninstall a version of Node.js that you have enabled using nvm
, first determine whether or not the version you would like to remove is the current active version:
- nvm current
If the version you are targeting is not the current active version, you can run:
- nvm uninstall node_version
This command will uninstall the selected version of Node.js.
If the version you would like to remove is the current active version, you must first deactivate nvm
to enable your changes:
- nvm deactivate
You can now uninstall the current version using the uninstall
command above, which will remove all files associated with the targeted version of Node.js except the cached files that can be used for reinstallation.
There are a number of ways to get up and running with Node.js on your Debian 9 server. Your circumstances will dictate which of the above methods is best for you. While using the packaged version in Debian's repository is an option for experimentation, installing from a PPA and working with npm
or nvm
offers additional flexibility.
Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They're similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.
For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.
In this tutorial, you'll install and use Docker Community Edition (CE) on Debian 9. You'll install Docker itself, work with containers and images, and push an image to a Docker repository.
To complete this tutorial, you will need the following:
The Docker installation package available in the official Debian repository may not be the latest version. To ensure we get the latest version, we'll install Docker from the official Docker repository. To do that, we'll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.
First, update your existing list of packages:
- sudo apt update
Next, install a few prerequisite packages which let apt
use packages over HTTPS:
- sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common
Then add the GPG key for the official Docker repository to your system:
- curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
Add the Docker repository to APT sources:
- sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
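To see what the $(lsb_release -cs) substitution produces, the sketch below builds the same repository line with the codename hard-coded to stretch (Debian 9's codename); on a live system, the command substitution fills it in for you:

```shell
# Debian 9's codename, normally obtained with "$(lsb_release -cs)".
codename="stretch"
repo_line="deb [arch=amd64] https://download.docker.com/linux/debian ${codename} stable"
echo "$repo_line"
```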
Next, update the package database with the Docker packages from the newly added repo:
- sudo apt update
Make sure you are about to install from the Docker repo instead of the default Debian repo:
- apt-cache policy docker-ce
You'll see output like this, although the version number for Docker may be different:
docker-ce:
Installed: (none)
Candidate: 18.06.1~ce~3-0~debian
Version table:
18.06.1~ce~3-0~debian 500
500 https://download.docker.com/linux/debian stretch/stable amd64 Packages
Notice that docker-ce
is not installed, but the candidate for installation is from the Docker repository for Debian 9 (stretch
).
Finally, install Docker:
- sudo apt install docker-ce
Docker is now installed, the daemon has started, and the process is enabled to start on boot. Check that it's running:
- sudo systemctl status docker
The output should be similar to the following, showing that the service is active and running:
Output● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago
Docs: https://docs.docker.com
Main PID: 21319 (dockerd)
CGroup: /system.slice/docker.service
├─21319 /usr/bin/dockerd -H fd://
└─21326 docker-containerd --config /var/run/docker/containerd/containerd.toml
Installing Docker now gives you not just the Docker service (daemon) but also the docker
command line utility, or the Docker client. We'll explore how to use the docker
command later in this tutorial.
By default, the docker
command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker
command without prefixing it with sudo
or without being in the docker group, you'll get output like this:
Outputdocker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.
If you want to avoid typing sudo
whenever you run the docker
command, add your username to the docker
group:
- sudo usermod -aG docker ${USER}
To apply the new group membership, log out of the server and back in, or type the following:
- su - ${USER}
You will be prompted to enter your user's password to continue.
Confirm that your user is now added to the docker group by typing:
- id -nG
Outputsammy sudo docker
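In a provisioning script, you may want to test for membership before calling usermod again. A minimal sketch, using the standard docker group name:

```shell
# Record whether the current user already belongs to the "docker" group.
if id -nG | grep -qw docker; then
  membership="in"
else
  membership="not in"
fi
echo "current user is ${membership} the docker group"
```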
If you need to add a user to the docker
group that you're not logged in as, declare that username explicitly using:
- sudo usermod -aG docker username
The rest of this article assumes you are running the docker
command as a user in the docker group. If you choose not to, please prepend the commands with sudo
.
Let's explore the docker
command next.
Using docker
consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:
- docker [option] [command] [arguments]
To view all available subcommands, type:
- docker
As of Docker 18, the complete list of available subcommands includes:
Output
attach Attach local standard input, output, and error streams to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save one or more images to a tar archive (streamed to STDOUT by default)
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
version Show the Docker version information
wait Block until one or more containers stop, then print their exit codes
To view the options available to a specific command, type:
- docker docker-subcommand --help
To view system-wide information about Docker, use:
- docker info
Let's explore some of these commands. We'll start by working with images.
Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you'll need will have images hosted there.
To check whether you can access and download images from Docker Hub, type:
- docker run hello-world
The output will indicate that Docker is working correctly:
OutputUnable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9db2ca6ccae0: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
Docker was initially unable to find the hello-world
image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image and ran the application within the container, displaying the message.
You can search for images available on Docker Hub by using the docker
command with the search
subcommand. For example, to search for the Ubuntu image, type:
- docker search ubuntu
The script will crawl Docker Hub and return a listing of all images whose name matches the search string. In this case, the output will be similar to this:
OutputNAME DESCRIPTION STARS OFFICIAL AUTOMATED
ubuntu Ubuntu is a Debian-based Linux operating sys… 8320 [OK]
dorowu/ubuntu-desktop-lxde-vnc Ubuntu with openssh-server and NoVNC 214 [OK]
rastasheep/ubuntu-sshd Dockerized SSH service, built on top of offi… 170 [OK]
consol/ubuntu-xfce-vnc Ubuntu container with "headless" VNC session… 128 [OK]
ansible/ubuntu14.04-ansible Ubuntu 14.04 LTS with ansible 95 [OK]
ubuntu-upstart Upstart is an event-based replacement for th… 88 [OK]
neurodebian NeuroDebian provides neuroscience research s… 53 [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5 ubuntu-16-nginx-php-phpmyadmin-mysql-5 43 [OK]
ubuntu-debootstrap debootstrap --variant=minbase --components=m… 39 [OK]
nuagebec/ubuntu Simple always updated Ubuntu docker images w… 23 [OK]
tutum/ubuntu Simple Ubuntu docker images with SSH access 18
i386/ubuntu Ubuntu is a Debian-based Linux operating sys… 13
1and1internet/ubuntu-16-apache-php-7.0 ubuntu-16-apache-php-7.0 12 [OK]
ppc64le/ubuntu Ubuntu is a Debian-based Linux operating sys… 12
eclipse/ubuntu_jdk8 Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, … 6 [OK]
darksheer/ubuntu Base Ubuntu Image -- Updated hourly 4 [OK]
codenvy/ubuntu_jdk8 Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, … 4 [OK]
1and1internet/ubuntu-16-nginx-php-5.6-wordpress-4 ubuntu-16-nginx-php-5.6-wordpress-4 3 [OK]
pivotaldata/ubuntu A quick freshening-up of the base Ubuntu doc… 2
1and1internet/ubuntu-16-sshd ubuntu-16-sshd 1 [OK]
ossobv/ubuntu Custom ubuntu image from scratch (based on o… 0
smartentry/ubuntu ubuntu with smartentry 0 [OK]
1and1internet/ubuntu-16-healthcheck ubuntu-16-healthcheck 0 [OK]
pivotaldata/ubuntu-gpdb-dev Ubuntu images for GPDB development 0
paasmule/bosh-tools-ubuntu Ubuntu based bosh-cli 0 [OK]
...
In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you've identified the image that you would like to use, you can download it to your computer using the pull
subcommand.
Execute the following command to download the official ubuntu
image to your computer:
- docker pull ubuntu
You'll see the following output:
OutputUsing default tag: latest
latest: Pulling from library/ubuntu
6b98dfc16071: Pull complete
4001a1209541: Pull complete
6319fc68c576: Pull complete
b24603670dc3: Pull complete
97f170c87c6f: Pull complete
Digest: sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d
Status: Downloaded newer image for ubuntu:latest
After an image has been downloaded, you can then run a container using the downloaded image with the run
subcommand. As you saw with the hello-world
example, if an image has not been downloaded when docker
is executed with the run
subcommand, the Docker client will first download the image, then run a container using it.
To see the images that have been downloaded to your computer, type:
- docker images
The output should look similar to the following:
OutputREPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 16508e5c265d 13 days ago 84.1MB
hello-world latest 2cb0d9787c4d 7 weeks ago 1.85kB
As you'll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.
Let's look at how to run containers in more detail.
The hello-world
container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.
As an example, let's run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:
- docker run -it ubuntu
Your command prompt should change to reflect the fact that you're now working inside the container, and should take this form:
Outputroot@d9b100f2f636:/#
Note the container ID in the command prompt. In this example, it is d9b100f2f636
. You'll need that container ID later to identify the container when you want to remove it.
Now you can run any command inside the container. For example, let's update the package database inside the container. You don't need to prefix any command with sudo
, because you're operating inside the container as the root user:
- apt update
Then install any application in it. Let's install Node.js:
- apt install nodejs
This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:
- node -v
You'll see the version number displayed in your terminal:
Outputv8.10.0
Any changes you make inside the container only apply to that container.
To exit the container, type exit
at the prompt.
Next, let's look at managing the containers on our system.
After using Docker for a while, you'll have many active (running) and inactive containers on your computer. To view the active ones, use:
- docker ps
You will see output similar to the following:
OutputCONTAINER ID IMAGE COMMAND CREATED
In this tutorial, you started two containers: one from the hello-world
image and another from the ubuntu
image. Both containers are no longer running, but they still exist on your system.
To view all containers, active and inactive, run docker ps
with the -a
switch:
- docker ps -a
You'll see output similar to this:
d9b100f2f636 ubuntu "/bin/bash" About an hour ago Exited (0) 8 minutes ago sharp_volhard
01c950718166 hello-world "/hello" About an hour ago Exited (0) About an hour ago festive_williams
To view the latest container you created, pass it the -l
switch:
- docker ps -l
- CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
- d9b100f2f636 ubuntu "/bin/bash" About an hour ago Exited (0) 10 minutes ago sharp_volhard
To start a stopped container, use docker start
, followed by the container ID or the container's name. Let's start the Ubuntu-based container with the ID of d9b100f2f636
:
:
- docker start d9b100f2f636
The container will start, and you can use docker ps
to see its status:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9b100f2f636 ubuntu "/bin/bash" About an hour ago Up 8 seconds sharp_volhard
To stop a running container, use docker stop
, followed by the container ID or name. This time, we'll use the name that Docker assigned the container, which is sharp_volhard
:
- docker stop sharp_volhard
Once you've decided you no longer need a container, remove it with the docker rm
command, again using either the container ID or the name. Use the docker ps -a
command to find the container ID or name for the container associated with the hello-world
image, and remove it.
- docker rm festive_williams
You can start a new container and give it a name using the --name
switch. You can also use the --rm
switch to create a container that removes itself when it's stopped. See the docker run help
command for more information on these options and others.
Containers can be turned into images which you can use to build new containers. Let's look at how that works.
When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm
command, the changes will be lost for good.
This section shows you how to save the state of a container as a new Docker image.
After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it. You might want to reuse this Node.js container as the basis for new images later, however.
Then commit the changes to a new Docker image instance using the following command:
- docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name
The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id
is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository
is usually your Docker Hub username.
For example, for the user sammy, with the container ID of d9b100f2f636
, the command would be:
- docker commit -m "added Node.js" -a "sammy" d9b100f2f636 sammy/ubuntu-nodejs
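Broken into its parts, the invocation looks like the sketch below. The variable names are illustrative, and the assembled command is only printed, so no Docker daemon is contacted:

```shell
# The pieces of the "docker commit" invocation from the example above.
commit_msg="added Node.js"
author="sammy"
container_id="d9b100f2f636"
target_image="sammy/ubuntu-nodejs"

# Print the fully assembled command instead of executing it.
echo docker commit -m "$commit_msg" -a "$author" "$container_id" "$target_image"
```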
When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you'll learn how to push an image to a Docker registry like Docker Hub so others can access it.
Listing the Docker images again will show the new image, as well as the old one that it was derived from:
- docker images
You'll see output like this:
OutputREPOSITORY TAG IMAGE ID CREATED SIZE
sammy/ubuntu-nodejs latest 7c1f35226ca6 7 seconds ago 179MB
ubuntu latest 113a43faa138 4 weeks ago 81.2MB
hello-world latest e38bc07ac18e 2 months ago 1.85kB
In this example, ubuntu-nodejs
is the new image, which was derived from the existing ubuntu
image from Docker Hub. The size difference reflects the changes that were made, and in this example, the change was that NodeJS was installed. So next time you need to run a container using Ubuntu with NodeJS pre-installed, you can just use the new image.
You can also build images from a Dockerfile
, which lets you automate the installation of software in a new image. However, that's outside the scope of this tutorial.
Now let's share the new image with others so they can create containers from it.
The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.
This section shows you how to push a Docker image to Docker Hub. To learn how to create your own private Docker registry, check out How To Set Up a Private Docker Registry on Ubuntu 14.04.
To push your image, first log into Docker Hub.
- docker login -u docker-registry-username
You'll be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.
Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:
- docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs
Then you may push your own image using:
- docker push docker-registry-username/docker-image-name
To push the ubuntu-nodejs image to the sammy repository, the command would be:
- docker push sammy/ubuntu-nodejs
The process may take some time to complete as it uploads the images, but once it finishes, the output will look like this:
OutputThe push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Pushed
5f70bf18a086: Pushed
a3b5c80a4eba: Pushed
7f18b442972b: Pushed
3ce512daaf78: Pushed
7aae4540b42d: Pushed
...
After pushing an image to a registry, it should be listed on your account's dashboard, as shown in the image below:
If a push attempt results in an error of this sort, then you likely did not log in:
OutputThe push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required
Log in with docker login and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.
You can now use docker pull sammy/ubuntu-nodejs to pull the image to a new machine and use it to run a new container.
In this tutorial you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub. Now that you know the basics, explore the other Docker tutorials in the DigitalOcean Community.
Node.js is a JavaScript platform for general-purpose programming that allows users to build network applications quickly. By leveraging JavaScript on both the front end and the back end, Node.js makes development more consistent and integrated.
In this tutorial, we'll show you how to get started with Node.js on a Debian 9 server.
This tutorial assumes that you are using Debian 9. Before you begin, you should have a non-root user account with sudo privileges set up on your system. You can learn how to do this by following the initial server setup tutorial for Debian 9.
Debian contains a version of Node.js in its default repositories. At the time of writing, this version is 4.8.2, which will reach end of life at the end of April 2018. If you just want to experiment with the language using a stable, sufficient version, installing from the repository may make sense. For development and production use, however, we recommend installing a more recent version from a PPA. We'll cover how to install from a PPA in the next step.
To get this distribution-stable version of Node.js, you can use the apt package manager. First, refresh your local package index:
- sudo apt update
Then install the Node.js package from the repositories:
- sudo apt install nodejs
If the package in the repositories suits your needs, this is all you need to do to get set up with Node.js.
To check which version of Node.js you have installed after this initial step, type:
- nodejs -v
Because of a conflict with another package, the executable from the Debian repositories is called nodejs rather than node. Keep this in mind as you run software.
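For tools that expect a node binary, a common (optional) workaround is a symlink. Here is a sketch demonstrated against a stand-in script so the snippet is self-contained; on a real Debian 9 system the equivalent command would be sudo ln -s /usr/bin/nodejs /usr/local/bin/node:

```shell
#!/bin/sh
# Stand-in for /usr/bin/nodejs so this sketch is self-contained.
mkdir -p /tmp/fakebin
printf '#!/bin/sh\necho v4.8.2\n' > /tmp/fakebin/nodejs
chmod +x /tmp/fakebin/nodejs

# The workaround itself: expose the same binary under the name `node`.
ln -sf /tmp/fakebin/nodejs /tmp/fakebin/node

/tmp/fakebin/node -v   # prints v4.8.2
```

Note that version managers such as nvm (covered later) already provide a node executable, so the symlink is only needed for the repository package.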
Once you know which version of Node.js you have installed from the Debian repositories, you can decide whether or not you would like to work with different versions, package archives, or version managers. Next, we'll discuss these elements, along with more flexible and robust methods of installation.
To work with a more recent version of Node.js, you can add the PPA (personal package archive) maintained by NodeSource. It contains more current versions of Node.js than the official Debian repositories, and lets you choose between Node.js v4.x (the older long-term support release, supported until the end of April 2018), Node.js v6.x (supported until April 2019), Node.js v8.x (the current LTS version, supported until December 2019), and Node.js v10.x (the latest release, supported until April 2021).
First, update your local package index and install curl, which you will use to access the PPA:
- sudo apt update
- sudo apt install curl
Next, install the PPA in order to get access to its contents. From your home directory, use curl to retrieve the installation script for your preferred version, making sure to replace 10.x with your preferred version string (if different):
- cd ~
- curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
You can inspect the contents of this script with nano or your preferred text editor:
- nano nodesource_setup.sh
Run the script with sudo:
- sudo bash nodesource_setup.sh
The PPA will be added to your configuration and your local package cache will be updated automatically. After running the setup script, you can install the Node.js package in the same way as above:
- sudo apt install nodejs
To check which version of Node.js you have installed after these steps, type:
- nodejs -v
Outputv10.9.0
The nodejs package contains the nodejs binary as well as npm, so you don't need to install npm separately.
npm uses a configuration file in your home directory to keep track of updates. It will be created the first time you run npm. Execute this command to verify that npm is installed and to create the configuration file:
- npm -v
Output6.2.0
In order for some npm packages to work (those that require compiling code from source, for example), you will need to install the build-essential package:
- sudo apt install build-essential
You now have the necessary tools to work with npm packages that require compiling code from source.
An alternative to installing Node.js through apt is to use a tool called nvm, which stands for "Node.js Version Manager". Rather than working at the operating system level, nvm works at the level of an independent directory within your home directory. This means you can install multiple self-contained versions of Node.js without affecting the entire system.
Controlling your environment with nvm allows you to access the newest versions of Node.js while retaining and managing previous releases. It is a different utility from apt, however, and the versions of Node.js that you manage with it are distinct from those managed with apt.
To download the nvm installation script from the project's GitHub page, you can use curl. Note that the version number may differ from what is highlighted here:
- curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh -o install_nvm.sh
Inspect the installation script with nano:
- nano install_nvm.sh
Run the script with bash:
- bash install_nvm.sh
This installs the software into a subdirectory of your home directory at ~/.nvm. It will also add the lines necessary to use it to your ~/.profile file.
To gain access to the nvm functionality, you'll need to log out and log back in, or source the ~/.profile file so that your current session knows about the changes:
- source ~/.profile
With nvm installed, you can install isolated Node.js versions. For information about the versions of Node.js that are available, type:
- nvm ls-remote
Output...
v8.11.1 (Latest LTS: Carbon)
v9.0.0
v9.1.0
v9.2.0
v9.2.1
v9.3.0
v9.4.0
v9.5.0
v9.6.0
v9.6.1
v9.7.0
v9.7.1
v9.8.0
v9.9.0
v9.10.0
v9.10.1
v9.11.0
v9.11.1
v10.0.0
v10.1.0
v10.2.0
v10.2.1
v10.3.0
v10.4.0
v10.4.1
v10.5.0
v10.6.0
v10.7.0
v10.8.0
v10.9.0
As you can see, the current LTS version at the time of writing is v8.11.1. You can install it by typing:
- nvm install 8.11.1
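As an aside, if you ever want to script around version lists like the one nvm ls-remote prints, GNU sort -V orders version strings numerically, so the last line of a sorted list is the newest release. A sketch with a hard-coded sample list:

```shell
#!/bin/sh
# Version strings sort numerically with GNU `sort -V` (v8 < v9 < v10, not
# lexicographic), so the last line of a sorted list is the newest release.
printf 'v9.11.1\nv10.9.0\nv8.11.1\nv10.2.0\n' | sort -V | tail -n 1   # prints v10.9.0
```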
Usually, nvm will switch to use the most recently installed version. You can tell nvm to use the version you just downloaded by typing:
- nvm use 8.11.1
When you install Node.js using nvm, the executable is called node. You can see the version currently being used by the shell by typing:
- node -v
Outputv8.11.1
If you have multiple Node.js versions, you can see which ones are installed by typing:
- nvm ls
If you wish to default to one of the versions, type:
- nvm alias default 8.11.1
This version will be selected automatically when a new session spawns. You can also reference it by its alias, like this:
- nvm use default
Each version of Node.js will keep track of its own packages and has npm available to manage these.
You can also have npm install packages to the Node.js project's ./node_modules directory. Use the following syntax to install the express module:
- npm install express
If you'd like to install the module globally, making it available to other projects using the same version of Node.js, you can add the -g flag:
- npm install -g express
This will install the package in:
~/.nvm/versions/node/node_version/lib/node_modules/express
Installing the module globally will let you run commands from the command line, but you'll have to link the package into your local sphere to require it from within a program:
- npm link express
You can learn more about the options available to you with nvm by typing:
- nvm help
You can uninstall Node.js using apt or nvm, depending on how the version you want to remove was installed. To remove versions installed from the repository or from the PPA, you will need to work with the apt utility at the system level.
To remove either of these versions, type:
- sudo apt remove nodejs
This command will remove the package but retain the configuration files.
To uninstall a version of Node.js that you have enabled using nvm, first determine whether or not the version you would like to remove is the current active version:
- nvm current
If the version you are targeting is not the current active version, you can run:
- nvm uninstall node_version
This command will uninstall the selected version of Node.js.
If the version you would like to remove is the current active version, you must first deactivate nvm to enable your changes:
- nvm deactivate
You can now uninstall the current version using the uninstall command above, which will remove all files associated with the targeted version of Node.js except the cached files that can be reused for reinstallation.
There are quite a few ways to get up and running with Node.js on your Debian 9 server, and your circumstances will dictate which of the above methods is best for you. While using the packaged version in the Debian repository is an option for experimentation, installing from a PPA and working with npm or nvm offers additional flexibility.
Private networks generally provide internet access to the hosts using NAT (network address translation), sharing a single public IP address with all hosts inside the private network. In NAT systems, the hosts inside the private network are not visible from outside the network. To expose services running on these hosts to the public internet, you would usually create NAT rules in the gateway, commonly called port forwarding rules. In several situations, though, you wouldn’t have access to the gateway to configure these rules. For situations such as this, tunneling solutions like PageKite come in handy.
PageKite is a fast and secure tunneling solution that can expose a service inside a private network to the public internet without the need for port forwarding. To do this, it relies on an external server, called the front-end server, to which the server behind NAT and the clients connect to allow communication between them. By default, PageKite uses its own commercial pagekite.net service, but as it is a completely open-source project, it allows you to set up a private frontend on a publicly accessible host, such as a DigitalOcean Droplet. With this setup, you can create a vendor-independent solution for remote access to hosts behind NAT. By configuring the remote hosts with the PageKite client to connect to the frontend and exposing the SSH port, it is possible to access them via the command line interface shell using SSH. It’s also possible to access a graphical user interface using a desktop sharing system such as VNC or RDP running over an SSH connection.
In this tutorial, you will install and set up a PageKite front-end service on a server running Debian 9. You will also set up two more Debian 9 servers to simulate a local and a remote environment. When you’re finished, you will have set up a server for multiple clients, and tested it with a practical solution for remote access using SSH and VNC.
Before following this guide you’ll need the following:
- A Debian 9 server to act as the PageKite front end. We will refer to this server as front-end-server and to its public IP address as Front_End_Public_IP.
- Two more Debian 9 hosts, referred to as remote-host and local-host, with their public IP addresses referred to as Remote_Host_Public_IP and Local_Host_Public_IP respectively. This tutorial will use two standard DigitalOcean Droplets with 1GB of memory to represent them. Alternatively, two local or virtual machines could be used to represent these hosts.
- A registered domain name. This tutorial will use your_domain as an example throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.
- A DNS A record with pagekite.your_domain pointing to the IP address of the front-end-server.
- A DNS setup in which every subdomain of pagekite.your_domain also points to our front-end-server. This can be set up using wildcard DNS entries: create an A record for the wildcard DNS entry *.pagekite.your_domain pointing to the same IP address, Front_End_Public_IP. This will be used to distinguish the clients that connect to our server by domain name (client-1.pagekite.your_domain and client-2.pagekite.your_domain, for example) and to tunnel the requests appropriately.
In this tutorial, we are going to use three DigitalOcean Droplets to play the roles of front-end-server, local-host, and remote-host. To do this, we will first set up local-host and remote-host with access to the graphical environment, and make remote-host mimic the behavior of a host under NAT, so that PageKite can be used as a solution to access its services. Besides that, we also need to configure the front-end-server Droplet's firewall rules to allow it to work with PageKite and to mediate the connection between local-host and remote-host.
As we are going to work with multiple servers, we’re going to use different colors in the command listings to identify which server we are using, as follows:
- # Commands and outputs in the front-end-server Droplet
- # Commands and outputs in the remote-host Droplet
- # Commands and outputs in the local-host Droplet
- # Commands and outputs in both the remote-host and local-host Droplets
Let's first go through the steps for both the remote-host and local-host Droplets to install the dependencies and set up access to the graphical environment using VNC. After that, we will cover the firewall configuration on each of the three Droplets to allow the front-end-server to run PageKite and to mimic a connection using NAT on remote-host.
We will need access to the graphical interface on both the local-host and remote-host machines to run through this demonstration. On local-host, we will use a VNC session to access its graphical interface and test our setup using the browser. On remote-host, we will set up a VNC session that we will access from local-host.
To set up VNC, we first need to install some dependencies on local-host and remote-host. Before installing any packages, we need to update the package list of the repositories by running the following on both servers:
- sudo apt-get update
Next, we install the VNC server and a graphical user environment, which is needed to start a VNC session. We will use the Tight VNC server and the Xfce desktop environment, which can be installed by running:
- sudo apt-get install xfce4 xfce4-goodies tightvncserver
In the middle of the graphical environment installation, we'll be asked about the keyboard layout we wish to use. For a QWERTY US keyboard, select English (US).
In addition to these, on local-host we're going to need a VNC viewer and an internet browser to be able to connect to remote-host. This tutorial will install the Firefox web browser and xtightvncviewer. To install them, run:
- sudo apt-get install firefox-esr xtightvncviewer
When a graphical environment is installed, the system initializes in graphical mode by default. By using the DigitalOcean console, it is possible to visualize the graphical login manager, but it is not possible to log in or to use the command line interface. In our setup, we are mimicking the network behavior as if we were using NAT. To do this, we will need to use the DigitalOcean console, since we won’t be able to connect using SSH. Therefore, we need to disable the graphical user interface from automatically starting on boot. This can be done by disabling the login manager on both servers:
- sudo systemctl disable lightdm.service
After disabling the login manager, we can restart the Droplets and test if we can log in using the DigitalOcean console. To do that, run the following:
- sudo shutdown -r now
Next, access the DigitalOcean console by navigating to the Droplet page in the DigitalOcean Control Panel, selecting your local-host Droplet, and clicking on the word Console in the top right corner, near the switch to turn the Droplet on and off:
Once you press enter in the console, you will be prompted for your username and password. Enter these credentials to bring up the command line prompt:
Once you have done this for local-host, repeat it for remote-host.
With the console up for both Droplets, we can now set up the VNC.
Here, we will put together a basic VNC setup. If you would like a more in-depth guide on how to set this up, check out our How to Install and Configure VNC on Debian 9 tutorial.
To start a VNC session, run the following on both the local-host and remote-host Droplets:
- vncserver
On the first run, the system will create the configuration files and ask for the main password. Input your desired password, then verify it. The VNC server will also ask for a view-only password, used for viewing another user's VNC session. As we won't need a view-only VNC session, type n for this prompt.
The output will look similar to this:
Outputsammy@remote-host:/home/sammy$ vncserver
You will require a password to access your desktops.
Password:
Verify:
Would you like to enter a view-only password (y/n)? n
xauth: file /home/sammy/.Xauthority does not exist
New 'X' desktop is remote-host:1
Creating default startup script /home/sammy/.vnc/xstartup
Starting applications specified in /home/sammy/.vnc/xstartup
Log file is /home/sammy/.vnc/remote-host:1.log
The :1 after the host name represents the number of the VNC session. By default, session number 1 runs on port 5901, session number 2 on port 5902, and so on. Following the previous output, we can access remote-host by using a VNC client to connect to Remote_Host_Public_IP on port 5901.
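The display-to-port mapping can be expressed as a one-line rule: VNC display :N listens on TCP port 5900 + N. A quick sketch (the helper name vnc_port is illustrative only):

```shell
#!/bin/sh
# A VNC display :N listens on TCP port 5900 + N, so :1 is 5901, :2 is 5902.
vnc_port() {
  echo $((5900 + $1))
}

vnc_port 1   # prints 5901
vnc_port 2   # prints 5902
```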
One problem with the previous configuration is that it is not persistent, meaning the VNC session won't be started by default when the Droplet is restarted. To make it persistent, we can create a systemd service and enable it. To do that, we will create the vncserver@.service file under /etc/systemd/system, which can be done using nano:
- sudo nano /etc/systemd/system/vncserver@.service
Place the following contents in the file, replacing sammy with your username:
[Unit]
Description=Start TightVNC server at startup
After=syslog.target network.target
[Service]
Type=forking
User=sammy
PAMName=login
PIDFile=/home/sammy/.vnc/%H:%i.pid
ExecStartPre=-/usr/bin/vncserver -kill :%i > /dev/null 2>&1
ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :%i
ExecStop=/usr/bin/vncserver -kill :%i
[Install]
WantedBy=multi-user.target
This file creates a vncserver systemd unit, which can be configured as a system service using the systemctl tool. In this case, when the service is started, it kills the VNC session if one is already running (the ExecStartPre line) and starts a new session using the resolution set to 1280x800 (the ExecStart line). When the service is stopped, it kills the VNC session (the ExecStop line).
Save the file and quit nano. Next, we'll make the system aware of the new unit file by running:
- sudo systemctl daemon-reload
Then, enable the service to be automatically started when the server is initialized by running:
- sudo systemctl enable vncserver@1.service
When we use the enable command with systemctl, symlinks are created so that the service is started automatically when the system is initialized, as reported by the output of the previous command:
OutputCreated symlink /etc/systemd/system/multi-user.target.wants/vncserver@1.service → /etc/systemd/system/vncserver@.service.
With the VNC server properly configured, we may restart the Droplets to test if the service is automatically started:
- sudo shutdown -r now
After the system initializes, log in using SSH and check if VNC is running with:
- sudo systemctl status vncserver@1.service
The output will indicate the service is running:
● vncserver@1.service - Start TightVNC server at startup
Loaded: loaded (/etc/systemd/system/vncserver@.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-08-29 19:21:12 UTC; 1h 22min ago
Process: 848 ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :1 (code=exited, status=0/SUCCESS)
Process: 760 ExecStartPre=/usr/bin/vncserver -kill :1 > /dev/null 2>&1 (code=exited, status=2)
Main PID: 874 (Xtightvnc)
Tasks: 0 (limit: 4915)
CGroup: /system.slice/system-vncserver.slice/vncserver@1.service
‣ 874 Xtightvnc :1 -desktop X -auth /home/sammy/.Xauthority -geometry 1280x800 -depth 24 -rfbwait
Aug 29 19:21:10 remote-host systemd[1]: Starting Start TightVNC server at startup...
Aug 29 19:21:10 remote-host systemd[760]: pam_unix(login:session): session opened for user sammy by (uid=0)
Aug 29 19:21:11 remote-host systemd[848]: pam_unix(login:session): session opened for user sammy by (uid=0)
Aug 29 19:21:12 remote-host systemd[1]: Started Start TightVNC server at startup.
This finishes the VNC configuration. Remember to follow the previous steps on both remote-host and local-host. Now let's cover the firewall configuration for each host.
Starting with remote-host, we will configure the firewall to deny external connections to the Droplet's services in order to mimic the behavior of a host behind NAT. In this tutorial, we are going to use port 8000 for HTTP connections, 22 for SSH, and 5901 for VNC, so we will configure the firewall to deny external connections to these ports.
Following the initial server setup for Debian 9, remote-host will have a firewall rule allowing connections to SSH. We can review this rule by running:
- sudo ufw status verbose
The output will be the following:
OutputStatus: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN Anywhere
22/tcp (OpenSSH (v6)) ALLOW IN Anywhere (v6)
Remove these SSH rules to mimic the behavior behind NAT.
Warning: Closing port 22 means you will no longer be able to use SSH to log in to your server remotely. For Droplets, this is not a problem, because you can access the server's console via the DigitalOcean Control Panel, as we did at the end of the Installing Dependencies section of this step. However, if you are not using a Droplet, be careful: closing off port 22 could lock you out of your server if you have no other means of accessing it.
To deny SSH access, use ufw and run:
- sudo ufw delete allow OpenSSH
We can verify the SSH rules were removed by checking the status of the firewall again:
- sudo ufw status verbose
The output will show no firewall rules, as in the following:
OutputStatus: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
Although the firewall is configured, the new configuration is not running until we enable it with:
- sudo ufw enable
After enabling it, note that we won't be able to access remote-host via SSH anymore, as mentioned in the output of the command:
OutputCommand may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
Log out of remote-host, then test the configuration by trying to establish an SSH or a VNC connection. It will not be possible. From now on, we can access remote-host exclusively through the DigitalOcean console.
On local-host, we will leave the SSH ports open. We only need one firewall rule to allow access to the VNC session:
- sudo ufw allow 5901
After modifying the firewall rules, enable it by running:
- sudo ufw enable
Now we can test the VNC connection, using the prerequisite VNC client on your local machine to connect to local-host on port 5901 with the VNC password you set up.
To do this, open up your VNC client and connect to Local_Host_Public_IP:5901. Once you enter the password, you will connect to the VNC session.
Note: If you have trouble connecting to the VNC session, restart the VNC service on local-host with sudo systemctl restart vncserver@1 and try to connect again.
On its first start, Xfce will ask about the initial setup of the environment:
For this tutorial, select the Use default config option.
Finally, we need to allow connections to port 80 on the front-end-server, which will be used by PageKite. Open up a terminal on front-end-server and use the following command:
- sudo ufw allow 80
Additionally, allow traffic on port 443 for HTTPS:
- sudo ufw allow 443
To enable the new firewall configuration, run the following:
- sudo ufw enable
Now that we’ve set up the Droplets, let’s configure the PageKite front-end server.
Although it is possible to run PageKite using a Python script to set up the front-end server, it is more reliable to run it using a system service. To do so, we will need to install PageKite on the server.
The recommended way to install a service on a Debian server is to use a distribution package. This way, it is possible to obtain automated updates and configure the service to start up on boot.
First, we will configure the repository to install PageKite. To do that, update the package list of the repositories:
- sudo apt-get update
Once the update is done, install the dirmngr package, which is necessary to support the key-ring import from the PageKite repository to ensure a secure installation:
- sudo apt-get install dirmngr
Next, add the repository to the /etc/apt/sources.list file by running:
- echo deb http://pagekite.net/pk/deb/ pagekite main | sudo tee -a /etc/apt/sources.list
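The pipe-into-tee pattern used here is worth a quick look: tee -a appends its stdin to a file, and combined with sudo it can write files that the invoking shell user cannot. A self-contained sketch against a temporary file:

```shell
#!/bin/sh
# `tee -a` appends its stdin to a file (and also echoes it to stdout);
# with sudo, the file write runs with elevated privileges even though the
# `echo` does not. Demonstrated against a temporary file for safety.
f=$(mktemp)
echo 'deb http://pagekite.net/pk/deb/ pagekite main' | tee -a "$f"
cat "$f"   # the repository line is now stored in the file
```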
After setting up the repository, import the PageKite packaging key into our trusted set of keys, so that we can install packages from this repository. Packaging key management is done with the apt-key utility. In this case, we have to import the key AED248B1C7B2CAC3 from the key server keys.gnupg.net, which can be done by running:
- sudo apt-key adv --recv-keys --keyserver keys.gnupg.net AED248B1C7B2CAC3
Next, update the package lists of the repositories again, so that the pagekite package gets indexed:
- sudo apt-get update
Finally, install it with:
- sudo apt-get install pagekite
Now that we have PageKite installed, let’s set up the front-end server and configure the service to run on boot.
The PageKite package we have just installed can be used to configure a connection to a PageKite front-end server. It can also be used to set up a front-end service to receive PageKite connections, which is what we want to do here. In order to do so, we have to edit PageKite’s configuration files.
PageKite stores its configuration files in the directory /etc/pagekite.d. The first change we have to make is to disable all lines in the /etc/pagekite.d/10_account.rc file, since that file is only used when PageKite is set up as a client connecting to a front-end server. We can edit the file using nano:
- sudo nano /etc/pagekite.d/10_account.rc
To disable the lines, add a # at the beginning of each active line in the file:
#################################[ This file is placed in the Public Domain. ]#
# Replace the following with your account details.
# kitename = NAME.pagekite.me
# kitesecret = YOURSECRET
# Delete this line!
# abort_not_configured
After making the changes, save them and quit nano. Next, edit the file /etc/pagekite.d/20_frontends.rc:
- sudo nano /etc/pagekite.d/20_frontends.rc
Add the following highlighted lines to the file and comment out the defaults line, making sure to replace your_domain with the domain name you are using and examplepassword with a password of your choice:
#################################[ This file is placed in the Public Domain. ]#
# Front-end selection
#
# Front-ends accept incoming requests on your behalf and forward them to
# your PageKite, which in turn forwards them to the actual server. You
# probably need at least one, the service defaults will choose one for you.
# Use the pagekite.net service defaults.
# defaults
# If you want to use your own, use something like:
# frontend = hostname:port
# or:
# frontends = COUNT:dnsname:port
isfrontend
ports=80,443
protos=http,https,raw
domain=http,https,raw:*.pagekite.your_domain:examplepassword
rawports=virtual
Let's explain these lines one by one. First, to configure PageKite as a front-end server, we added the line isfrontend. To configure the ports on which the server will listen, we added ports=80,443. We also configured the protocols PageKite is going to proxy: to use HTTP, HTTPS, and RAW (which is used by SSH connections), we added the line protos=http,https,raw. We also disabled the defaults settings so that there are no conflicting configurations for the server.
Besides that, we configured the domain we are going to use for the front-end-server. For each client, a subdomain of this domain will be used, which is why we needed the DNS configurations described in the Prerequisites section. We also set up a password that will be used to authenticate the clients. Using the placeholder password examplepassword, these configurations were done by adding the line domain=http,https,raw:*.pagekite.your_domain:examplepassword. Finally, we added an extra line needed to connect using SSH (a setting that is not documented, as discussed here): rawports=virtual.
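To make the wildcard-domain routing concrete: the front end tells clients apart by the left-most DNS label of the requested host name. A shell sketch of that extraction (illustration only; PageKite performs this matching internally, and client_of is a hypothetical helper name):

```shell
#!/bin/sh
# Extract the left-most DNS label, which identifies the client kite under
# the wildcard *.pagekite.your_domain.
client_of() {
  echo "${1%%.*}"
}

client_of client-1.pagekite.your_domain      # prints client-1
client_of remote-host.pagekite.your_domain   # prints remote-host
```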
Save the file and quit nano. Restart the PageKite service by running:
- sudo systemctl restart pagekite.service
Then enable it to start on boot with:
- sudo systemctl enable pagekite.service
Now that we have the front-end-server running, let's test it by exposing an HTTP port on remote-host and connecting to it from local-host.
To test the front-end-server, let's start an HTTP service on remote-host and expose it to the internet using PageKite, so that we can connect to it from local-host. Remember, we have to connect to remote-host using the DigitalOcean console, since we have configured the firewall to deny incoming SSH connections.
To start up an HTTP server for testing, we can use the Python 3 http.server module. Since Python is already installed even on the minimal Debian installation, and http.server is part of the standard Python library, to start the HTTP server on port 8000 on remote-host we'll run:
- python3 -m http.server 8000 &
As Debian 9 still uses Python 2 by default, it is necessary to invoke python3 to start the server. The trailing &amp; character tells the shell to run the command in the background, so that we can keep using the terminal. The output will indicate that the server is running:
Outputsammy@remote-host:~$ python3 -m http.server 8000 &
[1] 1782
sammy@remote-host:~$ Serving HTTP on 0.0.0.0 port 8000 ...
Note: The number 1782
that appears in this output refers to the ID that was assigned to the process started with this command and may be different depending on the run. Since it is running in the background, we can use this ID to terminate (kill) the process by issuing kill -9 1782
.
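The pattern above can be captured in a small shell snippet: the special parameter $! expands to the PID of the most recent background command, so we can save it and stop the server later without copying the process ID from the output by hand.

```shell
# Start the test HTTP server in the background.
python3 -m http.server 8000 &

# $! holds the PID of the last background command; save it for later.
SERVER_PID=$!

# ...use the server...

# Stop the server once we are done with it.
kill "$SERVER_PID"
```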
With the HTTP server running, we may establish the PageKite tunnel. A quick way to do this is by using the pagekite.py
script. We can download it to remote-host
running:
- wget https://pagekite.net/pk/pagekite.py
After downloading it, mark it as executable by running:
- chmod a+x pagekite.py
Note: Since PageKite is written in Python 2 and this is the current default version of Python in Debian 9, the preceding command works without errors. However, since the default Python is being progressively migrated to Python 3 in several Linux distributions, it may be necessary to alter the first line of the pagekite.py
script to set it to run with Python 2 (setting it to #!/usr/bin/python2
).
With pagekite.py
available in the current directory, we can connect to front-end-server
and expose the HTTP server on the domain remote-host.pagekite.your_domain
by running the following, substituting your_domain
and examplepassword
with your own credentials:
- ./pagekite.py --clean --frontend=pagekite.your_domain:80 --service_on=http:remote-host.pagekite.your_domain:localhost:8000:examplepassword
Let’s take a look at the arguments in this command:
- --clean is used to ignore the default configuration.
- --frontend=pagekite.your_domain:80 specifies the address of our frontend. Note we are using port 80, since we have set the front end to run on this port in Step 3.
- --service_on=http:remote-host.pagekite.your_domain:localhost:8000:examplepassword sets up the service we are going to expose (http), the domain we are going to use (remote-host.pagekite.your_domain), the local address and port where the service is running (localhost:8000, since we are exposing a service on the same host we are using to connect to PageKite), and the password to connect to the frontend (examplepassword).

Once this command is run, we will see the message Kites are flying and all is well
displayed in the console. After that, we may open a browser window in the local-host
VNC session and use it to access the HTTP server on remote-host
by accessing the address http://remote-host.pagekite.your_domain
. This will display the file system for remote-host
:
To stop PageKite’s connection on remote-host
, hit CTRL+C
in the remote-host
console.
Now that we have tested front-end-server
, let’s configure remote-host
to make the connection with PageKite persistent and to start on boot.
The connection between the remote-host
and the front-end-server
we set up in Step 4 is not persistent, which means that the connection will not be re-established when the server is restarted. This will be a problem if you would like to use this solution long-term, so let’s make this setup persistent.
It is possible to set up PageKite to run as a service on remote-host
, so that it is started on boot. To do this, we can use the same distribution packages we used for the front-end-server
in Step 3. In the remote-host
console accessed through the DigitalOcean control panel, run the following command to install dirmngr
:
- sudo apt-get install dirmngr
Then to add the PageKite repository and import the GPG key, run:
- echo deb http://pagekite.net/pk/deb/ pagekite main | sudo tee -a /etc/apt/sources.list
- sudo apt-key adv --recv-keys --keyserver keys.gnupg.net AED248B1C7B2CAC3
To update the package list and install PageKite, run:
- sudo apt-get update
- sudo apt-get install pagekite
To set up PageKite as a client, we will configure the front-end-server
address and port in the file /etc/pagekite.d/20_frontends.rc
. We can edit it using nano
:
- sudo nano /etc/pagekite.d/20_frontends.rc
In this file, comment the line with defaults
to avoid using pagekite.net
service defaults. Also, configure the front-end-server
address and port by using the parameter frontend
, adding the line frontend = pagekite.your_domain:80
to the end of the file. Be sure to replace your_domain
with the domain you are using.
Here is the full file with the edited lines highlighted:
#################################[ This file is placed in the Public Domain. ]#
# Front-end selection
#
# Front-ends accept incoming requests on your behalf and forward them to
# your PageKite, which in turn forwards them to the actual server. You
# probably need at least one, the service defaults will choose one for you.
# Use the pagekite.net service defaults.
# defaults
# If you want to use your own, use something like:
frontend = pagekite.your_domain:80
# or:
# frontends = COUNT:dnsname:port
After saving the modifications and quitting nano
, continue the configuration by editing the file /etc/pagekite.d/10_account.rc
and setting the credentials to connect to front-end-server
. First, open up the file by running:
- sudo nano /etc/pagekite.d/10_account.rc
In this file we will set the domain name we are going to use and the password to connect to our front-end-server, by editing the parameters kitename and kitesecret respectively. We also have to comment out the last line of the file to enable the configuration, as highlighted next:
#################################[ This file is placed in the Public Domain. ]#
# Replace the following with your account details.
kitename = remote-host.pagekite.your_domain
kitesecret = examplepassword
# Delete this line!
# abort_not_configured
Save and quit from the text editor.
We will now configure our services that will be exposed to the internet. For HTTP and SSH services, PageKite includes sample configuration files with extensions ending in .sample
in its configuration directory /etc/pagekite.d
. Let’s start by copying the sample configuration file into a valid one for HTTP:
- cd /etc/pagekite.d
- sudo cp 80_httpd.rc.sample 80_httpd.rc
The HTTP configuration file is almost set up. We only have to adjust the HTTP port, which we can do by editing the file we just copied:
- sudo nano /etc/pagekite.d/80_httpd.rc
The parameter service_on
defines the address and port of the service we wish to expose. By default, it exposes localhost:80
. As our HTTP server will be running on port 8000
, we just have to change the port number, as highlighted next:
#################################[ This file is placed in the Public Domain. ]#
# Expose the local HTTPD
service_on = http:@kitename : localhost:8000 : @kitesecret
# If you have TLS/SSL configured locally, uncomment this to enable end-to-end
# TLS encryption instead of relying on the wild-card certificate at the relay.
#service_on = https:@kitename : localhost:443 : @kitesecret
#
# Uncomment the following to globally DISABLE the request firewall. Do this
# if you are sure you know what you are doing, for more details please see
# <http://pagekite.net/support/security/>
#
#insecure
#
# To disable the firewall for one kite at a time, use lines like this::
#
#service_cfg = KITENAME.pagekite.me/80 : insecure : True
Note: The service_on
parameter syntax is similar to the one used with the pagekite.py
script. However, the domain name we are going to use and the password are obtained from the /etc/pagekite.d/10_account.rc
file and inserted by the markers @kitename
and @kitesecret
respectively.
After saving the modifications to this configuration file, we have to restart the service so that the changes take effect:
- sudo systemctl restart pagekite.service
To start the service on boot, enable the service with:
- sudo systemctl enable pagekite.service
Just as we have done before, use the http.server
Python module to emulate our HTTP server. It should already be running, since we started it in the background in Step 4. However, if for some reason it is not running, we may start it again with:
- python3 -m http.server 8000 &
Now that we have the HTTP server and the PageKite service running, open a browser window in the local-host
VNC session and use it to access remote-host
by using the address http://remote-host.pagekite.your_domain
. This will display the file system of remote-host
in the browser.
We have seen how to configure a PageKite front-end server and a client to expose a local HTTP server. Next, we’ll set up remote-host
to expose SSH and allow remote connections.
Besides HTTP, PageKite can be used to proxy other services, such as SSH, which is useful for accessing hosts remotely behind NAT in environments where it is not possible to modify the network or router configuration.
In this section, we are going to configure remote-host
to expose its SSH service using PageKite, then open an SSH session from local-host
.
Just like we have done to configure HTTP with PageKite, for SSH we will copy the sample configuration file into a valid one to expose the SSH service on remote-host
:
- cd /etc/pagekite.d
- sudo cp 80_sshd.rc.sample 80_sshd.rc
This file is pre-configured to expose the SSH service running on port 22
, which is the default configuration. Let’s take a look at its contents:
- nano 80_sshd.rc
This will show you the file:
#################################[ This file is placed in the Public Domain. ]#
# Expose the local SSH daemon
service_on = raw/22:@kitename : localhost:22 : @kitesecret
This file is very similar to the one used to expose HTTP. The only differences are the port number, which is 22
for SSH, and the protocol, which must be set to raw
when exposing SSH.
Since we do not need to make any changes here, exit from the file.
Restart the PageKite service:
- sudo systemctl restart pagekite.service
Note: We could also expose SSH using the pagekite.py
script if the PageKite service wasn't installed. We would just have to use the --service_on
argument, setting the protocol to raw
with the proper domain name and password. For example, to expose it using the same parameters we have configured in the PageKite service, we would use the command ./pagekite.py --clean --frontend=pagekite.your_domain:80 --service_on=raw:remote-host.pagekite.your_domain:localhost:22:examplepassword
.
On local-host
, we will use the SSH client to connect to remote-host
. PageKite tunnels the connections using HTTP, so to use SSH over PageKite we will need an HTTP proxy. There are several HTTP proxies we could use from the Debian repositories, such as Netcat (nc
) and corkscrew
. For this tutorial, we will use corkscrew
, since it requires fewer arguments than nc
.
To install corkscrew
on local-host
, use apt-get install
with the package of the same name:
- sudo apt-get install corkscrew
Next, generate an SSH key on local-host
and append the public key to the .ssh/authorized_keys
file of remote-host
. To do this, follow the How to Set Up SSH Keys on Debian 9 guide, including the Copying Public Key Manually section in Step 2.
To connect to an SSH server using a proxy, we will use ssh
with the -o
argument to pass in ProxyCommand
and specify corkscrew
as the HTTP proxy. This way, on local-host
, we will run the following command to connect to remote-host
through the PageKite tunnel:
- ssh sammy@remote-host.pagekite.your_domain -i ~/id_rsa -o "ProxyCommand corkscrew %h 80 %h %p"
Notice we provided some arguments to corkscrew
. The %h
and %p
are tokens that the SSH client replaces by the remote host name (remote-host.pagekite.your_domain
) and remote port (22
, implicitly used by ssh
) when it runs corkscrew
. The 80
refers to the port on which PageKite is running; this port is used for the communication between the PageKite client and the front-end server.
Once you run this command on local-host
, the command line prompt for remote-host
will appear.
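To avoid retyping the full ProxyCommand on every connection, the same settings can be stored in the SSH client configuration. As a sketch (the host alias remote-host-pagekite is an arbitrary name chosen for this example, not something PageKite defines), an entry in ~/.ssh/config on local-host could look like this:

```
Host remote-host-pagekite
    HostName remote-host.pagekite.your_domain
    User sammy
    IdentityFile ~/id_rsa
    ProxyCommand corkscrew %h 80 %h %p
```

With this entry in place, running ssh remote-host-pagekite is equivalent to the longer command above.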
With our SSH connection working via PageKite, let's next set up a VNC session on remote-host
and access it from local-host
using VNC over SSH.
Now we can access a remote host using a shell, which solves a lot of the problems that arise from servers hidden behind NAT. However, in some situations, we require access to the graphical user interface. SSH provides a way of tunneling any service in its connection, such as VNC, which can be used for graphical remote access.
With remote-host
configured to expose SSH using our front-end server, let’s use an SSH connection to tunnel VNC and have access to the remote-host
graphical interface.
Since we have already configured a VNC session to start automatically on remote-host
, we will use local-host
to connect to remote-host
using ssh
with the -L
argument:
- ssh sammy@remote-host.pagekite.your_domain -i ~/id_rsa -o "ProxyCommand corkscrew %h 80 %h %p" -L5902:localhost:5901
The -L
argument specifies that connections to a given local port should be forwarded to a remote host and port. Together with this argument, we provided a port number followed by a colon, then an IP address, domain, or host name, followed by another colon and a port number. Let’s take a look at this information in detail:
- 5902 is the port we want to use on the local host (local-host) to receive the tunneled connection from the remote host. In this case, from the point of view of local-host, the VNC session from remote-host will be available locally on port 5902. We could not use port 5901, since it is already being used on local-host for its own VNC session.
- localhost is used because remote-host is serving the SSH connection and the VNC session is also served by this same host, so we can refer to it as localhost.
- 5901 is the remote port, since VNC is running on this port on the remote-host.

After the connection is established, we will be presented with a remote shell on remote-host.
Now we can reach the remote-host
VNC session from local-host
by connecting to port 5902
itself. To do so, open a shell from the local-host
GUI in your VNC client, then run:
- vncviewer localhost:5902
Upon providing the remote-host
VNC password, we will be able to access its graphical environment.
Note: If the VNC session has been running for too long, you may encounter an error in which the GUI on remote-host
is replaced by a gray screen with an X
for a cursor. If this happens, try restarting the VNC session on remote-host
with sudo systemctl restart vncserver@1
. Once the service is running, try connecting again.
This setup can be useful for support teams using remote access. It is possible to use SSH to tunnel any service that can be reached by remote-host
. This way, we could set up remote-host
as a gateway to a local attached network with many hosts, including some running Windows or another OS. As long as the hosts have a VNC server with a VNC session set up, it would be possible to access them with a graphical user interface through SSH tunneled by our PageKite front-end-server
.
In the final step, we will configure the PageKite frontend to support more clients with different passwords.
Suppose we are going to use our front-end-server
to offer remote access to many clients. In this multi-user setup, it would be a best practice to isolate them, using a different domain name and password for each one to connect to our server. One way of doing this is by running several PageKite services on our server on different ports, each one configured with its own subdomain and password, but this can be difficult to keep organized.
Fortunately, the PageKite frontend supports the configuration of multiple clients itself, so that we can use the same service on a single port. To do this, we would configure the front end with the domain names and passwords.
As we have configured the wildcard DNS entry *.pagekite.your_domain to point to our front-end-server, DNS entries in subdomains like remote-host.client-1.pagekite.your_domain will also point to our server, so we can use domains ending in client-1.pagekite.your_domain and client-2.pagekite.your_domain to identify hosts of different clients with different passwords.
To do this on the front-end-server
, open the /etc/pagekite.d/20_frontends.rc
file:
- sudo nano /etc/pagekite.d/20_frontends.rc
Add the domains using the domain
keyword and set different passwords for each one. To set up the domains we’ve mentioned, add:
#################################[ This file is placed in the Public Domain. ]#
# Front-end selection
#
# Front-ends accept incoming requests on your behalf and forward them to
# your PageKite, which in turn forwards them to the actual server. You
# probably need at least one, the service defaults will choose one for you.
# Use the pagekite.net service defaults.
# defaults
# If you want to use your own, use something like:
# frontend = hostname:port
# or:
# frontends = COUNT:dnsname:port
isfrontend
ports=80,443
protos=http,https,raw
domain=http,https,raw:*.pagekite.your_domain:examplepassword
domain=http,https,raw:*.client-1.pagekite.your_domain:examplepassword2
domain=http,https,raw:*.client-2.pagekite.your_domain:examplepassword3
rawports=virtual
Save and exit the file.
After modifying the configuration files, restart PageKite:
- sudo systemctl restart pagekite.service
On the remote hosts, let’s configure the PageKite client to connect according to the new domains and passwords. For example, in remote-host
, to connect using client-1.pagekite.your_domain
, modify the file /etc/pagekite.d/10_account.rc
, where the credentials to connect to front-end-server
are stored:
- sudo nano /etc/pagekite.d/10_account.rc
Change kitename
and kitesecret
to the appropriate credentials. For the domain remote-host.client-1.pagekite.your_domain
, the configuration would be:
#################################[ This file is placed in the Public Domain. ]#
# Replace the following with your account details.
kitename = remote-host.client-1.pagekite.your_domain
kitesecret = examplepassword2
# Delete this line!
Save and exit the file.
After modifying the file, restart the PageKite service:
- sudo systemctl restart pagekite.service
Now, on local-host
, we can connect to remote-host
via SSH with:
- ssh sammy@remote-host.client-1.pagekite.your_domain -i ~/id_rsa -o "ProxyCommand corkscrew %h 80 %h %p"
We could use the domain client-2.pagekite.your_domain for another client. This way, we could administer the services in an isolated way, with the possibility of changing the password of one client or even disabling one of them without affecting the other.
In this article, we set up a private PageKite front-end server on a Debian 9 Droplet and used it to expose HTTP and SSH services on a remote host behind NAT. We then connected to these services from a local-host
server and verified the PageKite functionality. As we have mentioned, this could be an effective setup for remote access applications, since we can tunnel other services in the SSH connection, such as VNC.
If you’d like to learn more about PageKite, check out the PageKite Support Info. If you would like to dive deeper into networking with Droplets, take a look through DigitalOcean’s Networking Documentation.
Clustering adds high availability to your database by distributing changes to different servers. In the event that one of the instances fails, others are quickly available to continue serving.
Clusters come in two general configurations, active-passive and active-active. In active-passive clusters, all writes are done on a single active server and then copied to one or more passive servers that are poised to take over only in the event of an active server failure. Some active-passive clusters also allow SELECT
operations on passive nodes. In an active-active cluster, every node is read-write and a change made to one is replicated to all.
MariaDB is an open source relational database system that is fully compatible with the popular MySQL RDBMS system. You can read the official documentation for MariaDB at this page. Galera is a database clustering solution that enables you to set up multi-master clusters using synchronous replication. Galera automatically handles keeping the data on different nodes in sync while allowing you to send read and write queries to any of the nodes in the cluster. You can learn more about Galera at the official documentation page.
In this guide, you will configure an active-active MariaDB Galera cluster. For demonstration purposes, you will configure and test three Debian 9 Droplets that will act as nodes in the cluster. This is the smallest configurable cluster.
To follow along, you will need a DigitalOcean account, in addition to the following:
- Three Debian 9 Droplets with private networking enabled, each with a non-root user with sudo privileges.
- To set up a non-root user with sudo privileges on each Droplet, follow our Initial Server Setup with Debian 9 tutorial.

While the steps in this tutorial have been written for and tested against DigitalOcean Droplets, many of them should also be applicable to non-DigitalOcean servers with private networking enabled.
In this step, you will add the relevant MariaDB package repositories to each of your three servers so that you will be able to install the right version of MariaDB used in this tutorial. Once the repositories are updated on all three servers, you will be ready to install MariaDB.
One thing to note about MariaDB is that it originated as a drop-in replacement for MySQL, so in many configuration files and startup scripts, you’ll see mysql
rather than mariadb
. For consistency’s sake, we will use mysql
in this guide where either could work.
In this tutorial, you will use MariaDB version 10.4. Since this version isn’t included in the default Debian repositories, you’ll start by adding the external Debian repository maintained by the MariaDB project to all three of your servers.
To add the repository, you will first need to install the dirmngr
and software-properties-common
packages. dirmngr
is a server for managing repository certificates and keys. software-properties-common
is a package that allows easy addition and updates of source repository locations. Install the two packages by running:
- sudo apt install dirmngr software-properties-common
Note: MariaDB is a well-respected provider, but not all external repositories are reliable. Be sure to install only from trusted sources.
You’ll add the MariaDB repository key with the apt-key
command, which the APT package manager will use to verify that the package is authentic:
- sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
Once you have the trusted key in the database, you can add the repository with the following command:
- sudo add-apt-repository 'deb [arch=amd64] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.4/debian stretch main'
After adding the repository, run apt update
in order to include package manifests from the new repository:
- sudo apt update
Once you have completed this step on your first server, repeat for your second and third servers.
Now that you have successfully added the package repository on all three of your servers, you’re ready to install MariaDB in the next section.
In this step, you will install the actual MariaDB packages on your three servers.
Beginning with version 10.1
, the MariaDB Server and MariaDB Galera Server packages are combined, so installing mariadb-server
will automatically install Galera and several dependencies:
- sudo apt install mariadb-server
You will be asked to confirm whether you would like to proceed with the installation. Enter yes
to continue with the installation.
From MariaDB version 10.4
onwards, the root MariaDB user does not have a password by default. To set a password for the root user, start by logging into MariaDB:
- sudo mysql -uroot
Once you’re inside the MariaDB shell, change the password by executing the following statement:
- set password = password("your_password");
You will see the following output indicating that the password was set correctly:
OutputQuery OK, 0 rows affected (0.001 sec)
Exit the MariaDB shell by running the following command:
- quit;
If you would like to learn more about SQL or need a quick refresher, check out our MySQL tutorial.
You now have all of the pieces necessary to begin configuring the cluster, but since you’ll be relying on rsync
in later steps, make sure it’s installed:
- sudo apt install rsync
This will confirm that the newest version of rsync
is already available or prompt you to upgrade or install it.
Once you have installed MariaDB and set the root password on your first server, repeat these steps for your other two servers.
Now that you have installed MariaDB successfully on each of the three servers, you can proceed to the configuration step in the next section.
In this step you will configure your first node. Each node in the cluster needs to have a nearly identical configuration. Because of this, you will do all of the configuration on your first machine, and then copy it to the other nodes.
By default, MariaDB is configured to check the /etc/mysql/conf.d
directory to get additional configuration settings from files ending in .cnf
. Create a file in this directory with all of your cluster-specific directives:
- sudo nano /etc/mysql/conf.d/galera.cnf
Add the following configuration into the file. The configuration specifies different cluster options, details about the current server and the other servers in the cluster, and replication-related settings. Note that the IP addresses in the configuration are the private addresses of your respective servers; replace the highlighted lines with the appropriate IP addresses.
[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://First_Node_IP,Second_Node_IP,Third_Node_IP"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
# Galera Node Configuration
wsrep_node_address="This_Node_IP"
wsrep_node_name="This_Node_Name"
- mysqld must not be bound to the IP address for localhost. You can learn about the settings in more detail on the Galera Cluster system configuration page.
- You can change wsrep_cluster_name to something more meaningful than test_cluster or leave it as-is, but you must update wsrep_cluster_address with the private IP addresses of your three servers.
- The synchronization method is set to rsync, because it's commonly available and does what you'll need for now.
- wsrep_node_address must match the address of the machine you're on, but you can choose any name you want in order to help you identify the node in log files.

When you are satisfied with your cluster configuration file, copy the contents into your clipboard, save and close the file. With the nano text editor, you can do this by pressing CTRL+X
, typing y
, and pressing ENTER
.
Now that you have configured your first node successfully, you can move on to configuring the remaining nodes in the next section.
In this step, you will configure the remaining two nodes. On your second node, open the configuration file:
- sudo nano /etc/mysql/conf.d/galera.cnf
Paste in the configuration you copied from the first node, then update the Galera Node Configuration
to use the IP address or resolvable domain name for the specific node you’re setting up. Finally, update its name, which you can set to whatever helps you identify the node in your log files:
. . .
# Galera Node Configuration
wsrep_node_address="This_Node_IP"
wsrep_node_name="This_Node_Name"
. . .
Save and exit the file.
Once you have completed these steps, repeat them on the third node.
You’re almost ready to bring up the cluster, but before you do, make sure that the appropriate ports are open in your firewall.
In this step, you will configure your firewall so that the ports required for inter-node communication are open. On every server, check the status of the firewall by running:
- sudo ufw status
In this case, only SSH is allowed through:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Since only SSH traffic is permitted in this case, you’ll need to add rules for MySQL and Galera traffic. If you tried to start the cluster, it would fail because of firewall rules.
Galera can make use of four ports:
- 3306 for MySQL client connections and State Snapshot Transfers that use the mysqldump method.
- 4567 for Galera Cluster replication traffic. Multicast replication uses both UDP transport and TCP on this port.
- 4568 for Incremental State Transfers.
- 4444 for all other State Snapshot Transfers.

In this example, you'll open all four ports while you do your setup. Once you've confirmed that replication is working, you'd want to close any ports you're not actually using and restrict traffic to just the servers in the cluster.
Open the ports with the following command:
- sudo ufw allow 3306,4567,4568,4444/tcp
- sudo ufw allow 4567/udp
Note: Depending on what else is running on your servers you might want to restrict access right away. The UFW Essentials: Common Firewall Rules and Commands guide can help with this.
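Once replication has been verified, those rules can be tightened so that only the other cluster members may reach these ports. As a sketch, assuming 10.0.0.2 is a placeholder for a peer node's private IP address, the rules on one node could look like this (repeat the pair of commands for each peer):

```shell
# Allow cluster traffic only from a peer node's private IP.
sudo ufw allow from 10.0.0.2 to any port 3306,4567,4568,4444 proto tcp
sudo ufw allow from 10.0.0.2 to any port 4567 proto udp
```

You would then delete the broader rules created during setup, for example with sudo ufw delete allow 3306,4567,4568,4444/tcp.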
After you have configured your firewall on the first node, create the same firewall settings on the second and third node.
Now that you have configured the firewalls successfully, you’re ready to start the cluster in the next step.
In this step, you will start your MariaDB cluster. To begin, you need to stop the running MariaDB service so that you can bring your cluster online.
Use the following command on all three servers to stop MariaDB so that you can bring them back up in a cluster:
- sudo systemctl stop mysql
systemctl
doesn’t display the outcome of all service management commands, so to be sure you succeeded, use the following command:
- sudo systemctl status mysql
If the last line looks something like the following, the command was successful:
Output. . .
Apr 26 03:34:23 galera-node-01 systemd[1]: Stopped MariaDB 10.4.4 database server.
Once you’ve shut down mysql
on all of the servers, you’re ready to proceed.
To bring up the first node, you’ll need to use a special startup script. The way you’ve configured your cluster, each node that comes online tries to connect to at least one other node specified in its galera.cnf
file to get its initial state. Without using the galera_new_cluster
script that allows systemd to pass the --wsrep-new-cluster
parameter, a normal systemctl start mysql
would fail because there are no nodes running for the first node to connect with.
- sudo galera_new_cluster
This command will not display any output on successful execution. When this script succeeds, the node is registered as part of the cluster, and you can see it with the following command:
- mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
You will see the following output indicating that there is one node in the cluster:
Output+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 1 |
+--------------------+-------+
On the remaining nodes, you can start mysql
normally. They will search for any member of the cluster list that is online, so when they find one, they will join the cluster.
Now you can bring up the second node. Start mysql
:
- sudo systemctl start mysql
No output will be displayed on successful execution. You will see your cluster size increase as each node comes online:
- mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
You will see the following output indicating that the second node has joined the cluster and that there are two nodes in total.
Output+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 2 |
+--------------------+-------+
It’s now time to bring up the third node. Start mysql
:
- sudo systemctl start mysql
Run the following command to find the cluster size:
- mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
You will see the following output, which indicates that the third node has joined the cluster and that the total number of nodes in the cluster is three.
Output+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 3 |
+--------------------+-------+
At this point, the entire cluster is online and communicating successfully. Next, you can ensure the working setup by testing replication in the next section.
You’ve gone through the steps up to this point so that your cluster can perform replication from any node to any other node, known as active-active replication. Follow the steps below to test and see if the replication is working as expected.
You’ll start by making database changes on your first node. The following commands will create a database called playground and a table inside of this database called equipment.
- mysql -u root -p -e 'CREATE DATABASE playground;
- CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
- INSERT INTO playground.equipment (type, quant, color) VALUES ("slide", 2, "blue");'
In the previous command, the CREATE DATABASE statement creates a database named playground. The CREATE TABLE statement creates a table named equipment inside the playground database with an auto-incrementing identifier column called id. The type, quant, and color columns store the type, quantity, and color of the equipment, respectively. The INSERT statement inserts an entry with type slide, quantity 2, and color blue.
You now have one value in your table.
Next, look at the second node to verify that replication is working:
- mysql -u root -p -e 'SELECT * FROM playground.equipment;'
If replication is working, the data you entered on the first node will be visible here on the second:
Output+----+-------+-------+-------+
| id | type | quant | color |
+----+-------+-------+-------+
| 1 | slide | 2 | blue |
+----+-------+-------+-------+
From this same node, you can write data to the cluster:
- mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("swing", 10, "yellow");'
From the third node, you can read all of this data by querying the table again:
- mysql -u root -p -e 'SELECT * FROM playground.equipment;'
You will see the following output showing the two rows:
Output +----+-------+-------+--------+
| id | type | quant | color |
+----+-------+-------+--------+
| 1 | slide | 2 | blue |
| 2 | swing | 10 | yellow |
+----+-------+-------+--------+
Again, you can add another value from this node:
- mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("seesaw", 3, "green");'
Back on the first node, you can verify that your data is available everywhere:
- mysql -u root -p -e 'SELECT * FROM playground.equipment;'
You will see the following output that indicates the rows are available on the first node.
Output +----+--------+-------+--------+
| id | type | quant | color |
+----+--------+-------+--------+
| 1 | slide | 2 | blue |
| 2 | swing | 10 | yellow |
| 3 | seesaw | 3 | green |
+----+--------+-------+--------+
You’ve successfully verified that you can write to all of the nodes and that replication is being performed properly.
At this point, you have a working three-node Galera test cluster configured. If you plan on using a Galera cluster in a production situation, it’s recommended that you begin with no fewer than five nodes.
Before production use, you may want to take a look at some of the other state snapshot transfer (SST) agents, such as xtrabackup, which allows you to set up new nodes very quickly and without large interruptions to your active nodes. This does not affect the actual replication, but is a concern when nodes are being initialized.
While many users need the functionality of a database system like MySQL, interacting with the system solely through the MySQL command-line client requires familiarity with the SQL language, so it may not be the preferred interface for some.
phpMyAdmin was created so that users can interact with MySQL through an intuitive web interface, running alongside a PHP development environment. In this guide, we’ll discuss how to install phpMyAdmin on top of an Nginx server, and how to configure the server for increased security.
Note: There are important security considerations when using software like phpMyAdmin, since it runs on the database server, it deals with database credentials, and it enables a user to easily execute arbitrary SQL queries against your database. Because phpMyAdmin is a widely-deployed PHP application, it is frequently targeted for attack. We will go over some security measures you can take in this tutorial so that you can make informed decisions.
Before you get started with this guide, you’ll need the following available to you:
- A Debian 9 server with a firewall configured with ufw, as described in the initial server setup guide for Debian 9.
- A LEMP stack installed on your server. If you haven’t set it up yet, you can follow the guide on installing a LEMP stack on Debian 9.
- A non-root user with sudo privileges.
Because phpMyAdmin handles authentication using MySQL credentials, it is strongly advisable to install an SSL/TLS certificate to enable encrypted traffic between server and client. If you don’t have an existing domain configured with a valid certificate, you can follow the guide on How to Secure Nginx with Let’s Encrypt on Debian 9.
Warning: If you don’t have an SSL/TLS certificate installed on the server and you still want to proceed, please consider enforcing access via SSH Tunnels as explained in Step 5 of this guide.
Once you have met these prerequisites, you can go ahead with the rest of the guide.
The first thing we need to do is install phpMyAdmin on the LEMP server. We’re going to use the default Debian repositories to achieve this goal.
Let’s start by updating the server’s package index with:
- sudo apt update
Now you can install phpMyAdmin with:
- sudo apt install phpmyadmin
During the installation process, you will be prompted to choose a web server (either Apache or Lighttpd) to configure. Because we are using Nginx as the web server, we shouldn’t make a choice here. Press TAB and then OK to advance to the next step.
Next, you’ll be prompted whether to use dbconfig-common for configuring the application database. Select Yes. This will set up the internal database and administrative user for phpMyAdmin. You will be asked to define a new password for the phpmyadmin MySQL user. You can also leave it blank and let phpMyAdmin randomly create a password.
The installation will now finish. For the Nginx web server to find and serve the phpMyAdmin files correctly, we’ll need to create a symbolic link from the installation files to Nginx’s document root directory:
- sudo ln -s /usr/share/phpmyadmin /var/www/html/phpmyadmin
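The ln -s command above creates a symbolic link, and readlink is a handy way to double-check where such a link points. A throwaway demonstration (the paths here are illustrative, not the tutorial’s real directories):

```shell
# Demonstration of how symbolic links resolve; the paths here are
# illustrative and unrelated to the real phpMyAdmin directories.
mkdir -p /tmp/demo_target
ln -sfn /tmp/demo_target /tmp/demo_link
readlink /tmp/demo_link
```

On your server, running readlink /var/www/html/phpmyadmin should report /usr/share/phpmyadmin.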
Your phpMyAdmin installation is now operational. To access the interface, go to your server’s domain name or public IP address followed by /phpmyadmin in your web browser:
https://server_domain_or_IP/phpmyadmin
As mentioned before, phpMyAdmin handles authentication using MySQL credentials, which means you should use the same username and password you would normally use to connect to the database via console or via an API. If you need help creating MySQL users, check this guide on How To Manage an SQL Database.
Note: Logging into phpMyAdmin as the root MySQL user is discouraged because it represents a significant security risk. We’ll see how to disable root login in a subsequent step of this guide.
Your phpMyAdmin installation should be completely functional at this point. However, by installing a web interface, we’ve exposed our MySQL database server to the outside world. Because of phpMyAdmin’s popularity, and the large amounts of data it may provide access to, installations like these are common targets for attacks. In the following sections of this guide, we’ll see a few different ways in which we can make our phpMyAdmin installation more secure.
One of the most basic ways to protect your phpMyAdmin installation is by making it harder to find. Bots will scan for common paths like /phpmyadmin, /pma, /admin, and /mysql. Changing the interface’s URL from /phpmyadmin to something non-standard will make it much harder for automated scripts to find your phpMyAdmin installation and attempt brute-force attacks.
With our phpMyAdmin installation, we’ve created a symbolic link pointing to /usr/share/phpmyadmin, where the actual application files are located. To change phpMyAdmin’s interface URL, we will rename this symbolic link.
First, let’s navigate to the Nginx document root directory and list the files it contains to get a better sense of the change we’ll make:
- cd /var/www/html/
- ls -l
You’ll receive the following output:
Outputtotal 8
-rw-r--r-- 1 root root 612 Apr 8 13:30 index.nginx-debian.html
lrwxrwxrwx 1 root root 21 Apr 8 15:36 phpmyadmin -> /usr/share/phpmyadmin
The output shows that we have a symbolic link called phpmyadmin in this directory. We can change this link name to whatever we’d like. This will in turn change phpMyAdmin’s access URL, which can help obscure the endpoint from bots hardcoded to search common endpoint names.
Choose a name that obscures the purpose of the endpoint. In this guide, we’ll name our endpoint /nothingtosee, but you should choose an alternate name. To accomplish this, we’ll rename the link:
- sudo mv phpmyadmin nothingtosee
- ls -l
After running the above commands, you’ll receive this output:
Outputtotal 8
-rw-r--r-- 1 root root 612 Apr 8 13:30 index.nginx-debian.html
lrwxrwxrwx 1 root root 21 Apr 8 15:36 nothingtosee -> /usr/share/phpmyadmin
Now, if you go to the old URL, you’ll get a 404 error:
https://server_domain_or_IP/phpmyadmin
Your phpMyAdmin interface will now be available at the new URL we just configured:
https://server_domain_or_IP/nothingtosee
By obfuscating phpMyAdmin’s real location on the server, you’re defending its interface against automated scans and casual brute-force attempts.
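If you’d rather not invent a name yourself, you can generate a random endpoint name. This is an optional sketch, not part of the original steps:

```shell
# Generate a random 12-character lowercase endpoint name;
# 'head -c 12' takes the first 12 filtered characters.
name="$(tr -dc 'a-z0-9' < /dev/urandom | head -c 12)"
echo "$name"
```

You would then rename the link to the generated value, for example sudo mv phpmyadmin "$name".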
On MySQL as well as within regular Linux systems, the root account is a special administrative account with unrestricted access to the system. In addition to being a privileged account, it’s a known login name, which makes it an obvious target for brute-force attacks. To minimize risks, we’ll configure phpMyAdmin to deny any login attempts coming from the user root. This way, even if you provide valid credentials for the user root, you’ll still get an “access denied” error and won’t be allowed to log in.
Because we chose to use dbconfig-common to configure and store phpMyAdmin settings, the default configuration is currently stored in the database. We’ll need to create a new config.inc.php file to define our custom settings.
Even though the PHP files for phpMyAdmin are located inside /usr/share/phpmyadmin, the application uses configuration files located at /etc/phpmyadmin. We will create a new custom settings file inside /etc/phpmyadmin/conf.d, and name it pma_secure.php:
- sudo nano /etc/phpmyadmin/conf.d/pma_secure.php
The following configuration file contains the necessary settings to disable passwordless logins (AllowNoPassword set to false) and root login (AllowRoot set to false):
<?php
# PhpMyAdmin Settings
# This should be set to a random string of at least 32 chars
$cfg['blowfish_secret'] = '3!#32@3sa(+=_4?),5XP_:U%%8\34sdfSdg43yH#{o';
$i=0;
$i++;
$cfg['Servers'][$i]['auth_type'] = 'cookie';
$cfg['Servers'][$i]['AllowNoPassword'] = false;
$cfg['Servers'][$i]['AllowRoot'] = false;
?>
Save the file when you’re done editing by pressing CTRL + X, then y to confirm changes, and ENTER. The changes will apply automatically. If you reload the login page now and try to log in as root, you will get an Access Denied error.
Root login is now prohibited on your phpMyAdmin installation. This security measure will block brute-force scripts from trying to guess the root database password on your server. Moreover, it will enforce the usage of less-privileged MySQL accounts for accessing phpMyAdmin’s web interface, which by itself is an important security practice.
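One follow-up on the configuration above: the blowfish_secret value should be your own random string of at least 32 characters rather than the example shown. A sketch for generating one with the OpenSSL suite (assumed to be installed, as later steps also rely on it):

```shell
# Generate a random secret suitable for blowfish_secret;
# base64-encoding 24 random bytes yields exactly 32 characters.
secret="$(openssl rand -base64 24)"
echo "$secret"
```

Paste the printed value into pma_secure.php in place of the example string.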
## Step 4 — Creating an Authentication Gateway
Hiding your phpMyAdmin installation in an unusual location might sidestep some automated bots scanning the network, but it’s useless against targeted attacks. To better protect a web application with restricted access, it’s generally more effective to stop attackers before they can even reach the application. This way, they’ll be unable to use generic exploits and brute-force attacks to guess access credentials.
In the specific case of phpMyAdmin, it’s even more important to keep the login interface locked away. By keeping it open to the world, you’re offering a brute-force platform for attackers to guess your database credentials.
Adding an extra layer of authentication to your phpMyAdmin installation enables you to increase security. Users will be required to pass through an HTTP authentication prompt before ever seeing the phpMyAdmin login screen. Most web servers, including Nginx, provide this capability natively.
To set this up, we first need to create a password file to store the authentication credentials. Nginx requires that passwords be hashed using the crypt() function. The OpenSSL suite, which should already be installed on your server, includes this functionality.
To create a hashed password, type:
- openssl passwd
You will be prompted to enter and confirm the password that you wish to use. The utility will then display a hashed version of the password, which will look something like this:
OutputO5az.RSPzd.HE
Copy this value, as you will need to paste it into the authentication file we’ll be creating.
Now, create an authentication file. We’ll call this file pma_pass and place it in the Nginx configuration directory:
- sudo nano /etc/nginx/pma_pass
In this file, you’ll specify the username you would like to use, followed by a colon (:), followed by the hashed password you received from the openssl passwd utility.
We are going to name our user sammy, but you should choose a different username. The file should look like this:
sammy:O5az.RSPzd.HE
Save and close the file when you’re done.
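If you prefer to build this file without interactive prompts, the same hash can be produced in one shot. This is a sketch, not part of the original steps; the username sammy and the password are placeholders, and openssl passwd -1 produces an MD5-crypt hash, one of the formats Nginx’s auth_basic accepts:

```shell
# Build a pma_pass-style line without an interactive prompt.
# 'sammy' and the password below are placeholder values.
user="sammy"
hash="$(openssl passwd -1 'your_secure_password')"
printf '%s:%s\n' "$user" "$hash"
```

You could then append the printed line to /etc/nginx/pma_pass with sudo tee.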
Now we’re ready to modify the Nginx configuration file. For this guide, we’ll use the configuration file located at /etc/nginx/sites-available/example.com. You should use the relevant Nginx configuration file for the web location where phpMyAdmin is currently hosted. Open this file in your text editor to get started:
- sudo nano /etc/nginx/sites-available/example.com
Locate the server block, and the location / section within it. We need to create a new location section within this block to match phpMyAdmin’s current path on the server. In this guide, phpMyAdmin’s location relative to the web root is /nothingtosee:
server {
. . .
location / {
try_files $uri $uri/ =404;
}
location /nothingtosee {
# Settings for phpMyAdmin will go here
}
. . .
}
Within this block, we’ll need to set up two different directives: auth_basic, which defines the message that will be displayed on the authentication prompt, and auth_basic_user_file, pointing to the file we just created. This is how your configuration file should look when you’re finished:
server {
. . .
location /nothingtosee {
auth_basic "Admin Login";
auth_basic_user_file /etc/nginx/pma_pass;
}
. . .
}
Save and close the file when you’re done. To check if the configuration file is valid, you can run:
- sudo nginx -t
The following output is expected:
Outputnginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
To activate the new authentication gate, you must reload the web server:
- sudo systemctl reload nginx
Now, if you visit the phpMyAdmin URL in your web browser, you should be prompted for the username and password you added to the pma_pass file:
https://server_domain_or_IP/nothingtosee
Once you enter your credentials, you’ll be taken to the standard phpMyAdmin login page.
Note: If refreshing the page does not work, you may have to clear your cache or use a different browser session if you’ve already been using phpMyAdmin.
In addition to providing an extra layer of security, this gateway will help keep your MySQL logs clean of spammy authentication attempts.
## Step 5 — Setting Up Access via Encrypted Tunnels (Optional)
For increased security, it is possible to lock down your phpMyAdmin installation to authorized hosts only. You can whitelist authorized hosts in your Nginx configuration file, so that any request coming from an IP address that is not on the list will be denied.
Even though this feature alone can be enough in some use cases, it’s not always the best long-term solution, mainly due to the fact that most people don’t access the Internet from static IP addresses. As soon as you get a new IP address from your Internet provider, you’ll be unable to get to the phpMyAdmin interface until you update the Nginx configuration file with your new IP address.
For a more robust long-term solution, you can use IP-based access control to create a setup in which users will only have access to your phpMyAdmin interface if they’re accessing from either an authorized IP address or localhost via SSH tunneling. We’ll see how to set this up in the sections below.
Combining IP-based access control with SSH tunneling greatly increases security because it fully blocks access coming from the public internet (except for authorized IPs), in addition to providing a secure channel between user and server through the use of encrypted tunnels.
On Nginx, IP-based access control can be defined in the corresponding location block of a given site, using the directives allow and deny. For instance, if we want to only allow requests coming from a given host, we should include the following two lines, in this order, inside the relevant location block for the site we would like to protect:
allow hostname_or_IP;
deny all;
You can allow as many hosts as you want; you only need to include one allow line for each authorized host or IP inside the respective location block for the site you’re protecting. The directives will be evaluated in the same order as they are listed, until a match is found or the request is finally denied due to the deny all directive.
We’ll now configure Nginx to only allow requests coming from localhost or your current IP address. First, you’ll need to know the current public IP address your local machine is using to connect to the Internet. There are various ways to obtain this information; for simplicity, we’re going to use the service provided by ipinfo.io. You can either open the URL https://ipinfo.io/ip in your browser, or run the following command from your local machine:
- curl https://ipinfo.io/ip
You should get a simple IP address as output, like this:
Output203.0.113.111
That is your current public IP address. We’ll configure phpMyAdmin’s location block to only allow requests coming from that IP, in addition to localhost. We’ll need to edit the configuration block for phpMyAdmin once again, inside /etc/nginx/sites-available/example.com.
Open the Nginx configuration file using your command-line editor of choice:
- sudo nano /etc/nginx/sites-available/example.com
Because we already have an access rule within our current configuration, we need to combine it with IP-based access control using the directive satisfy all. This way, we can keep the current HTTP authentication prompt for increased security.
This is how your phpMyAdmin Nginx configuration should look after you’re done editing:
server {
. . .
location /nothingtosee {
satisfy all; #requires both conditions
allow 203.0.113.111; #allow your IP
allow 127.0.0.1; #allow localhost via SSH tunnels
deny all; #deny all other sources
auth_basic "Admin Login";
auth_basic_user_file /etc/nginx/pma_pass;
}
. . .
}
Remember to replace nothingtosee with the actual path where phpMyAdmin can be found, and the example IP address (203.0.113.111) with your current public IP address.
Save and close the file when you’re done. To check if the configuration file is valid, you can run:
- sudo nginx -t
The following output is expected:
Outputnginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Now reload the web server so the changes take effect:
- sudo systemctl reload nginx
Because your IP address is explicitly listed as an authorized host, your access shouldn’t be disturbed. Anyone else trying to access your phpMyAdmin installation will now get a 403 error (Forbidden):
https://server_domain_or_IP/nothingtosee
In the next section, we’ll see how to use SSH tunneling to access the web server through local requests. This way, you’ll still be able to access phpMyAdmin’s interface even when your IP address changes.
SSH tunneling works as a way of redirecting network traffic through encrypted channels. By running an ssh command similar to what you would use to log into a server, you can create a secure “tunnel” between your local machine and that server. All traffic coming in on a given local port can then be redirected through the encrypted tunnel, using the remote server as a proxy before reaching out to the internet. It’s similar to what happens when you use a VPN (Virtual Private Network); however, SSH tunneling is much simpler to set up.
We’ll use SSH tunneling to proxy our requests to the remote web server running phpMyAdmin. By creating a tunnel between your local machine and the server where phpMyAdmin is installed, you can redirect local requests to the remote web server. More importantly, traffic will be encrypted and requests will reach Nginx as if they’re coming from localhost. This way, no matter what IP address you’re connecting from, you’ll be able to securely access phpMyAdmin’s interface.
Because the traffic between your local machine and the remote web server will be encrypted, this is a safe alternative for situations where you can’t have an SSL/TLS certificate installed on the web server running phpMyAdmin.
From your local machine, run this command whenever you need access to phpMyAdmin:
- ssh user@server_domain_or_IP -L 8000:localhost:80 -L 8443:localhost:443 -N
Let’s examine each part of the command:
- user: the system user to log in as on the remote server.
- server_domain_or_IP: the server where phpMyAdmin is running.
- -L 8000:localhost:80: redirects HTTP traffic on local port 8000 to the remote server’s port 80.
- -L 8443:localhost:443: redirects HTTPS traffic on local port 8443 to the remote server’s port 443.
- -N: tells SSH not to execute a remote command, since we only want to forward ports.
Note: This command will block the terminal until interrupted with CTRL+C, in which case it will end the SSH connection and stop the packet redirection. If you’d prefer to run this command in the background, you can use the SSH option -f.
Now, go to your browser and replace server_domain_or_IP with localhost:PORT, where PORT is either 8000 for HTTP or 8443 for HTTPS:
http://localhost:8000/nothingtosee
https://localhost:8443/nothingtosee
Note: If you’re accessing phpMyAdmin via https, you might get an alert message questioning the security of the SSL certificate. This happens because the domain name you’re using (localhost) doesn’t match the address registered within the certificate (domain where phpMyAdmin is actually being served). It is safe to proceed.
All requests on localhost:8000 (HTTP) and localhost:8443 (HTTPS) are now being redirected through a secure tunnel to your remote phpMyAdmin application. Not only have you increased security by disabling public access to your phpMyAdmin installation, you have also protected all traffic between your local computer and the remote server by using an encrypted tunnel to send and receive data.
If you’d like to enforce the use of SSH tunneling for anyone who wants access to your phpMyAdmin interface (including you), you can do that by removing any other authorized IPs from the Nginx configuration file, leaving 127.0.0.1 as the only allowed host to access that location. Considering nobody will be able to make direct requests to phpMyAdmin, it is safe to remove HTTP authentication in order to simplify your setup. This is how your configuration file would look in such a scenario:
server {
. . .
location /nothingtosee {
allow 127.0.0.1; #allow localhost only
deny all; #deny all other sources
}
. . .
}
Once you reload Nginx’s configuration with sudo systemctl reload nginx, your phpMyAdmin installation will be locked down, and users will be required to use SSH tunnels in order to access phpMyAdmin’s interface via redirected requests.
In this tutorial, we saw how to install phpMyAdmin on Debian 9 running Nginx as the web server. We also covered advanced methods to secure a phpMyAdmin installation, such as disabling root login, creating an extra layer of authentication, and using SSH tunneling to access a phpMyAdmin installation via local requests only.
After completing this tutorial, you should be able to manage your MySQL databases from a reasonably secure web interface. This user interface exposes most of the functionality available via the MySQL command line. You can browse databases and schema, execute queries, and create new data sets and structures.
Go, also known as golang, is a modern, open-source programming language developed by Google. Increasingly popular for many applications, Go takes a minimalist approach to development, helping you build reliable and efficient software.
This tutorial will guide you through downloading and installing Go, as well as compiling and executing a basic “Hello, World!” program on a Debian 9 server.
To complete this tutorial, you will need access to a Debian 9 server and a non-root user with sudo privileges, as described in Initial Server Setup with Debian 9.
In this step, we’ll install Go on your server.
First, install curl so you will be able to grab the latest Go release:
- sudo apt install curl
Next, visit the official Go downloads page and find the URL for the current binary release’s tarball. Make sure you copy the link for the latest version that is compatible with a 64-bit architecture.
From your home directory, use curl to retrieve the tarball:
- curl -O https://dl.google.com/go/go1.12.5.linux-amd64.tar.gz
Although the tarball came from a genuine source, it is best practice to verify both the authenticity and integrity of items downloaded from the Internet. This verification certifies that the file was neither tampered with nor corrupted during the download process. The sha256sum command produces a unique 256-bit hash:
- sha256sum go1.12.5.linux-amd64.tar.gz
Outputaea86e3c73495f205929cfebba0d63f1382c8ac59be081b6351681415f4063cf go1.12.5.linux-amd64.tar.gz
Compare the hash in your output to the checksum value on the Go download page. If they match, then it is safe to conclude that the download is legitimate.
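The comparison can also be automated so a mismatch is impossible to miss. The following sketch wraps it in a small shell function; the demonstration file and hash are our own example (the SHA-256 of the string "hello" plus a newline), not part of the Go release:

```shell
# verify_sha256 EXPECTED FILE -> exit status 0 only if the hashes match.
verify_sha256() {
  expected="$1"; file="$2"
  actual="$(sha256sum "$file" | awk '{print $1}')"
  [ "$expected" = "$actual" ]
}

# Demonstrate with a known input: the SHA-256 of "hello" plus a newline.
printf 'hello\n' > /tmp/demo.txt
if verify_sha256 "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03" /tmp/demo.txt; then
  echo "checksum OK"
fi
```

For the Go tarball, you would pass the checksum from the download page and the tarball’s filename instead.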
With Go downloaded and the integrity of the file validated, let’s proceed with the installation.
We’ll now use tar to extract the tarball. The x flag tells tar to extract, v tells it we want verbose output, including a list of the files being extracted, and f tells it we’ll specify a filename:
- tar xvf go1.12.5.linux-amd64.tar.gz
You should now have a directory called go in your home directory. Recursively change the owner and group of this directory to root, and move it to /usr/local:
- sudo chown -R root:root ./go
- sudo mv go /usr/local
Note: Although /usr/local/go is the officially-recommended location, some users may prefer or require different paths.
At this point, using Go would require specifying the full path to its install location on the command line. To make interacting with Go more user-friendly, we’ll set a few paths in your environment.
First, open your shell profile, where you’ll tell your system where to find Go’s binaries and your workspace:
- nano ~/.profile
At the end of the file, add the following lines:
export GOPATH=$HOME/work
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
If you chose a different installation location for Go, then you should add the following lines to this file instead of the lines shown above. In this example, we are adding the lines that would be required if you installed Go in your home directory:
export GOROOT=$HOME/go
export GOPATH=$HOME/work
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
With the appropriate lines pasted into your profile, save and close the file.
Next, refresh your profile by running:
- source ~/.profile
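As a quick sanity check that doesn’t require Go itself, you can simulate what those export lines do and confirm the Go binary directory ends up on PATH:

```shell
# Simulate the effect of the profile lines and confirm the Go binary
# directory lands on PATH (no Go installation required for this check).
GOPATH="$HOME/work"
PATH="$PATH:/usr/local/go/bin:$GOPATH/bin"
case ":$PATH:" in
  *":/usr/local/go/bin:"*) echo "go bin on PATH" ;;
esac
```

After sourcing your real profile, which go should then resolve to /usr/local/go/bin/go.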
With the Go installation in place and the necessary environment paths set, let’s confirm that our setup works by composing a short program.
Now that Go is installed and the paths are set for your server, you can ensure that Go is working as expected.
Create a new directory for your Go workspace, which is where Go will build its files:
- mkdir $HOME/work
Then, create a directory hierarchy in this folder so that you will be able to create your test file. We’ll use the directory my_project as an example:
- mkdir -p $HOME/work/src/my_project/hello
Next, you can create a traditional “Hello World” Go file:
- nano ~/work/src/my_project/hello/hello.go
Inside your editor, add the following code to the file, which uses the main Go package, imports the fmt formatted I/O package, and defines a main function that prints “Hello, World!” when run:
package main
import "fmt"
func main() {
fmt.Printf("Hello, World!\n")
}
When it runs, this program will print “Hello, World!,” indicating that Go programs are compiling correctly.
Save and close the file, then compile it by invoking the Go command install:
- go install my_project/hello
With the program compiled, you can run it by executing the command:
- hello
Go is successfully installed and functional if you see the following output:
OutputHello, World!
You can see where the compiled hello binary is installed by using the which command:
- which hello
Output/home/sammy/work/bin/hello
The “Hello, World!” program established that you have a Go development environment.
By downloading and installing the latest Go package and setting its paths, you now have a system to use for Go development. To learn more about working with Go, see our development series How To Code in Go. You can also consult the official documentation on How to Write Go Code.
Additionally, you can read some Go tips from our development team at DigitalOcean.
MQTT is a machine-to-machine messaging protocol, designed to provide lightweight publish/subscribe communication to “Internet of Things” devices. It is commonly used for geo-tracking fleets of vehicles, home automation, environmental sensor networks, and utility-scale data collection.
Mosquitto is a popular MQTT server (or broker, in MQTT parlance) that has great community support and is easy to install and configure.
In this tutorial, we’ll install Mosquitto and set up our broker to use SSL to secure our password-protected MQTT communications.
Before starting this tutorial, you will need:
A Debian 9 server with a non-root, sudo-enabled user and basic firewall set up, as detailed in this Debian 9 server setup tutorial.
A domain name pointed at your server, as documented in our DigitalOcean DNS product documentation. This tutorial will use mqtt.example.com throughout.
An auto-renewable Let’s Encrypt SSL certificate for use with your domain and Mosquitto, generated using the Certbot tool. You can learn how to set this up in How To Use Certbot Standalone Mode to Retrieve Let’s Encrypt SSL Certificates on Debian 9. You can add systemctl restart mosquitto as a renew_hook in Step 4. Be sure to use the same domain configured in the previous prerequisite step.
Debian 9 has a fairly recent version of Mosquitto in its default software repository, so we can install it from there.
First, log in using your non-root user and update the package lists using apt update:
- sudo apt update
Now, install Mosquitto using apt install:
- sudo apt install mosquitto mosquitto-clients
By default, Debian will start the Mosquitto service after install. Let’s test the default configuration. We’ll use one of the Mosquitto clients we just installed to subscribe to a topic on our broker.
Topics are labels that you publish messages to and subscribe to. They are arranged as a hierarchy, so you could have sensors/outside/temp and sensors/outside/humidity, for example. How you arrange topics is up to you and your needs. Throughout this tutorial we will use a simple test topic to test our configuration changes.
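As an aside, subscriptions can also use wildcards: `+` matches exactly one topic level and `#` matches any number of trailing levels. Here is a minimal Python sketch of those matching rules — an illustration of the semantics only, not Mosquitto’s implementation, and it assumes a valid filter in which `#` appears only as the last level:

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Sketch of MQTT topic-filter matching: '+' matches exactly one
    level, '#' (assumed to be the last level) matches all remaining."""
    p, t = pattern.split("/"), topic.split("/")
    for i, part in enumerate(p):
        if part == "#":
            return True          # multi-level wildcard swallows the rest
        if i >= len(t):
            return False         # topic ran out of levels
        if part != "+" and part != t[i]:
            return False         # literal level must match exactly
    return len(p) == len(t)      # no leftover topic levels allowed

print(topic_matches("sensors/+/temp", "sensors/outside/temp"))      # True
print(topic_matches("sensors/#", "sensors/outside/humidity"))       # True
print(topic_matches("sensors/+/temp", "sensors/outside/humidity"))  # False
```

This is why a single subscription such as sensors/# can collect every reading under the sensors hierarchy.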
Log in to your server a second time, so you have two terminals side-by-side. In the new terminal, use mosquitto_sub to subscribe to the test topic:
- mosquitto_sub -h localhost -t test
-h is used to specify the hostname of the MQTT server, and -t is the topic name. You’ll see no output after hitting ENTER because mosquitto_sub is waiting for messages to arrive. Switch back to your other terminal and publish a message:
- mosquitto_pub -h localhost -t test -m "hello world"
The options for mosquitto_pub are the same as for mosquitto_sub, though this time we use the additional -m option to specify our message. Hit ENTER, and you should see hello world pop up in the other terminal. You’ve sent your first MQTT message!
Enter CTRL+C in the second terminal to exit out of mosquitto_sub, but keep the connection to the server open. We’ll use it again for another test in Step 5.
Next, we’ll secure our installation using password-based authentication.
Let’s configure Mosquitto to use passwords. Mosquitto includes a utility called mosquitto_passwd for generating a special password file. This command will prompt you to enter a password for the specified username, and place the results in /etc/mosquitto/passwd.
- sudo mosquitto_passwd -c /etc/mosquitto/passwd sammy
Now we’ll open up a new configuration file for Mosquitto and tell it to use this password file to require logins for all connections:
- sudo nano /etc/mosquitto/conf.d/default.conf
This should open an empty file. Paste in the following:
allow_anonymous false
password_file /etc/mosquitto/passwd
Be sure to leave a trailing newline at the end of the file.
allow_anonymous false will disable all non-authenticated connections, and the password_file line tells Mosquitto where to look for user and password information. Save and exit the file.
Now we need to restart Mosquitto and test our changes.
- sudo systemctl restart mosquitto
Try to publish a message without a password:
- mosquitto_pub -h localhost -t "test" -m "hello world"
The message should be rejected:
Output
Connection Refused: not authorised.
Error: The connection was refused.
Before we try again with the password, switch to your second terminal window again, and subscribe to the ‘test’ topic, using the username and password this time:
- mosquitto_sub -h localhost -t test -u "sammy" -P "password"
It should connect and sit, waiting for messages. You can leave this terminal open and connected for the rest of the tutorial, as we’ll periodically send it test messages.
Now publish a message with your other terminal, again using the username and password:
- mosquitto_pub -h localhost -t "test" -m "hello world" -u "sammy" -P "password"
The message should go through as in Step 1. We’ve successfully added password protection to Mosquitto. Unfortunately, we’re sending passwords unencrypted over the internet. We’ll fix that next by adding SSL encryption to Mosquitto.
To enable SSL encryption, we need to tell Mosquitto where our Let’s Encrypt certificates are stored. Open up the configuration file we previously started:
- sudo nano /etc/mosquitto/conf.d/default.conf
Paste in the following at the end of the file, leaving the two lines we already added:
. . .
listener 1883 localhost
listener 8883
certfile /etc/letsencrypt/live/mqtt.example.com/cert.pem
cafile /etc/letsencrypt/live/mqtt.example.com/chain.pem
keyfile /etc/letsencrypt/live/mqtt.example.com/privkey.pem
Again, be sure to leave a trailing newline at the end of the file.
We’re adding two separate listener blocks to the config. The first, listener 1883 localhost, updates the default MQTT listener on port 1883, which is what we’ve been connecting to so far. 1883 is the standard unencrypted MQTT port. The localhost portion of the line instructs Mosquitto to bind this port only to the localhost interface, so it’s not accessible externally. External requests would have been blocked by our firewall anyway, but it’s good to be explicit.
listener 8883 sets up an encrypted listener on port 8883. This is the standard port for MQTT + SSL, often referred to as MQTTS. The next three lines, certfile, cafile, and keyfile, all point Mosquitto to the appropriate Let’s Encrypt files to set up the encrypted connections.
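At this point, the complete /etc/mosquitto/conf.d/default.conf should look like this, with your own domain substituted for mqtt.example.com:

```
allow_anonymous false
password_file /etc/mosquitto/passwd

listener 1883 localhost

listener 8883
certfile /etc/letsencrypt/live/mqtt.example.com/cert.pem
cafile /etc/letsencrypt/live/mqtt.example.com/chain.pem
keyfile /etc/letsencrypt/live/mqtt.example.com/privkey.pem
```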
Save and exit the file, then restart Mosquitto to update the settings:
- sudo systemctl restart mosquitto
Update the firewall to allow connections to port 8883:
- sudo ufw allow 8883
Output
Rule added
Rule added (v6)
Now we test again using mosquitto_pub, with a few different options for SSL:
- mosquitto_pub -h mqtt.example.com -t test -m "hello again" -p 8883 --capath /etc/ssl/certs/ -u "sammy" -P "password"
Note that we’re using the full hostname instead of localhost. Because our SSL certificate is issued for mqtt.example.com, if we attempt a secure connection to localhost we’ll get an error saying the hostname does not match the certificate hostname (even though both point to the same Mosquitto server).
--capath /etc/ssl/certs/ enables SSL for mosquitto_pub and tells it where to look for root certificates. These are typically installed by your operating system, so the path differs for macOS, Windows, etc. mosquitto_pub uses the root certificate to verify that the Mosquitto server’s certificate was properly signed by the Let’s Encrypt certificate authority. It’s important to note that mosquitto_pub and mosquitto_sub will not attempt an SSL connection without this option (or the similar --cafile option), even if you’re connecting to the standard secure port of 8883.
If all goes well with the test, we’ll see hello again show up in the other mosquitto_sub terminal. This means your server is fully set up! If you’d like to extend the MQTT protocol to work with websockets, you can follow the final step.
In order to speak MQTT using JavaScript from within web browsers, the protocol was adapted to work over standard websockets. If you don’t need this functionality, you may skip this step.
We need to add one more listener block to our Mosquitto config:
- sudo nano /etc/mosquitto/conf.d/default.conf
At the end of the file, add the following:
. . .
listener 8083
protocol websockets
certfile /etc/letsencrypt/live/mqtt.example.com/cert.pem
cafile /etc/letsencrypt/live/mqtt.example.com/chain.pem
keyfile /etc/letsencrypt/live/mqtt.example.com/privkey.pem
Again, be sure to leave a trailing newline at the end of the file.
This is mostly the same as the previous block, except for the port number and the protocol websockets line. There is no official standardized port for MQTT over websockets, but 8083 is the most common.
Save and exit the file, then restart Mosquitto.
- sudo systemctl restart mosquitto
Now, open up port 8083 in the firewall:
- sudo ufw allow 8083
To test this functionality, we’ll use a public, browser-based MQTT client. There are a few out there, but the Eclipse Paho JavaScript Client is simple and straightforward to use. Open the Paho client in your browser and fill out the connection information as follows:
Enter mqtt.example.com as the host and 8083 as the port. The remaining fields can be left at their default values.
After pressing Connect, the Paho browser-based client will connect to your Mosquitto server.
To publish a message, navigate to the Publish Message pane, fill out Topic as test, and enter any message in the Message section. Next, press Publish. The message will show up in your mosquitto_sub terminal.
We’ve now set up a secure, password-protected and SSL-secured MQTT server. This can serve as a robust and secure messaging platform for whatever projects you dream up. A wide range of popular software and hardware in the MQTT ecosystem speaks the protocol. If you already have a favorite hardware platform or software language, it probably has MQTT capabilities. Have fun getting your “things” talking to each other!
Let’s Encrypt is a service offering free SSL certificates through an automated API. The most popular Let’s Encrypt client is EFF’s Certbot.
Certbot offers a variety of ways to validate your domain, fetch certificates, and automatically configure Apache and Nginx. In this tutorial, we’ll discuss Certbot’s standalone mode and how to use it to secure other types of services, such as a mail server or a message broker like RabbitMQ.
We won’t discuss the details of SSL configuration, but when you are done you will have a valid certificate that is automatically renewed. Additionally, you will be able to automate reloading your service to pick up the renewed certificate.
Before starting this tutorial, you will need a Debian 9 server and a domain name pointed at it; this tutorial will use example.com throughout.
Debian 9 includes the Certbot client in its default repository, and it should be up-to-date enough for basic use. If you need to do DNS-based challenges or use other newer Certbot features, you should instead install from the stretch-backports repo as instructed by the official Certbot documentation.
Use apt to install the certbot package:
- sudo apt install certbot
You may test your installation by asking certbot to output its version number:
- certbot --version
Output
certbot 0.28.0
Now that we have Certbot installed, let’s run it to get our certificate.
Certbot needs to answer a cryptographic challenge issued by the Let’s Encrypt API in order to prove we control our domain. It uses port 80 (HTTP) or 443 (HTTPS) to accomplish this. Open up the appropriate port in your firewall:
- sudo ufw allow 80
Substitute 443 above if that’s the port you’re using. ufw will output confirmation that your rule was added:
Output
Rule added
Rule added (v6)
We can now run Certbot to get our certificate. We’ll use the --standalone option to tell Certbot to handle the challenge using its own built-in web server. The --preferred-challenges option instructs Certbot to use port 80 or port 443. If you’re using port 80, you want --preferred-challenges http; for port 443 it would be --preferred-challenges tls-sni. Finally, the -d flag is used to specify the domain you’re requesting a certificate for. You can add multiple -d options to cover multiple domains in one certificate.
- sudo certbot certonly --standalone --preferred-challenges http -d example.com
When running the command, you will be prompted to enter an email address and agree to the terms of service. After doing so, you should see a message telling you the process was successful and where your certificates are stored:
Output
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/example.com/privkey.pem
Your cert will expire on 2019-08-28. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
We’ve got our certificates. Let’s take a look at what we downloaded and how to use the files with our software.
Configuring your application for SSL is beyond the scope of this article, as each application has different requirements and configuration options, but let’s take a look at what Certbot has downloaded for us. Use ls to list out the directory that holds our keys and certificates:
- sudo ls /etc/letsencrypt/live/example.com
Output
cert.pem  chain.pem  fullchain.pem  privkey.pem  README
The README file in this directory has more information about each of these files. Most often you’ll only need two of them:
privkey.pem: the private key for the certificate. This needs to be kept safe and secret, which is why most of the /etc/letsencrypt directory has very restrictive permissions and is accessible only by the root user. Most software configuration will refer to this as something like ssl-certificate-key or ssl-certificate-key-file.
fullchain.pem: our certificate, bundled with all intermediate certificates. Most software will use this file for the actual certificate, and will refer to it in its configuration with a name like ssl-certificate.
For more information on the other files present, refer to the “Where are my certificates” section of the Certbot docs.
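As a quick illustration of how software typically consumes the two files (the exact directive names vary by application), an nginx server block would reference them like this:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # fullchain.pem is the certificate, privkey.pem the secret key
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```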
Some software will need its certificates in other formats, in other locations, or with other user permissions. It is best to leave everything in the letsencrypt directory and not change any permissions in there (permissions will just be overwritten upon renewal anyway), but sometimes that’s just not an option. In that case, you’ll need to write a script to move files and change permissions as needed. This script will need to be run whenever Certbot renews the certificates, which we’ll talk about next.
Let’s Encrypt’s certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The certbot package we installed takes care of this for us by adding a renew script to /etc/cron.d. This script runs twice a day and will renew any certificate that’s within thirty days of expiration.
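The renewal window is simple to reason about. Here is a small Python sketch of the decision the cron job makes (illustrative only; the dates match the sample output earlier):

```python
from datetime import date, timedelta

def should_renew(expiry: date, today: date, window_days: int = 30) -> bool:
    """Renew once the certificate is within thirty days of expiration."""
    return expiry - today <= timedelta(days=window_days)

expiry = date(2019, 8, 28)                      # from the sample output above
print(should_renew(expiry, date(2019, 6, 15)))  # False: ~74 days remain
print(should_renew(expiry, date(2019, 8, 10)))  # True: 18 days remain
```

Since certificates live ninety days and renew at thirty days out, each certificate is effectively replaced about every sixty days.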
With our certificates renewing automatically, we still need a way to run other tasks after a renewal. We need to at least restart or reload our server to pick up the new certificates, and as mentioned in Step 3 we may need to manipulate the certificate files in some way to make them work with the software we’re using. This is the purpose of Certbot’s renew_hook option.
To add a renew_hook, we update Certbot’s renewal config file. Certbot remembers all the details of how you first fetched the certificate, and will run with the same options upon renewal. We just need to add in our hook. Open the config file with your favorite editor:
- sudo nano /etc/letsencrypt/renewal/example.com.conf
A text file will open with some configuration options. Add your hook on the last line:
renew_hook = systemctl reload rabbitmq
Update the command above to whatever you need to run to reload your server or run your custom file munging script. On Debian, you’ll mostly be using systemctl to reload a service. Save and close the file, then run a Certbot dry run to make sure the syntax is ok:
- sudo certbot renew --dry-run
If you see no errors, you’re all set. Certbot is set to renew when necessary and run any commands needed to get your service using the new files.
In this tutorial, we’ve installed the Certbot Let’s Encrypt client, downloaded an SSL certificate using standalone mode, and enabled automatic renewals with renew hooks. This should give you a good start on using Let’s Encrypt certificates with services other than your typical web server.
For more information, please refer to Certbot’s documentation.
One of the easiest ways of guarding against out-of-memory errors in applications is to add some swap space to your server. In this guide, we will cover how to add a swap file to a Debian 9 server.
Warning: Although swap is generally recommended for systems using traditional spinning hard drives, using swap with SSDs can cause issues with hardware degradation over time. Due to this consideration, we do not recommend enabling swap on DigitalOcean or any other provider that utilizes SSD storage. Doing so can impact the reliability of the underlying hardware for you and your neighbors. This guide is provided as reference for users who may have spinning disk systems elsewhere.
If you need to improve the performance of your server on DigitalOcean, we recommend upgrading your Droplet. This will lead to better results in general and will decrease the likelihood of contributing to hardware issues that can affect your service.
Swap is an area on a hard drive that has been designated as a place where the operating system can temporarily store data that it can no longer hold in RAM. Basically, this gives you the ability to increase the amount of information that your server can keep in its working “memory”, with some caveats. The swap space on the hard drive will be used mainly when there is no longer sufficient space in RAM to hold in-use application data.
The information written to disk will be significantly slower than information kept in RAM, but the operating system will prefer to keep running application data in memory and use swap for the older data. Overall, having swap space as a fallback for when your system’s RAM is depleted can be a good safety net against out-of-memory exceptions on systems with non-SSD storage available.
Before we begin, we can check if the system already has some swap space available. It is possible to have multiple swap files or swap partitions, but generally one should be enough.
We can see if the system has any configured swap by typing:
- sudo swapon --show
If you don’t get back any output, this means your system does not have swap space available currently.
You can verify that there is no active swap using the free utility:
- free -h
Output
              total        used        free      shared  buff/cache   available
Mem:           996M         44M        639M        4.5M        312M        812M
Swap:            0B          0B          0B
As you can see in the Swap row of the output, no swap is active on the system.
Before we create our swap file, we’ll check our current disk usage to make sure we have enough space. Do this by entering:
- df -h
Output
Filesystem      Size  Used Avail Use% Mounted on
udev 488M 0 488M 0% /dev
tmpfs 100M 4.5M 96M 5% /run
/dev/vda1 25G 989M 23G 5% /
tmpfs 499M 0 499M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 499M 0 499M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user/1001
The device with / in the Mounted on column is our disk in this case. We have plenty of space available in this example (only 989M used). Your usage will probably be different.
Although there are many opinions about the appropriate size of a swap space, it really depends on your personal preferences and your application requirements. Generally, an amount equal to or double the amount of RAM on your system is a good starting point. Another good rule of thumb is that anything over 4G of swap is probably unnecessary if you are just using it as a RAM fallback.
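The rule of thumb above can be expressed as a tiny calculation. This is a sketch of the heuristic in this guide, not a hard rule; the 4G cap applies when swap is used only as a RAM fallback:

```python
def suggested_swap_gib(ram_gib: float) -> float:
    """Heuristic from the text: up to double the RAM, but anything
    over 4 GiB is probably unnecessary for a plain RAM fallback."""
    return min(2 * ram_gib, 4.0)

print(suggested_swap_gib(1))  # 2.0 -> a 1G-2G swap file suits a 1G server
print(suggested_swap_gib(8))  # 4.0 -> capped; more swap rarely helps here
```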
Now that we know our available hard drive space, we can create a swap file on our filesystem. We will allocate a file of the swap size that we want, called swapfile, in our root (/) directory.
The best way of creating a swap file is with the fallocate program. This command instantly creates a file of the specified size.
Since the server in our example has 1G of RAM, we will create a 1G file in this guide. Adjust this to meet the needs of your own server:
- sudo fallocate -l 1G /swapfile
We can verify that the correct amount of space was reserved by typing:
- ls -lh /swapfile
Output
-rw-r--r-- 1 root root 1.0G May 29 17:34 /swapfile
Our file has been created with the correct amount of space set aside.
Now that we have a file of the correct size available, we need to actually turn this into swap space.
First, we need to lock down the permissions of the file so that only the users with root privileges can read the contents. This prevents normal users from being able to access the file, which would have significant security implications.
Make the file only accessible to root by typing:
- sudo chmod 600 /swapfile
Verify the permissions change by typing:
- ls -lh /swapfile
Output
-rw------- 1 root root 1.0G May 29 17:34 /swapfile
As you can see, only the root user has the read and write flags enabled.
We can now mark the file as swap space by typing:
- sudo mkswap /swapfile
Output
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=b591444e-c12b-45a6-90fc-e8b24c67c006
After marking the file, we can enable the swap file, allowing our system to start utilizing it:
- sudo swapon /swapfile
Verify that the swap is available by typing:
- sudo swapon --show
Output
NAME      TYPE SIZE  USED PRIO
/swapfile file 1024M   0B   -1
We can check the output of the free utility again to corroborate our findings:
- free -h
Output
              total        used        free      shared  buff/cache   available
Mem:           996M         44M        637M        4.5M        314M        811M
Swap:          1.0G          0B        1.0G
Our swap has been set up successfully and our operating system will begin to use it as necessary.
Our recent changes have enabled the swap file for the current session. However, if we reboot, the server will not retain the swap settings automatically. We can change this by adding the swap file to our /etc/fstab file.
Back up the /etc/fstab file in case anything goes wrong:
- sudo cp /etc/fstab /etc/fstab.bak
Add the swap file information to the end of your /etc/fstab file by typing:
- echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Next we’ll review some settings we can update to tune our swap space.
There are a few options that you can configure that will have an impact on your system’s performance when dealing with swap.
The swappiness parameter configures how often your system swaps data out of RAM to the swap space. This is a value between 0 and 100 that represents a percentage.
With values close to zero, the kernel will not swap data to the disk unless absolutely necessary. Remember, interactions with the swap file are “expensive” in that they take a lot longer than interactions with RAM and they can cause a significant reduction in performance. Telling the system not to rely on the swap much will generally make your system faster.
Values that are closer to 100 will try to put more data into swap in an effort to keep more RAM space free. Depending on your applications’ memory profile or what you are using your server for, this might be better in some cases.
We can see the current swappiness value by typing:
- cat /proc/sys/vm/swappiness
Output
60
For a Desktop, a swappiness setting of 60 is not a bad value. For a server, you might want to move it closer to 0.
We can set the swappiness to a different value by using the sysctl command.
For instance, to set the swappiness to 10, we could type:
- sudo sysctl vm.swappiness=10
Output
vm.swappiness = 10
This setting will persist until the next reboot. We can set this value automatically at restart by adding the line to our /etc/sysctl.conf file:
- sudo nano /etc/sysctl.conf
At the bottom, you can add:
vm.swappiness=10
Save and close the file when you are finished.
Another related value that you might want to modify is vfs_cache_pressure. This setting configures how much the system will choose to cache inode and dentry information over other data.
Basically, this is access data about the filesystem. It is generally very costly to look up and very frequently requested, so it’s an excellent thing for your system to cache. You can see the current value by querying the proc filesystem again:
- cat /proc/sys/vm/vfs_cache_pressure
Output
100
As it is currently configured, our system removes inode information from the cache too quickly. We can set this to a more conservative setting like 50 by typing:
- sudo sysctl vm.vfs_cache_pressure=50
Output
vm.vfs_cache_pressure = 50
Again, this is only valid for our current session. We can change that by adding it to our configuration file like we did with our swappiness setting:
- sudo nano /etc/sysctl.conf
At the bottom, add the line that specifies your new value:
vm.vfs_cache_pressure=50
Save and close the file when you are finished.
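With both tweaks applied, the lines added to the bottom of /etc/sysctl.conf are:

```
vm.swappiness=10
vm.vfs_cache_pressure=50
```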
Following the steps in this guide will give you some breathing room in cases that would otherwise lead to out-of-memory exceptions. Swap space can be incredibly useful in avoiding some of these common problems.
If you are running into OOM (out of memory) errors, or if you find that your system is unable to use the applications you need, the best solution is to optimize your application configurations or upgrade your server.
haproxy config:
global
maxconn 300
daemon
defaults
mode http
timeout connect 50s
timeout client 50s
timeout server 50s
frontend http
bind *:443 ssl crt /etc/ssl/certs/final_efektum.crt
mode http
reqadd X-Forwarded-Proto:\ https
default_backend servers
backend servers
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
balance roundrobin
option httpclose
cookie SERVERID insert indirect nocache
cookie JSESSIONID prefix nocache
option forwardfor
reqadd X-Forwarded-Proto:\ http
server poczta2 127.0.0.1:85 check cookie poczta2 maxconn 1
server digitalocean 165.22.68.126:85 check cookie digitalocean maxconn 1
frontend ldap
mode tcp
log global
bind :389
description LDAP Service
option tcplog
option logasap
option socket-stats
option tcpka
timeout client 5s
default_backend ad_server
backend ad_server
server ad 10.172.90.3:389 check fall 1 rise 1 inter 2s
mode tcp
balance source
timeout server 2s
timeout connect 1s
option tcpka
option tcp-check
tcp-check connect port 389
tcp-check send-binary 300c0201 # LDAP bind request "<ROOT>" simple
tcp-check send-binary 01 # message ID
tcp-check send-binary 6007 # protocol Op
tcp-check send-binary 0201 # bind request
tcp-check send-binary 03 # LDAP v3
tcp-check send-binary 04008000 # name, simple authentication
tcp-check expect binary 0a0100 # bind response + result code: success
tcp-check send-binary 30050201034200 # unbind request
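The hex strings in the tcp-check lines above are a BER-encoded anonymous LDAPv3 simple bind request, sent in pieces. A short Python sketch assembling the same bytes — an illustration of the encoding, not part of the HAProxy config:

```python
# Assemble the anonymous LDAPv3 simple-bind request that the
# tcp-check lines above send in pieces (BER/DER encoding).
bind_request = bytes.fromhex(
    "300c"      # SEQUENCE, 12 bytes follow (the LDAPMessage)
    "020101"    # INTEGER messageID = 1
    "6007"      # [APPLICATION 0] BindRequest, 7 bytes
    "020103"    # INTEGER version = 3 (LDAPv3)
    "0400"      # OCTET STRING name = "" (anonymous, the "<ROOT>" DN)
    "8000"      # [0] simple authentication, empty password
)

# The pieces sent by the config concatenate to the same 14 bytes:
pieces = ["300c0201", "01", "6007", "0201", "03", "04008000"]
assert bind_request == bytes.fromhex("".join(pieces))
print(bind_request.hex())  # 300c020101600702010304008000
```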
Ign:1 http://mirrors.digitalocean.com/ubuntu stretch InRelease
Ign:2 http://mirrors.digitalocean.com/ubuntu stretch-updates InRelease
Ign:3 http://mirrors.digitalocean.com/ubuntu stretch-backports InRelease
Err:4 http://mirrors.digitalocean.com/ubuntu stretch Release
  404 Not Found [IP: 104.24.117.209 80]
Err:5 http://mirrors.digitalocean.com/ubuntu stretch-updates Release
  404 Not Found [IP: 104.24.117.209 80]
Err:6 http://mirrors.digitalocean.com/ubuntu stretch-backports Release
  404 Not Found [IP: 104.24.117.209 80]
Ign:7 http://security.ubuntu.com/ubuntu stretch-security InRelease
Err:8 http://security.ubuntu.com/ubuntu stretch-security Release
  404 Not Found [IP: 91.189.91.23 80]
Reading package lists… Done
E: The repository ‘http://mirrors.digitalocean.com/ubuntu stretch Release’ does not have a Release file.
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository ‘http://mirrors.digitalocean.com/ubuntu stretch-updates Release’ does not have a Release file.
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository ‘http://mirrors.digitalocean.com/ubuntu stretch-backports Release’ does not have a Release file.
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository ‘http://security.ubuntu.com/ubuntu stretch-security Release’ does not have a Release file.
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Apache Kafka is a popular distributed message broker designed to efficiently handle large volumes of real-time data. A Kafka cluster is not only highly scalable and fault-tolerant, but it also has a much higher throughput compared to other message brokers such as ActiveMQ and RabbitMQ. Though it is generally used as a publish/subscribe messaging system, a lot of organizations also use it for log aggregation because it offers persistent storage for published messages.
A publish/subscribe messaging system allows one or more producers to publish messages without considering the number of consumers or how they will process the messages. Subscribed clients are notified automatically about updates and the creation of new messages. This system is more efficient and scalable than systems where clients poll periodically to determine if new messages are available.
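The publish/subscribe pattern described above can be shown in miniature. This is not Kafka’s API, just a pure-Python illustration of the idea: producers publish to a topic without knowing how many consumers are subscribed, and every subscriber is notified automatically:

```python
from collections import defaultdict

class Broker:
    """A minimal in-memory publish/subscribe broker (illustrative only)."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every subscriber is notified automatically; the producer
        # neither polls nor tracks who is listening.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("logs", received.append)
broker.subscribe("logs", lambda m: received.append(m.upper()))
broker.publish("logs", "app started")
print(received)  # ['app started', 'APP STARTED']
```

Kafka adds what this sketch lacks: durable, partitioned storage of published messages and consumers that read at their own pace.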
In this tutorial, you will install and use Apache Kafka 2.1.1 on Debian 9.
To follow along, you will need a Debian 9 server with a non-root user that has sudo privileges.
Since Kafka can handle requests over a network, you should create a dedicated user for it. This minimizes damage to your Debian machine should the Kafka server be compromised. We will create a dedicated kafka user in this step, but you should create a different non-root user to perform other tasks on this server once you have finished setting up Kafka.
Logged in as your non-root sudo user, create a user called kafka with the useradd command:
- sudo useradd kafka -m
The -m flag ensures that a home directory will be created for the user. This home directory, /home/kafka, will act as our workspace directory for executing commands in the sections below.
Set the password using passwd:
- sudo passwd kafka
Add the kafka user to the sudo group with the adduser command, so that it has the privileges required to install Kafka’s dependencies:
- sudo adduser kafka sudo
Your kafka user is now ready. Log into this account using su:
- su -l kafka
Now that we’ve created the Kafka-specific user, we can move on to downloading and extracting the Kafka binaries.
Let’s download and extract the Kafka binaries into dedicated folders in our kafka user’s home directory.
To start, create a directory in /home/kafka called Downloads to store your downloads:
- mkdir ~/Downloads
Install curl using apt-get so that you’ll be able to download remote files:
- sudo apt-get update && sudo apt-get install -y curl
Once curl is installed, use it to download the Kafka binaries:
- curl "https://www.apache.org/dist/kafka/2.1.1/kafka_2.11-2.1.1.tgz" -o ~/Downloads/kafka.tgz
Create a directory called kafka and change to this directory. This will be the base directory of the Kafka installation:
- mkdir ~/kafka && cd ~/kafka
Extract the archive you downloaded using the tar command:
- tar -xvzf ~/Downloads/kafka.tgz --strip 1
We specify the --strip 1
flag to ensure that the archive’s contents are extracted in ~/kafka/
itself and not in another directory (such as ~/kafka/kafka_2.11-2.1.1/
) inside of it.
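If you want to see what --strip 1 does before running it on the real tarball, you can experiment with a scratch archive. This sketch builds a small archive in a temporary directory (the file names inside it are stand-ins for the Kafka tarball's contents) and shows that the top-level directory is dropped on extraction:

```shell
# Build a scratch archive whose contents live under a top-level directory,
# mirroring the kafka_2.11-2.1.1/ directory inside the real Kafka tarball.
workdir=$(mktemp -d)
mkdir -p "$workdir/kafka_2.11-2.1.1/bin"
echo "demo" > "$workdir/kafka_2.11-2.1.1/bin/kafka-server-start.sh"
tar -C "$workdir" -czf "$workdir/kafka.tgz" kafka_2.11-2.1.1

# Extract with --strip 1: the kafka_2.11-2.1.1/ prefix is removed,
# so bin/ lands directly in the destination directory.
mkdir "$workdir/kafka"
tar -xzf "$workdir/kafka.tgz" -C "$workdir/kafka" --strip 1

ls "$workdir/kafka"   # prints: bin
```

Without --strip 1, the same extraction would produce ~/kafka/kafka_2.11-2.1.1/bin instead of ~/kafka/bin.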
Now that we’ve downloaded and extracted the binaries successfully, we can move on to configuring Kafka to allow for topic deletion.
By default, Kafka will not allow you to delete a topic. In Kafka, a topic is the category, group, or feed name to which messages can be published. To modify this behavior, let’s edit the configuration file.
Kafka’s configuration options are specified in server.properties
. Open this file with nano
or your favorite editor:
- nano ~/kafka/config/server.properties
Let’s add a setting that will allow us to delete Kafka topics. Add the following to the bottom of the file:
delete.topic.enable = true
Save the file, and exit nano
. Now that we’ve configured Kafka, we can move on to creating systemd unit files for running and enabling it on startup.
In this section, we will create systemd unit files for the Kafka service. This will help us perform common service actions such as starting, stopping, and restarting Kafka in a manner consistent with other Linux services.
ZooKeeper is a service that Kafka uses to manage its cluster state and configurations. It is commonly used in many distributed systems as an integral component. If you would like to know more about it, visit the official ZooKeeper docs.
Create the unit file for zookeeper
:
- sudo nano /etc/systemd/system/zookeeper.service
Enter the following unit definition into the file:
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
User=kafka
ExecStart=/home/kafka/kafka/bin/zookeeper-server-start.sh /home/kafka/kafka/config/zookeeper.properties
ExecStop=/home/kafka/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
The [Unit]
section specifies that ZooKeeper requires networking and the filesystem to be ready before it can start.
The [Service]
section specifies that systemd should use the zookeeper-server-start.sh
and zookeeper-server-stop.sh
shell files for starting and stopping the service. It also specifies that ZooKeeper should be restarted automatically if it exits abnormally.
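If you are scripting the setup instead of using nano, the same unit can be written with a heredoc. The sketch below writes to a scratch path so it can run without root; in a real setup the destination would be /etc/systemd/system/zookeeper.service:

```shell
# Write the ZooKeeper unit to a scratch path for illustration.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
User=kafka
ExecStart=/home/kafka/kafka/bin/zookeeper-server-start.sh /home/kafka/kafka/config/zookeeper.properties
ExecStop=/home/kafka/kafka/bin/zookeeper-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
EOF

# A quick structural check: the file should contain exactly three sections.
grep -c '^\[' "$unit"   # prints: 3
```

After copying a unit file into /etc/systemd/system, running `sudo systemd-analyze verify` on it is a useful extra sanity check before starting the service.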
Next, create the systemd service file for kafka
:
- sudo nano /etc/systemd/system/kafka.service
Enter the following unit definition into the file:
[Unit]
Requires=zookeeper.service
After=zookeeper.service
[Service]
Type=simple
User=kafka
ExecStart=/bin/sh -c '/home/kafka/kafka/bin/kafka-server-start.sh /home/kafka/kafka/config/server.properties > /home/kafka/kafka/kafka.log 2>&1'
ExecStop=/home/kafka/kafka/bin/kafka-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
The [Unit]
section specifies that this unit file depends on zookeeper.service
. This will ensure that zookeeper
gets started automatically when the kafka
service starts.
The [Service]
section specifies that systemd should use the kafka-server-start.sh
and kafka-server-stop.sh
shell files for starting and stopping the service. It also specifies that Kafka should be restarted automatically if it exits abnormally.
Now that the units have been defined, start Kafka with the following command:
- sudo systemctl start kafka
To ensure that the server has started successfully, check the journal logs for the kafka
unit:
- sudo journalctl -u kafka
You should see output similar to the following:
OutputMar 23 13:31:48 kafka systemd[1]: Started kafka.service.
You now have a Kafka server listening on port 9092
.
While we have started the kafka
service, if we were to reboot our server, it would not be started automatically. To enable kafka
on server boot, run:
- sudo systemctl enable kafka
Now that we’ve started and enabled the services, let’s check the installation.
Let’s publish and consume a “Hello World” message to make sure the Kafka server is behaving correctly. Publishing messages in Kafka requires a producer, which enables the publication of records to topics, and a consumer, which reads messages from topics.
First, create a topic named TutorialTopic
by typing:
- ~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TutorialTopic
You can create a producer from the command line using the kafka-console-producer.sh
script. It expects the Kafka server’s hostname, port, and a topic name as arguments.
Publish the string "Hello, World"
to the TutorialTopic
topic by typing:
- echo "Hello, World" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic > /dev/null
Next, you can create a Kafka consumer using the kafka-console-consumer.sh
script. It expects the ZooKeeper server’s hostname and port, along with a topic name as arguments.
The following command consumes messages from TutorialTopic
. Note the use of the --from-beginning
flag, which allows the consumption of messages that were published before the consumer was started:
- ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic TutorialTopic --from-beginning
If there are no configuration issues, you should see Hello, World
in your terminal:
OutputHello, World
The script will continue to run, waiting for more messages to be published to the topic. Feel free to open a new terminal and start a producer to publish a few more messages. You should be able to see them all in the consumer’s output.
When you are done testing, press CTRL+C
to stop the consumer script. Now that we have tested the installation, let’s move on to installing KafkaT.
KafkaT is a tool from Airbnb that makes it easier for you to view details about your Kafka cluster and perform certain administrative tasks from the command line. Because it is a Ruby gem, you will need Ruby to use it. You will also need the build-essential
package to be able to build the other gems it depends on. Install them using apt
:
- sudo apt install ruby ruby-dev build-essential
You can now install KafkaT using the gem command:
- sudo gem install kafkat
KafkaT uses .kafkatcfg
as the configuration file to determine the installation and log directories of your Kafka server. It should also have an entry pointing KafkaT to your ZooKeeper instance.
Create a new file called .kafkatcfg
:
- nano ~/.kafkatcfg
Add the following lines to specify the required information about your Kafka server and ZooKeeper instance:
{
"kafka_path": "~/kafka",
"log_path": "/tmp/kafka-logs",
"zk_path": "localhost:2181"
}
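The same file can be created non-interactively and sanity-checked before running KafkaT. This sketch writes to a scratch path rather than ~/.kafkatcfg and uses python3's json.tool module purely as a JSON syntax check (assuming python3 is available, as it is on Debian 9):

```shell
# Stand-in for ~/.kafkatcfg in a scratch location.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "kafka_path": "~/kafka",
  "log_path": "/tmp/kafka-logs",
  "zk_path": "localhost:2181"
}
EOF

# Fail loudly if the file is not valid JSON; KafkaT cannot parse it otherwise.
python3 -m json.tool "$cfg" > /dev/null && echo "valid JSON"
```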
You are now ready to use KafkaT. For a start, here’s how you would use it to view details about all Kafka partitions:
- kafkat partitions
You will see the following output:
OutputTopic Partition Leader Replicas ISRs
TutorialTopic 0 0 [0] [0]
__consumer_offsets 0 0 [0] [0]
...
...
You will see TutorialTopic
, as well as __consumer_offsets
, an internal topic used by Kafka for storing client-related information. You can safely ignore lines starting with __consumer_offsets
.
To learn more about KafkaT, refer to its GitHub repository.
If you want to create a multi-broker cluster using more Debian 9 machines, you should repeat Step 1, Step 4, and Step 5 on each of the new machines. Additionally, you should make the following changes in the server.properties
file for each:
The value of the broker.id
property should be changed such that it is unique throughout the cluster. This property uniquely identifies each server in the cluster and should be set to a different integer on each broker. For example, 1
, 2
, etc.
The value of the zookeeper.connect
property should be changed such that all nodes point to the same ZooKeeper instance. This property specifies the ZooKeeper instance’s address and follows the <HOSTNAME/IP_ADDRESS>:<PORT>
format. For example, "203.0.113.0:2181"
, "203.0.113.1:2181"
etc.
If you want to have multiple ZooKeeper instances for your cluster, the value of the zookeeper.connect
property on each node should be an identical, comma-separated string listing the IP addresses and port numbers of all the ZooKeeper instances.
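Putting those changes together, the server.properties overrides on a hypothetical second broker might look like the following. The broker ID and the documentation-range IP addresses here are illustrative values, not ones to copy verbatim:

```
# server.properties overrides on the second broker (illustrative values)
broker.id=1
zookeeper.connect=203.0.113.0:2181,203.0.113.1:2181,203.0.113.2:2181
delete.topic.enable = true
```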
Now that all of the installations are done, you can remove the kafka user’s admin privileges. Before you do so, log out and log back in as any other non-root sudo user. If you are still running the same shell session you started this tutorial with, simply type exit
.
Remove the kafka user from the sudo group:
- sudo deluser kafka sudo
To further improve your Kafka server’s security, lock the kafka user’s password using the passwd
command. This makes sure that nobody can directly log into the server using this account:
- sudo passwd kafka -l
At this point, only root or a sudo user can log in as kafka
by typing in the following command:
- sudo su - kafka
In the future, if you want to unlock it, use passwd
with the -u
option:
- sudo passwd kafka -u
You have now successfully restricted the kafka user’s admin privileges.
You now have Apache Kafka running securely on your Debian server. You can make use of it in your projects by creating Kafka producers and consumers using Kafka clients, which are available for most programming languages. To learn more about Kafka, you can also consult its documentation.
postfix main.cf
# server welcome banner for DNS
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
append_dot_mydomain = no
readme_directory = no
# security settings
smtpd_tls_cert_file=/etc/ssl/certs/final_efektum.crt
smtpd_tls_key_file=$smtpd_tls_cert_file
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
# Main mail server settings
myhostname = poczta2.efektum.pl
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = poczta2.efektum.pl, localhost.mydomain.local, localhost
relayhost =
mynetworks = 165.22.68.0/24 10.172.90.0/24 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_command = procmail -a "$EXTENSION"
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
# Limit on the number of messages processed at once
dovecot_destination_recipient_limit = 1
# local account map (WE DO NOT USE LOCAL ACCOUNTS, HENCE NO FILE HERE)
local_recipient_maps =
# user authentication settings between Postfix and Dovecot
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_local_domain = poczta2.efektum.pl
broken_sasl_auth_clients = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
# restrictions defining which mail we accept and which we reject automatically
smtpd_recipient_restrictions =
permit_mynetworks,
permit_sasl_authenticated,
reject_non_fqdn_hostname,
reject_non_fqdn_sender,
reject_non_fqdn_recipient,
reject_unauth_destination,
reject_unauth_pipelining,
reject_invalid_hostname
# LDAP configuration
virtual_mailbox_domains = poczta2.efektum.pl
# location of users' mailboxes
virtual_mailbox_base = /home/AD/
# fetching users from the AD server
virtual_mailbox_maps = proxy:ldap:/etc/postfix/ldap/accounts.cf
# mapping the message sender to an AD account
smtpd_sender_login_maps = proxy:ldap:/etc/postfix/ldap/sender.cf
# which user is responsible for creating and storing mail messages
virtual_uid_maps = static:1001
virtual_gid_maps = static:1001
# which mechanism handles mail transport between AD, Postfix, and Dovecot
virtual_transport = dovecot
postfix users.cf
server_host = 10.172.90.3
server_port = 389
version = 3
bind = yes
start_tls = no
bind_dn = cn=postmaster,ou=services,dc=ad,dc=efektum,dc=pl
bind_pw = Das5ahec23a
search_base = cn=users,dc=ad,dc=efektum,dc=pl
scope = sub
query_filter = (&(objectClass=person)(mail=%s))
#result_format = /home/AD/%u
result_attribute = mail
special_result_filter = %s@%d
debuglevel = 0
dovecot-ldap
hosts = 10.172.90.3:389
#uris = ldap://dc1.mydomain.local
ldap_version = 3
base = dc=ad,dc=efektum,dc=pl
deref = never
scope = subtree
auth_bind = yes
auth_bind_userdn = %u
#auth_bind_userdn = CN=Read Only,CN=Users,DC=mydomain,DC=local
#auth_bind_userdn = readonly@mydomain.local
pass_filter = (&(objectClass=person)(userPrincipalName=%n))
debug_level = 0
mail error log
May 10 00:51:26 poczta2 dovecot: imap-login: Disconnected (auth failed, 1 attempts in 2 secs): user=<adam.dabrowski>, method=PLAIN, rip=165.22.68.126, lip=165.22.68.126, secured, session=<Pv1A932IZsOlFkR+>
May 10 00:51:38 poczta2 dovecot: imap-login: Disconnected (auth failed, 1 attempts in 6 secs): user=<adam.dabrowski>, method=PLAIN, rip=165.22.68.126, lip=165.22.68.126, secured, session=<euu+932IaMOlFkR+>
May 10 00:52:04 poczta2 dovecot: imap-login: Disconnected (auth failed, 1 attempts in 2 secs): user=<adam.dabrowski@poczta2.efektum.pl>, method=PLAIN, rip=165.22.68.126, lip=165.22.68.126, secured, session=<LEmL+X2IbMOlFkR+>
May 10 00:52:18 poczta2 dovecot: imap-login: Disconnected (auth failed, 1 attempts in 6 secs): user=<adam.dabrowski@poczta2.efektum.pl>, method=PLAIN, rip=165.22.68.126, lip=165.22.68.126, secured, session=<2B8f+n2IbsOlFkR+>
May 10 00:52:34 poczta2 dovecot: imap-login: Disconnected (auth failed, 1 attempts in 3 secs): user=<adam.dabrowski>, method=PLAIN, rip=165.22.68.126, lip=165.22.68.126, secured, session=<QwBN+32IcMOlFkR+>
May 10 00:52:52 poczta2 dovecot: imap-login: Disconnected (auth failed, 1 attempts in 2 secs): user=<adam.dabrowski>, method=PLAIN, rip=165.22.68.126, lip=165.22.68.126, secured, session=<Pt1t/H2IcsOlFkR+>
May 10 00:54:32 poczta2 dovecot: imap-login: Disconnected (auth failed, 1 attempts in 2 secs): user=<adam.dabrowski>, method=PLAIN, rip=165.22.68.126, lip=165.22.68.126, secured, session=<cqpXAn6IeMOlFkR+>
May 10 00:54:53 poczta2 dovecot: imap-login: Disconnected (auth failed, 1 attempts in 2 secs): user=<adam.dabrowski>, method=PLAIN, rip=165.22.68.126, lip=165.22.68.126, secured, session=<WrKiA36IfMOlFkR+>
May 10 00:56:23 poczta2 dovecot: imap-login: Disconnected (auth failed, 1 attempts in 2 secs): user=<adam.dabrowski>, method=PLAIN, rip=165.22.68.126, lip=165.22.68.126, secured, session=<amz6CH6IgsOlFkR+>
May 10 01:00:41 poczta2 dovecot: imap-login: Disconnected (auth failed, 1 attempts in 2 secs): user=<adam.dabrowski>, method=PLAIN, rip=165.22.68.126, lip=165.22.68.126, secured, session=<9C5dGH6IjsOlFkR+>
May 10 01:02:43 poczta2 dovecot: imap-login: Disconnected (auth failed, 1 attempts in 2 secs): user=<marcin.testowy>, method=PLAIN, rip=165.22.68.126, lip=165.22.68.126, secured, session=<sradH36IlsOlFkR+>
Kubernetes is a container orchestration system that manages containers at scale. Initially developed by Google based on its experience running containers in production, Kubernetes is open source and actively developed by a community around the world.
Note: This tutorial uses version 1.14 of Kubernetes, the official supported version at the time of this article’s publication. For up-to-date information on the latest version, please see the current release notes in the official Kubernetes documentation.
Kubeadm automates the installation and configuration of Kubernetes components such as the API server, Controller Manager, and Kube DNS. It does not, however, create users or handle the installation of operating-system-level dependencies and their configuration. For these preliminary tasks, it is possible to use a configuration management tool like Ansible or SaltStack. Using these tools makes creating additional clusters or recreating existing clusters much simpler and less error prone.
In this guide, you will set up a Kubernetes cluster from scratch using Ansible and Kubeadm, and then deploy a containerized Nginx application to it.
If you’re looking for a managed Kubernetes hosting service, check out our simple, managed Kubernetes service built for growth.
Your cluster will include the following physical resources:
One master node
The master node (a node in Kubernetes refers to a server) is responsible for managing the state of the cluster. It runs Etcd, which stores cluster data among components that schedule workloads to worker nodes.
Two worker nodes
Worker nodes are the servers where your workloads (i.e. containerized applications and services) will run. A worker will continue to run assigned workloads even if the master goes down once scheduling is complete. A cluster’s capacity can be increased by adding workers.
After completing this guide, you will have a cluster ready to run containerized applications, provided that the servers in the cluster have sufficient CPU and RAM resources for your applications to consume. Almost any traditional Unix application including web applications, databases, daemons, and command line tools can be containerized and made to run on the cluster. The cluster itself will consume around 300-500MB of memory and 10% of CPU on each node.
Once the cluster is set up, you will deploy the web server Nginx to it to ensure that it is running workloads correctly.
An SSH key pair on your local Linux/macOS/BSD machine. If you haven’t used SSH keys before, you can learn how to set them up by following this explanation of how to set up SSH keys on your local machine.
Three servers running Debian 9 with at least 2GB RAM and 2 vCPUs each. You should be able to SSH into each server as the root user with your SSH key pair.
Ansible installed on your local machine. For installation instructions, follow the official Ansible installation documentation.
Familiarity with Ansible playbooks. For review, check out Configuration Management 101: Writing Ansible Playbooks.
Knowledge of how to launch a container from a Docker image. Look at “Step 5 — Running a Docker Container” in How To Install and Use Docker on Debian 9 if you need a refresher.
In this section, you will create a directory on your local machine that will serve as your workspace. You will configure Ansible locally so that it can communicate with and execute commands on your remote servers. Once that’s done, you will create a hosts
file containing inventory information such as the IP addresses of your servers and the groups that each server belongs to.
Out of your three servers, one will be the master with an IP displayed as master_ip
. The other two servers will be workers and will have the IPs worker_1_ip
and worker_2_ip
.
Create a directory named ~/kube-cluster
in the home directory of your local machine and cd
into it:
- mkdir ~/kube-cluster
- cd ~/kube-cluster
This directory will be your workspace for the rest of the tutorial and will contain all of your Ansible playbooks. It will also be the directory inside which you will run all local commands.
Create a file named ~/kube-cluster/hosts
using nano
or your favorite text editor:
- nano ~/kube-cluster/hosts
Add the following text to the file, which will specify information about the logical structure of your cluster:
[masters]
master ansible_host=master_ip ansible_user=root
[workers]
worker1 ansible_host=worker_1_ip ansible_user=root
worker2 ansible_host=worker_2_ip ansible_user=root
[all:vars]
ansible_python_interpreter=/usr/bin/python3
You may recall that inventory files in Ansible are used to specify server information such as IP addresses, remote users, and groupings of servers to target as a single unit for executing commands. ~/kube-cluster/hosts
will be your inventory file and you’ve added two Ansible groups (masters and workers) to it specifying the logical structure of your cluster.
In the masters group, there is a server entry named “master” that lists the master node’s IP (master_ip
) and specifies that Ansible should run remote commands as the root user.
Similarly, in the workers group, there are two entries for the worker servers (worker_1_ip
and worker_2_ip
) that also specify the ansible_user
as root.
The last line of the file tells Ansible to use the remote servers’ Python 3 interpreters for its management operations.
Save and close the file after you’ve added the text.
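If you script the workspace setup, the inventory can be generated with a heredoc instead of an editor. The sketch below writes to a scratch file so it can run anywhere; master_ip, worker_1_ip, and worker_2_ip are the same placeholders used above, to be replaced with your servers' actual IPs:

```shell
# Stand-in for ~/kube-cluster/hosts in a scratch location.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
[masters]
master ansible_host=master_ip ansible_user=root
[workers]
worker1 ansible_host=worker_1_ip ansible_user=root
worker2 ansible_host=worker_2_ip ansible_user=root
[all:vars]
ansible_python_interpreter=/usr/bin/python3
EOF

# Quick sanity check: three group headers and three host entries.
grep -c '^\[' "$hosts"   # prints: 3
```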
Having set up the server inventory with groups, let’s move on to installing operating system level dependencies and creating configuration settings.
In this section you will create a non-root user with sudo privileges on all servers so that you can SSH into them manually as an unprivileged user. This can be useful if, for example, you would like to see system information with commands such as top/htop
, view a list of running containers, or change configuration files owned by root. These operations are routinely performed during the maintenance of a cluster, and using a non-root user for such tasks minimizes the risk of modifying or deleting important files or unintentionally performing other dangerous operations.
Create a file named ~/kube-cluster/initial.yml
in the workspace:
- nano ~/kube-cluster/initial.yml
Next, add the following play to the file to create a non-root user with sudo privileges on all of the servers. A play in Ansible is a collection of steps to be performed that target specific servers and groups. The following play will create a non-root sudo user:
- hosts: all
become: yes
tasks:
- name: create the 'sammy' user
user: name=sammy append=yes state=present createhome=yes shell=/bin/bash
- name: allow 'sammy' to have passwordless sudo
lineinfile:
dest: /etc/sudoers
line: 'sammy ALL=(ALL) NOPASSWD: ALL'
validate: 'visudo -cf %s'
- name: set up authorized keys for the sammy user
authorized_key: user=sammy key="{{item}}"
with_file:
- ~/.ssh/id_rsa.pub
Here’s a breakdown of what this playbook does:
Creates the non-root user sammy
.
Configures the sudoers
file to allow the sammy
user to run sudo
commands without a password prompt.
Adds the public key in your local machine (usually ~/.ssh/id_rsa.pub
) to the remote sammy
user’s authorized key list. This will allow you to SSH into each server as the sammy
user.
Save and close the file after you’ve added the text.
Next, execute the playbook by locally running:
- ansible-playbook -i hosts ~/kube-cluster/initial.yml
The command will complete within two to five minutes. On completion, you will see output similar to the following:
OutputPLAY [all] ****
TASK [Gathering Facts] ****
ok: [master]
ok: [worker1]
ok: [worker2]
TASK [create the 'sammy' user] ****
changed: [master]
changed: [worker1]
changed: [worker2]
TASK [allow 'sammy' user to have passwordless sudo] ****
changed: [master]
changed: [worker1]
changed: [worker2]
TASK [set up authorized keys for the sammy user] ****
changed: [worker1] => (item=ssh-rsa AAAAB3...)
changed: [worker2] => (item=ssh-rsa AAAAB3...)
changed: [master] => (item=ssh-rsa AAAAB3...)
PLAY RECAP ****
master : ok=5 changed=4 unreachable=0 failed=0
worker1 : ok=5 changed=4 unreachable=0 failed=0
worker2 : ok=5 changed=4 unreachable=0 failed=0
Now that the preliminary setup is complete, you can move on to installing Kubernetes-specific dependencies.
In this section, you will install the operating-system-level packages required by Kubernetes with Debian’s package manager. These packages are:
Docker - a container runtime. It is the component that runs your containers. Support for other runtimes such as rkt is under active development in Kubernetes.
kubeadm
- a CLI tool that will install and configure the various components of a cluster in a standard way.
kubelet
- a system service/program that runs on all nodes and handles node-level operations.
kubectl
- a CLI tool used for issuing commands to the cluster through its API Server.
Create a file named ~/kube-cluster/kube-dependencies.yml
in the workspace:
- nano ~/kube-cluster/kube-dependencies.yml
Add the following plays to the file to install these packages to your servers:
- hosts: all
become: yes
tasks:
- name: install remote apt deps
apt:
name: "{{ item }}"
state: present
with_items:
- apt-transport-https
- ca-certificates
- gnupg2
- software-properties-common
- name: add Docker apt-key
apt_key:
url: https://download.docker.com/linux/debian/gpg
state: present
- name: add Docker's APT repository
apt_repository:
repo: deb https://download.docker.com/linux/debian stretch stable
state: present
filename: 'docker'
- name: install Docker
apt:
name: docker-ce
state: present
update_cache: true
- name: add Kubernetes apt-key
apt_key:
url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
state: present
- name: add Kubernetes' APT repository
apt_repository:
repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
state: present
filename: 'kubernetes'
- name: install kubelet
apt:
name: kubelet=1.14.0-00
state: present
update_cache: true
- name: install kubeadm
apt:
name: kubeadm=1.14.0-00
state: present
- hosts: master
become: yes
tasks:
- name: install kubectl
apt:
name: kubectl=1.14.0-00
state: present
force: yes
The first play in the playbook does the following:
Adds dependencies for adding, verifying, and installing packages from remote repositories.
Adds the Docker APT repository’s apt-key for key verification.
Adds the Docker APT repository to your remote servers’ APT sources list.
Installs Docker, the container runtime.
Adds the Kubernetes APT repository’s apt-key for key verification.
Adds the Kubernetes APT repository to your remote servers’ APT sources list.
Installs kubelet
and kubeadm
.
The second play consists of a single task that installs kubectl
on your master node.
Note: While the Kubernetes documentation recommends you use the latest stable release of Kubernetes for your environment, this tutorial uses a specific version. This will ensure that you can follow the steps successfully, as Kubernetes changes rapidly and the latest version may not work with this tutorial.
Save and close the file when you are finished.
Next, execute the playbook by locally running:
- ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml
On completion, you will see output similar to the following:
OutputPLAY [all] ****
TASK [Gathering Facts] ****
ok: [worker1]
ok: [worker2]
ok: [master]
TASK [install Docker] ****
changed: [master]
changed: [worker1]
changed: [worker2]
TASK [install APT Transport HTTPS] *****
ok: [master]
ok: [worker1]
changed: [worker2]
TASK [add Kubernetes apt-key] *****
changed: [master]
changed: [worker1]
changed: [worker2]
TASK [add Kubernetes' APT repository] *****
changed: [master]
changed: [worker1]
changed: [worker2]
TASK [install kubelet] *****
changed: [master]
changed: [worker1]
changed: [worker2]
TASK [install kubeadm] *****
changed: [master]
changed: [worker1]
changed: [worker2]
PLAY [master] *****
TASK [Gathering Facts] *****
ok: [master]
TASK [install kubectl] ******
ok: [master]
PLAY RECAP ****
master : ok=9 changed=5 unreachable=0 failed=0
worker1 : ok=7 changed=5 unreachable=0 failed=0
worker2 : ok=7 changed=5 unreachable=0 failed=0
After execution, Docker, kubeadm
, and kubelet
will be installed on all of the remote servers. kubectl
is not a required component and is only needed for executing cluster commands. Installing it only on the master node makes sense in this context, since you will run kubectl
commands only from the master. Note, however, that kubectl
commands can be run from any of the worker nodes or from any machine where it can be installed and configured to point to a cluster.
All system dependencies are now installed. Let’s set up the master node and initialize the cluster.
In this section, you will set up the master node. Before creating any playbooks, however, it’s worth covering a few concepts such as Pods and Pod Network Plugins, since your cluster will include both.
A pod is an atomic unit that runs one or more containers. These containers share resources such as file volumes and network interfaces. Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.
Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod’s IP. Containers on a single node can communicate easily through a local interface. Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.
This functionality is provided by pod network plugins. For this cluster, you will use Flannel, a stable and performant option.
Create an Ansible playbook named master.yml
on your local machine:
- nano ~/kube-cluster/master.yml
Add the following play to the file to initialize the cluster and install Flannel:
- hosts: master
become: yes
tasks:
- name: initialize the cluster
shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
args:
chdir: $HOME
creates: cluster_initialized.txt
- name: create .kube directory
become: yes
become_user: sammy
file:
path: $HOME/.kube
state: directory
mode: 0755
- name: copy admin.conf to user's kube config
copy:
src: /etc/kubernetes/admin.conf
dest: /home/sammy/.kube/config
remote_src: yes
owner: sammy
- name: install Pod network
become: yes
become_user: sammy
shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml >> pod_network_setup.txt
args:
chdir: $HOME
creates: pod_network_setup.txt
Here’s a breakdown of this play:
The first task initializes the cluster by running kubeadm init
. Passing the argument --pod-network-cidr=10.244.0.0/16
specifies the private subnet that the pod IPs will be assigned from. Flannel uses the above subnet by default; we’re telling kubeadm
to use the same subnet.
The second task creates a .kube
directory at /home/sammy
. This directory will hold configuration information such as the admin key files, which are required to connect to the cluster, and the cluster’s API address.
The third task copies the /etc/kubernetes/admin.conf
file that was generated from kubeadm init
to your non-root user’s home directory. This will allow you to use kubectl
to access the newly-created cluster.
The last task runs kubectl apply
to install Flannel
. kubectl apply -f descriptor.[yml|json]
is the syntax for telling kubectl
to create the objects described in the descriptor.[yml|json]
file. The kube-flannel.yml
file contains the descriptions of objects required for setting up Flannel
in the cluster.
Save and close the file when you are finished.
Execute the playbook locally by running:
- ansible-playbook -i hosts ~/kube-cluster/master.yml
On completion, you will see output similar to the following:
Output
PLAY [master] ****
TASK [Gathering Facts] ****
ok: [master]
TASK [initialize the cluster] ****
changed: [master]
TASK [create .kube directory] ****
changed: [master]
TASK [copy admin.conf to user's kube config] *****
changed: [master]
TASK [install Pod network] *****
changed: [master]
PLAY RECAP ****
master : ok=5 changed=4 unreachable=0 failed=0
To check the status of the master node, SSH into it with the following command:
- ssh sammy@master_ip
Once inside the master node, execute:
- kubectl get nodes
You will now see the following output:
OutputNAME STATUS ROLES AGE VERSION
master Ready master 1d v1.14.0
The output states that the master
node has completed all initialization tasks and is in a Ready
state from which it can start accepting worker nodes and executing tasks sent to the API Server. You can now add the workers from your local machine.
Adding workers to the cluster involves executing a single command on each. This command includes the necessary cluster information, such as the IP address and port of the master’s API Server, and a secure token. Only nodes that pass in the secure token will be able to join the cluster.
Navigate back to your workspace and create a playbook named workers.yml
:
- nano ~/kube-cluster/workers.yml
Add the following text to the file to add the workers to the cluster:
- hosts: master
become: yes
gather_facts: false
tasks:
- name: get join command
shell: kubeadm token create --print-join-command
register: join_command_raw
- name: set join command
set_fact:
join_command: "{{ join_command_raw.stdout_lines[0] }}"
- hosts: workers
become: yes
tasks:
- name: join cluster
shell: "{{ hostvars['master'].join_command }} >> node_joined.txt"
args:
chdir: $HOME
creates: node_joined.txt
Here’s what the playbook does:
The first play gets the join command that needs to be run on the worker nodes. This command will be in the following format: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
. Once it gets the actual command with the proper token and hash values, the task sets it as a fact so that the next play will be able to access that info.
The second play has a single task that runs the join command on all worker nodes. On completion of this task, the two worker nodes will be part of the cluster.
Save and close the file when you are finished.
Execute the playbook by locally running:
- ansible-playbook -i hosts ~/kube-cluster/workers.yml
On completion, you will see output similar to the following:
Output
PLAY [master] ****
TASK [get join command] ****
changed: [master]
TASK [set join command] *****
ok: [master]
PLAY [workers] *****
TASK [Gathering Facts] *****
ok: [worker1]
ok: [worker2]
TASK [join cluster] *****
changed: [worker1]
changed: [worker2]
PLAY RECAP *****
master : ok=2 changed=1 unreachable=0 failed=0
worker1 : ok=2 changed=1 unreachable=0 failed=0
worker2 : ok=2 changed=1 unreachable=0 failed=0
With the addition of the worker nodes, your cluster is now fully set up and functional, with workers ready to run workloads. Before scheduling applications, let’s verify that the cluster is working as intended.
A cluster can sometimes fail during setup because a node is down or network connectivity between the master and worker is not working correctly. Let’s verify the cluster and ensure that the nodes are operating correctly.
You will need to check the current state of the cluster from the master node to ensure that the nodes are ready. If you disconnected from the master node, you can SSH back into it with the following command:
- ssh sammy@master_ip
Then execute the following command to get the status of the cluster:
- kubectl get nodes
You will see output similar to the following:
Output
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   1d    v1.14.0
worker1   Ready    <none>   1d    v1.14.0
worker2   Ready    <none>   1d    v1.14.0
If all of your nodes have the value Ready
for STATUS
, it means that they’re part of the cluster and ready to run workloads.
If, however, a few of the nodes have NotReady
as the STATUS
, it could mean that the worker nodes haven’t finished their setup yet. Wait for around five to ten minutes before re-running kubectl get nodes
and inspecting the new output. If a few nodes still have NotReady
as the status, you might have to verify and re-run the commands in the previous steps.
Now that your cluster is verified successfully, let’s schedule an example Nginx application on the cluster.
You can now deploy any containerized application to your cluster. To keep things familiar, let's deploy Nginx using Deployments and Services to see how this application can be deployed to the cluster. You can use the commands below for other containerized applications as well, provided you change the Docker image name and any relevant flags (such as ports and volumes).
Still within the master node, execute the following command to create a deployment named nginx:
- kubectl create deployment nginx --image=nginx
A deployment is a type of Kubernetes object that ensures there’s always a specified number of pods running based on a defined template, even if the pod crashes during the cluster’s lifetime. The above deployment will create a pod with one container from the Docker registry’s Nginx Docker Image.
Next, run the following command to create a service named nginx
that will expose the app publicly. It will do so through a NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster:
- kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort
Services are another type of Kubernetes object that expose cluster internal services to clients, both internal and external. They are also capable of load balancing requests to multiple pods, and are an integral component in Kubernetes, frequently interacting with other components.
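As a sketch of what the expose command creates behind the scenes, the equivalent Service manifest looks roughly like the following. This is an approximation: the selector assumes the app: nginx label that kubectl create deployment applies, and the exact generated metadata may differ.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort        # opens a port in the 30000+ range on every node
  selector:
    app: nginx          # matches pods created by the nginx deployment
  ports:
    - port: 80          # port the service listens on inside the cluster
      targetPort: 80    # container port traffic is forwarded to
```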
Run the following command:
- kubectl get services
This will output text similar to the following:
Output
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP             1d
nginx        NodePort    10.109.228.209   <none>        80:nginx_port/TCP   40m
From the third line of the above output, you can retrieve the port that Nginx is running on. Kubernetes will assign a random port that is greater than 30000
automatically, while ensuring that the port is not already bound by another service.
To test that everything is working, visit http://worker_1_ip:nginx_port
or http://worker_2_ip:nginx_port
through a browser on your local machine. You will see Nginx’s familiar welcome page.
If you would like to remove the Nginx application, first delete the nginx
service from the master node:
- kubectl delete service nginx
Run the following to ensure that the service has been deleted:
- kubectl get services
You will see the following output:
Output
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1d
Then delete the deployment:
- kubectl delete deployment nginx
Run the following to confirm that this worked:
- kubectl get deployments
Output
No resources found.
In this guide, you’ve successfully set up a Kubernetes cluster on Debian 9 using Kubeadm and Ansible for automation.
If you’re wondering what to do with the cluster now that it’s set up, a good next step would be to get comfortable deploying your own applications and services onto the cluster. Here’s a list of links with further information that can guide you in the process:
Dockerizing applications - lists examples that detail how to containerize applications using Docker.
Pod Overview - describes in detail how Pods work and their relationship with other Kubernetes objects. Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work.
Deployments Overview - provides an overview of deployments. It is useful to understand how controllers such as deployments work since they are used frequently in stateless applications for scaling and the automated healing of unhealthy applications.
Services Overview - covers services, another frequently used object in Kubernetes clusters. Understanding the types of services and the options they have is essential for running both stateless and stateful applications.
Other important concepts that you can look into are Volumes, Ingresses and Secrets, all of which come in handy when deploying production applications.
Kubernetes has a lot of functionality and features to offer. The Kubernetes Official Documentation is the best place to learn about concepts, find task-specific guides, and look up API references for various objects.
After enabling ufw and logging out, I can't access the droplet by either SSH or the web console anymore. FWIW, opening the web console gives me this repeatedly:
[UFW BLOCK] IN=eth0 OUT= MAC= ... SRC= ...
so it looks like the login attempt has been blocked by ufw.
I'm pretty sure that when I ran ufw allow 'Nginx HTTP' and then ufw status, the output didn't contain OpenSSH. I should have noticed that :(
What should I do now?
On the DigitalOcean control panel, the new droplet shows close to zero resources/CPU being used. I have an iptables firewall in place, and access to the new site is limited to myself.
In the 3 weeks that I’ve had the new droplet, this has happened 3 times that I’m aware of.
Installed on the droplet is Debian, Apache, PHP, and MariaDB.
Any ideas? Things to check?
Backing up your Apache Kafka data is an important practice that will help you recover from unintended data loss or bad data added to the cluster due to user error. Data dumps of cluster and topic data are an efficient way to perform backups and restorations.
Importing and migrating your backed up data to a separate server is helpful in situations where your Kafka instance becomes unusable due to server hardware or networking failures and you need to create a new Kafka instance with your old data. Importing and migrating backed up data is also useful when you are moving the Kafka instance to an upgraded or downgraded server due to a change in resource usage.
In this tutorial, you will back up, import, and migrate your Kafka data on a single Debian 9 installation as well as on multiple Debian 9 installations on separate servers. ZooKeeper is a critical component of Kafka’s operation. It stores information about cluster state such as consumer data, partition data, and the state of other brokers in the cluster. As such, you will also back up ZooKeeper’s data in this tutorial.
To follow along, you will need:
A Kafka message is the most basic unit of data storage in Kafka and is the entity that you will publish to and subscribe from Kafka. A Kafka topic is like a container for a group of related messages. When you subscribe to a particular topic, you will receive only messages that were published to that particular topic. In this section you will log in to the server that you would like to back up (the source server) and add a Kafka topic and a message so that you have some data populated for the backup.
This tutorial assumes you have installed Kafka in the home directory of the kafka user (/home/kafka/kafka). If your installation is in a different directory, replace the ~/kafka part of the following commands, and of the commands throughout the rest of this tutorial, with your Kafka installation's path.
SSH into the source server by executing:
- ssh sammy@source_server_ip
Run the following command to log in as the kafka user:
- sudo -iu kafka
Create a topic named BackupTopic
using the kafka-topics.sh
shell utility file in your Kafka installation’s bin directory, by typing:
- ~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic BackupTopic
Publish the string "Test Message 1"
to the BackupTopic
topic by using the ~/kafka/bin/kafka-console-producer.sh
shell utility script.
If you would like to add additional messages here, you can do so now.
- echo "Test Message 1" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic BackupTopic > /dev/null
The ~/kafka/bin/kafka-console-producer.sh
file allows you to publish messages directly from the command line. Typically, you would publish messages using a Kafka client library from within your program, but since that involves different setups for different programming languages, you can use the shell script as a language-independent way of publishing messages during testing or while performing administrative tasks. The --topic
flag specifies the topic that you will publish the message to.
Next, verify that the kafka-console-producer.sh
script has published the message(s) by running the following command:
- ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic BackupTopic --from-beginning
The ~/kafka/bin/kafka-console-consumer.sh
shell script starts the consumer. Once started, it will subscribe to the topic that you published the "Test Message 1"
message to in the previous command. The --from-beginning
flag in the command allows consuming messages that were published before the consumer was started. Without the flag, only messages published after the consumer was started will appear. On running the command, you will see the following output in the terminal:
Output
Test Message 1
Press CTRL+C
to stop the consumer.
You’ve created some test data and verified that it’s persisted. Now you can back up the state data in the next section.
Before backing up the actual Kafka data, you need to back up the cluster state stored in ZooKeeper.
ZooKeeper stores its data in the directory specified by the dataDir
field in the ~/kafka/config/zookeeper.properties
configuration file. You need to read the value of this field to determine the directory to back up. By default, dataDir
points to the /tmp/zookeeper
directory. If the value is different in your installation, replace /tmp/zookeeper
with that value in the following commands.
Here is an example of the ~/kafka/config/zookeeper.properties file:
...
...
...
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=0
...
...
...
Now that you have the path to the directory, you can create a compressed archive file of its contents. Compressed archive files are a better option over regular archive files to save disk space. Run the following command:
- tar -czf /home/kafka/zookeeper-backup.tar.gz /tmp/zookeeper/*
You can safely ignore the command's output (tar: Removing leading / from member names).
The -c
and -z
flags tell tar
to create an archive and apply gzip compression to the archive. The -f
flag specifies the name of the output compressed archive file, which is zookeeper-backup.tar.gz
in this case.
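Before relying on the archive, you can check what tar stored in it. Here is a self-contained sketch that runs in a scratch directory with made-up file names (not your real ZooKeeper data); tar -tzf lists an archive's entries without extracting them:

```shell
# Build a throwaway directory tree mimicking /tmp/zookeeper, archive it,
# then list the archive's entries. With -C, entries are stored relative
# to the scratch directory and begin with tmp/zookeeper/ -- the same
# shape the real backup has after tar strips the leading /.
workdir=$(mktemp -d)
mkdir -p "$workdir/tmp/zookeeper/version-2"
echo "data" > "$workdir/tmp/zookeeper/version-2/log.1"
tar -czf "$workdir/zookeeper-backup.tar.gz" -C "$workdir" tmp/zookeeper
tar -tzf "$workdir/zookeeper-backup.tar.gz"
```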
You can run ls
in your current directory to see zookeeper-backup.tar.gz
as part of your output.
You have now successfully backed up the ZooKeeper data. In the next section, you will back up the actual Kafka data.
In this section, you will back up Kafka’s data directory into a compressed tar file like you did for ZooKeeper in the previous step.
Kafka stores topics, messages, and internal files in the directory that the log.dirs
field specifies in the ~/kafka/config/server.properties
configuration file. You need to read the value of this field to determine the directory to back up. By default and in your current installation, log.dirs
points to the /tmp/kafka-logs
directory. If the value is different in your installation, replace /tmp/kafka-logs
in the following commands with the correct value.
Here is an example of the ~/kafka/config/server.properties file:
...
...
...
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
...
...
...
First, stop the Kafka service so that the data in the log.dirs
directory is in a consistent state when creating the archive with tar
. To do this, return to your server’s non-root user by typing exit
and then run the following command:
- sudo systemctl stop kafka
After stopping the Kafka service, log back in as your kafka user with:
- sudo -iu kafka
It is necessary to stop and start the Kafka and ZooKeeper services as your non-root sudo user because the Apache Kafka installation prerequisite restricted the kafka user as a security precaution. That prerequisite step removed sudo access for the kafka user, so these systemctl commands would fail if executed as kafka.
Now, create a compressed archive file of the directory’s contents by running the following command:
- tar -czf /home/kafka/kafka-backup.tar.gz /tmp/kafka-logs/*
Once again, you can safely ignore the command’s output (tar: Removing leading / from member names
).
You can run ls
in the current directory to see kafka-backup.tar.gz
as part of the output.
If you do not want to restore the data immediately, you can start the Kafka service again by typing exit to switch to your non-root sudo user, and then running:
- sudo systemctl start kafka
Log back in as your kafka user:
- sudo -iu kafka
You have successfully backed up the Kafka data. You can now proceed to the next section, where you will be restoring the cluster state data stored in ZooKeeper.
In this section you will restore the cluster state data that Kafka creates and manages internally when the user performs operations such as creating a topic, adding/removing additional nodes, and adding and consuming messages. You will restore the data to your existing source installation by deleting the ZooKeeper data directory and restoring the contents of the zookeeper-backup.tar.gz
file. If you want to restore data to a different server, see Step 7.
You need to stop the Kafka and ZooKeeper services as a precaution against the data directories receiving invalid data during the restoration process.
First, stop the Kafka service by typing exit
, to switch to your non-root sudo user, and then running:
- sudo systemctl stop kafka
Next, stop the ZooKeeper service:
- sudo systemctl stop zookeeper
Log back in as your kafka user:
- sudo -iu kafka
You can then safely delete the existing cluster data directory with the following command:
- rm -r /tmp/zookeeper/*
Now restore the data you backed up in Step 2:
- tar -C /tmp/zookeeper -xzf /home/kafka/zookeeper-backup.tar.gz --strip-components 2
The -C flag tells tar to change to the directory /tmp/zookeeper before extracting the data. You specify the --strip-components 2 flag so that tar extracts the archive's contents into /tmp/zookeeper/ itself and not into another directory (such as /tmp/zookeeper/tmp/zookeeper/) inside of it.
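To see why two path components are stripped, here is a self-contained sketch using a scratch directory (the file names are made up; no real ZooKeeper data is involved):

```shell
# Archive a tree whose entries start with tmp/zookeeper/, then extract
# with --strip-components 2 so files land directly in the target
# directory instead of nesting under restore/tmp/zookeeper/.
workdir=$(mktemp -d)
mkdir -p "$workdir/tmp/zookeeper"
echo "snapshot" > "$workdir/tmp/zookeeper/snapshot.0"
tar -czf "$workdir/backup.tar.gz" -C "$workdir" tmp/zookeeper
mkdir "$workdir/restore"
tar -C "$workdir/restore" -xzf "$workdir/backup.tar.gz" --strip-components 2
ls "$workdir/restore"
```

The final ls shows snapshot.0 at the top level of the restore directory, with no intermediate tmp/zookeeper path.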
You have restored the cluster state data successfully. Now, you can proceed to the Kafka data restoration process in the next section.
In this section you will restore the backed up Kafka data to your existing source installation (or the destination server if you have followed the optional Step 7) by deleting the Kafka data directory and restoring the compressed archive file. This will allow you to verify that restoration works successfully.
You can safely delete the existing Kafka data directory with the following command:
- rm -r /tmp/kafka-logs/*
Now that you have deleted the data, your Kafka installation resembles a fresh installation with no topics or messages present in it. To restore your backed up data, extract the files by running:
- tar -C /tmp/kafka-logs -xzf /home/kafka/kafka-backup.tar.gz --strip-components 2
The -C flag tells tar to change to the directory /tmp/kafka-logs before extracting the data. You specify the --strip-components 2 flag to ensure that the archive's contents are extracted into /tmp/kafka-logs/ itself and not into another directory (such as /tmp/kafka-logs/tmp/kafka-logs/) inside of it.
Now that you have extracted the data successfully, you can start the Kafka and ZooKeeper services again by typing exit
, to switch to your non-root sudo user, and then executing:
- sudo systemctl start kafka
Start the ZooKeeper service with:
- sudo systemctl start zookeeper
Log back in as your kafka user:
- sudo -iu kafka
You have restored the kafka data. You can now move on to verifying that the restoration was successful in the next section.
To test the restoration of the Kafka data, you will consume messages from the topic you created in Step 1.
Wait a few minutes for Kafka to start up, and then execute the following command to read messages from BackupTopic:
- ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic BackupTopic --from-beginning
If you get a warning like the following, you need to wait for Kafka to start fully:
Output
[2018-09-13 15:52:45,234] WARN [Consumer clientId=consumer-1, groupId=console-consumer-87747] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Retry the previous command in another few minutes or run sudo systemctl restart kafka
as your non-root sudo user. If there are no issues in the restoration, you will see the following output:
Output
Test Message 1
If you do not see this message, check whether you missed any commands in the previous section and execute them.
You have now verified the restored Kafka data, which means you have successfully backed up and restored your data in a single Kafka installation. You can continue to Step 7 to see how to migrate the cluster and topic data to an installation on another server.
In this section, you will migrate the backed up data from the source Kafka server to the destination Kafka server. To do so, you will first use the scp
command to download the compressed tar.gz
files to your local system. You will then use scp
again to push the files to the destination server. Once the files are present in the destination server, you can follow the steps used previously to restore the backup and verify that the migration is successful.
You are downloading the backup files locally and then uploading them to the destination server, instead of copying them directly from the source to the destination server, because the destination server will not have your source server's SSH key in its /home/sammy/.ssh/authorized_keys file and cannot connect to and from the source server. Your local machine, however, can connect to both servers, saving you the additional step of setting up SSH access from the source to the destination server.
Download the zookeeper-backup.tar.gz file to your local machine by executing:
- scp sammy@source_server_ip:/home/kafka/zookeeper-backup.tar.gz .
You will see output similar to:
Output
zookeeper-backup.tar.gz                     100%   68KB 128.0KB/s   00:00
Now run the following command to download the kafka-backup.tar.gz
file to your local machine:
- scp sammy@source_server_ip:/home/kafka/kafka-backup.tar.gz .
You will see the following output:
Output
kafka-backup.tar.gz                         100% 1031KB 488.3KB/s   00:02
Run ls in the current directory of your local machine, and you will see both files:
Output
kafka-backup.tar.gz  zookeeper-backup.tar.gz
Run the following command to transfer the zookeeper-backup.tar.gz file to the /home/sammy/ directory of the destination server:
- scp zookeeper-backup.tar.gz sammy@destination_server_ip:/home/sammy/zookeeper-backup.tar.gz
Now run the following command to transfer the kafka-backup.tar.gz file to the /home/sammy/ directory of the destination server:
- scp kafka-backup.tar.gz sammy@destination_server_ip:/home/sammy/kafka-backup.tar.gz
You have uploaded the backup files to the destination server successfully. Since the files are in the /home/sammy/
directory and do not have the correct permissions for access by the kafka user, you can move the files to the /home/kafka/
directory and change their permissions.
SSH into the destination server by executing:
- ssh sammy@destination_server_ip
Now move zookeeper-backup.tar.gz
to /home/kafka/
by executing:
- sudo mv zookeeper-backup.tar.gz /home/kafka/zookeeper-backup.tar.gz
Similarly, run the following command to move kafka-backup.tar.gz to /home/kafka/:
- sudo mv kafka-backup.tar.gz /home/kafka/kafka-backup.tar.gz
Change the owner of the backup files by running the following command:
- sudo chown kafka /home/kafka/zookeeper-backup.tar.gz /home/kafka/kafka-backup.tar.gz
The previous mv
and chown
commands will not display any output.
Now that the backup files are present in the destination server at the correct directory, follow the commands listed in Steps 4 to 6 of this tutorial to restore and verify the data for your destination server.
In this tutorial, you backed up, imported, and migrated your Kafka topics and messages from both the same installation and installations on separate servers. If you would like to learn more about other useful administrative tasks in Kafka, you can consult the operations section of Kafka’s official documentation.
To store backed up files such as zookeeper-backup.tar.gz
and kafka-backup.tar.gz
remotely, you can explore DigitalOcean Spaces. If Kafka is the only service running on your server, you can also explore other backup methods such as full instance backups.
I get the following output when running apt update on my Debian:
Hit:1 http://security.debian.org stretch/updates InRelease
Ign:2 http://mirrors.digitalocean.com/debian stretch InRelease
Hit:3 http://mirrors.digitalocean.com/debian stretch-updates InRelease
Hit:4 http://mirrors.digitalocean.com/debian stretch Release
Ign:5 http://ftp.debian.org/debian jessie-backports InRelease
Err:6 http://ftp.debian.org/debian jessie-backports Release
404 Not Found [IP: 130.89.148.12 80]
Get:7 http://deb.goaccess.io stretch InRelease [2,520 B]
Get:8 https://packages.sury.org/php stretch InRelease [6,760 B]
Hit:9 https://download.docker.com/linux/debian stretch InRelease
Get:10 https://deb.nodesource.com/node_6.x stretch InRelease [4,635 B]
Hit:11 https://repos.sonar.digitalocean.com/apt main InRelease
Err:8 https://packages.sury.org/php stretch InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B188E2B695BD4743
Reading package lists... Done
E: The repository 'http://ftp.debian.org/debian jessie-backports Release' does no longer have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://packages.sury.org/php stretch InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B188E2B695BD4743
Any help will be much appreciated :)
ClickHouse is an open-source, column-oriented analytics database created by Yandex for OLAP and big data use cases. ClickHouse's support for real-time query processing makes it suitable for applications that require sub-second analytical results. ClickHouse's query language is a dialect of SQL that enables powerful declarative querying capabilities while offering familiarity and a smaller learning curve for the end user.
Column-oriented databases store records in blocks grouped by columns instead of rows. By not loading data for columns absent in the query, column-oriented databases spend less time reading data while completing queries. As a result, these databases can compute and return results much faster than traditional row-based systems for certain workloads, such as OLAP.
Online Analytics Processing (OLAP) systems allow for organizing large amounts of data and performing complex queries. They are capable of managing petabytes of data and returning query results quickly. In this way, OLAP is useful for work in areas like data science and business analytics.
In this tutorial, you’ll install the ClickHouse database server and client on your machine. You’ll use the DBMS for typical tasks and optionally enable remote access from another server so that you’ll be able to connect to the database from another machine. Then you’ll test ClickHouse by modeling and querying example website-visit data.
To follow this tutorial, you will need:
One Debian 9 server with a sudo-enabled non-root user and firewall setup. You can follow the initial server setup tutorial to create the user and set up the firewall.
(Optional) A second Debian 9 server with a sudo-enabled non-root user and firewall setup, needed only if you want to enable remote access. You can follow the initial server setup tutorial.
In this section, you will install the ClickHouse server and client programs using apt-get.
First, SSH into your server by running:
- ssh sammy@your_server_ip
dirmngr is a server for managing certificates and keys, and it is required for adding and verifying remote repository keys. Install it by running:
- sudo apt-get install -y dirmngr
Yandex maintains an APT repository that has the latest version of ClickHouse. Add the repository’s GPG key so that you’ll be able to securely download validated ClickHouse packages:
- sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E0C56BD4
You will see output similar to the following:
Output
Executing: /tmp/apt-key-gpghome.JkkcKnBAFY/gpg.1.sh --keyserver keyserver.ubuntu.com --recv E0C56BD4
gpg: key C8F1E19FE0C56BD4: public key "ClickHouse Repository Key <milovidov@yandex-team.ru>" imported
gpg: Total number processed: 1
gpg: imported: 1
The output confirms it has successfully verified and added the key.
Add the repository to your APT repositories list by executing:
- echo "deb http://repo.yandex.ru/clickhouse/deb/stable/ main/" | sudo tee /etc/apt/sources.list.d/clickhouse.list
Here you’ve piped the output of echo
to sudo tee
so that this output can print to a root-owned file.
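As a small sketch of the same pattern with a scratch file (not the real sources list), tee writes its standard input to the named file and also prints it back to the terminal:

```shell
# Write a repository line to a scratch file via tee. With sudo in front
# of tee (as above), the file write happens with root privileges even
# though echo runs as the regular user.
tmpfile=$(mktemp)
echo "deb http://repo.example.com/debian stable main" | tee "$tmpfile"
```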
Now, run apt-get update
to update your packages:
- sudo apt-get update
The clickhouse-server
and clickhouse-client
packages will now be available for installation. Install them with:
- sudo apt-get install -y clickhouse-server clickhouse-client
You’ve installed the ClickHouse server and client successfully. You’re now ready to start the database service and ensure that it’s running correctly.
The clickhouse-server
package that you installed in the previous section creates a systemd
service, which performs actions such as starting, stopping, and restarting the database server. systemd
is an init system for Linux to initialize and manage services. In this section you’ll start the service and verify that it is running successfully.
Start the clickhouse-server
service by running:
- sudo service clickhouse-server start
The previous command will not display any output. To verify that the service is running successfully, execute:
- sudo service clickhouse-server status
You’ll see output similar to the following:
Output
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
Loaded: loaded (/etc/systemd/system/clickhouse-server.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2018-12-22 07:23:20 UTC; 1h 9min ago
Main PID: 27101 (ClickHouse-serv)
Tasks: 34 (limit: 1152)
CGroup: /system.slice/ClickHouse-server.service
└─27101 /usr/bin/ClickHouse-server --config=/etc/ClickHouse-server/config.xml
The output denotes that the server is running.
You have successfully started the ClickHouse server and will now be able to use the clickhouse-client
CLI program to connect to the server.
In ClickHouse, you can create and delete databases by executing SQL statements directly in the interactive database prompt. Statements consist of commands following a particular syntax that tell the database server to perform a requested operation along with any data required. You create databases by using the CREATE DATABASE database_name syntax. To create a database, first start a client session by running the following command:
- clickhouse-client
This command will log you into the client prompt where you can run ClickHouse SQL statements to perform actions such as:
Creating, updating, and deleting databases, tables, indexes, partitions, and views.
Executing queries to retrieve data that is optionally filtered and grouped using various conditions.
In this step, with the ClickHouse client ready for inserting data, you’re going to create a database and table. For the purposes of this tutorial, you’ll create a database named test
, and inside that you’ll create a table named visits
that tracks website-visit durations.
Now that you’re inside the ClickHouse command prompt, create your test
database by executing:
- CREATE DATABASE test;
You’ll see the following output that shows that you have created the database:
Output
CREATE DATABASE test
Ok.
0 rows in set. Elapsed: 0.003 sec.
A ClickHouse table is similar to tables in other relational databases; it holds a collection of related data in a structured format. You can specify columns along with their types, add rows of data, and execute different kinds of queries on tables.
The syntax for creating tables in ClickHouse follows this example structure:
CREATE TABLE table_name
(
column_name1 column_type [options],
column_name2 column_type [options],
...
) ENGINE = engine
The table_name
and column_name
values can be any valid ASCII identifiers. ClickHouse supports a wide range of column types; some of the most popular are:
UInt64
: used for storing integer values in the range 0 to 18446744073709551615.
Float64
: used for storing floating point numbers such as 2039.23, 10.5, etc.
String
: used for storing variable length characters. It does not require a max length attribute since it can store arbitrary lengths.
Date
: used for storing dates that follow the YYYY-MM-DD
format.
DateTime
: used for storing dates coupled with time and follows the YYYY-MM-DD HH:MM:SS
format.
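The numeric range and date formats listed above can be sanity-checked with a few lines of Python (an illustration only; this is not ClickHouse code):

```python
from datetime import datetime

# UInt64 covers 0 through 18446744073709551615, i.e. 2**64 - 1.
assert 18446744073709551615 == 2**64 - 1

# Date values use the YYYY-MM-DD format, and
# DateTime values use YYYY-MM-DD HH:MM:SS.
d = datetime.strptime("2019-01-01", "%Y-%m-%d")
dt = datetime.strptime("2019-01-01 00:01:01", "%Y-%m-%d %H:%M:%S")
print(d.date(), dt.time())  # 2019-01-01 00:01:01
```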
After the column definitions, you specify the engine used for the table. In ClickHouse, Engines determine the physical structure of the underlying data, the table’s querying capabilities, its concurrent access modes, and support for indexes. Different engine types are suitable for different application requirements. The most commonly used and widely applicable engine type is MergeTree
.
Now that you have an overview of table creation, you’ll create a table. Start by confirming the database you’ll be modifying:
- USE test;
You will see the following output showing that you have switched to the test
database from the default
database:
Output
USE test
Ok.
0 rows in set. Elapsed: 0.001 sec.
The remainder of this guide will assume that you are executing statements within this database’s context.
Create your visits
table by running this command:
- CREATE TABLE visits (
- id UInt64,
- duration Float64,
- url String,
- created DateTime
- ) ENGINE = MergeTree()
- PRIMARY KEY id
- ORDER BY id;
Here’s a breakdown of what the command does. You create a table named visits
that has four columns:
id
: The primary key column. Note that, unlike in most traditional RDBMSs, a primary key in ClickHouse does not enforce uniqueness; each row should still be given a unique value for this column, as in this tutorial.
duration
: A float column used to store the duration of each visit in seconds. float
columns can store decimal values such as 12.50.
url
: A string column that stores the URL visited, such as http://example.com
.
created
: A date and time column that tracks when the visit occurred.
After the column definitions, you specify MergeTree
as the storage engine for the table. The MergeTree family of engines is recommended for production databases due to its optimized support for large real-time inserts, overall robustness, and query support. Additionally, MergeTree engines support sorting of rows by primary key, partitioning of rows, and replicating and sampling data.
If you intend to use ClickHouse for archiving data that is not queried often or for storing temporary data, you can use the Log family of engines to optimize for that use-case.
After the column definitions, you’ll define other table-level options. The PRIMARY KEY
clause sets id
as the primary key column and the ORDER BY
clause will store values sorted by the id
column. In ClickHouse, the primary key determines how rows are sorted and sparsely indexed on disk; it is used for efficiently accessing single rows and for efficient colocation of related rows.
On executing the create statement, you will see the following output:
Output
CREATE TABLE visits
(
id UInt64,
duration Float64,
url String,
created DateTime
)
ENGINE = MergeTree()
PRIMARY KEY id
ORDER BY id
Ok.
0 rows in set. Elapsed: 0.010 sec.
In this section, you’ve created a database and a table to track website-visits data. In the next step, you’ll insert data into the table, update existing data, and delete that data.
In this step, you’ll use your visits
table to insert, update, and delete data. The following command is an example of the syntax for inserting rows into a ClickHouse table:
INSERT INTO table_name VALUES (column_1_value, column_2_value, ....);
Now, insert a few rows of example website-visit data into your visits
table by running each of the following statements:
- INSERT INTO visits VALUES (1, 10.5, 'http://example.com', '2019-01-01 00:01:01');
- INSERT INTO visits VALUES (2, 40.2, 'http://example1.com', '2019-01-03 10:01:01');
- INSERT INTO visits VALUES (3, 13, 'http://example2.com', '2019-01-03 12:01:01');
- INSERT INTO visits VALUES (4, 2, 'http://example3.com', '2019-01-04 02:01:01');
You’ll see the following output repeated for each insert statement.
Output
INSERT INTO visits VALUES
Ok.
1 rows in set. Elapsed: 0.004 sec.
The output for each row shows that you’ve inserted it successfully into the visits
table.
Now you’ll add an additional column to the visits
table. When adding or deleting columns from existing tables, ClickHouse supports the ALTER
syntax.
For example, the basic syntax for adding a column to a table is as follows:
ALTER TABLE table_name ADD COLUMN column_name column_type;
Add a column named location
that will store the location of the visits to a website by running the following statement:
- ALTER TABLE visits ADD COLUMN location String;
You’ll see output similar to the following:
Output
ALTER TABLE visits
ADD COLUMN
location String
Ok.
0 rows in set. Elapsed: 0.014 sec.
The output shows that you have added the location
column successfully.
As of version 19.3.6, ClickHouse doesn’t support updating and deleting individual rows of data due to implementation constraints. ClickHouse has support for bulk updates and deletes, however, and has a distinct SQL syntax for these operations to highlight their non-standard usage.
The following syntax is an example for bulk updating rows:
ALTER TABLE table_name UPDATE column_1 = value_1, column_2 = value_2 ... WHERE filter_conditions;
You’ll run the following statement to update the url
column of all rows that have a duration
of less than 15. Enter it into the database prompt to execute:
- ALTER TABLE visits UPDATE url = 'http://example2.com' WHERE duration < 15;
The output of the bulk update statement will be as follows:
Output
ALTER TABLE visits
UPDATE url = 'http://example2.com' WHERE duration < 15
Ok.
0 rows in set. Elapsed: 0.003 sec.
The output shows that your update query completed successfully. The 0 rows in set
in the output denotes that the query did not return any rows; this will be the case for any update and delete queries.
The example syntax for bulk deleting rows is similar to updating rows and has the following structure:
ALTER TABLE table_name DELETE WHERE filter_conditions;
To test deleting data, run the following statement to remove all rows that have a duration
of less than 5:
- ALTER TABLE visits DELETE WHERE duration < 5;
The output of the bulk delete statement will be similar to:
Output
ALTER TABLE visits
DELETE WHERE duration < 5
Ok.
0 rows in set. Elapsed: 0.003 sec.
The output confirms that you have deleted the rows with a duration of less than five seconds.
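To keep track of what the table now contains, here is a small Python sketch (not ClickHouse code) that replays the same bulk-update and bulk-delete logic on the four sample rows inserted earlier:

```python
# The four rows inserted earlier: (id, duration, url, created)
rows = [
    (1, 10.5, 'http://example.com',  '2019-01-01 00:01:01'),
    (2, 40.2, 'http://example1.com', '2019-01-03 10:01:01'),
    (3, 13.0, 'http://example2.com', '2019-01-03 12:01:01'),
    (4, 2.0,  'http://example3.com', '2019-01-04 02:01:01'),
]

# ALTER TABLE visits UPDATE url = 'http://example2.com' WHERE duration < 15;
rows = [(i, d, 'http://example2.com' if d < 15 else u, c)
        for (i, d, u, c) in rows]

# ALTER TABLE visits DELETE WHERE duration < 5;
rows = [row for row in rows if row[1] >= 5]

for row in rows:
    print(row)
# Three rows remain, with ids 1, 2 and 3; rows 1 and 3 now point
# at http://example2.com, and row 4 (duration 2) is gone.
```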
To delete columns from your table, the syntax would follow this example structure:
ALTER TABLE table_name DROP COLUMN column_name;
Delete the location
column you added previously by running the following:
- ALTER TABLE visits DROP COLUMN location;
The DROP COLUMN
output confirming that you have deleted the column will be as follows:
Output
ALTER TABLE visits
DROP COLUMN
location String
Ok.
0 rows in set. Elapsed: 0.010 sec.
Now that you’ve successfully inserted, updated, and deleted rows and columns in your visits
table, you’ll move on to query data in the next step.
ClickHouse’s query language is a custom dialect of SQL with extensions and functions suited for analytics workloads. In this step, you’ll run selection and aggregation queries to retrieve data and results from your visits
table.
Selection queries allow you to retrieve rows and columns of data filtered by conditions that you specify, along with options such as the number of rows to return. You can select rows and columns of data using the SELECT
syntax. The basic syntax for SELECT
queries is:
SELECT func_1(column_1), func_2(column_2) FROM table_name WHERE filter_conditions row_options;
Execute the following statement to retrieve url
and duration
values for rows where the url
is http://example2.com
:
- SELECT url, duration FROM visits WHERE url = 'http://example2.com' LIMIT 2;
You will see the following output:
Output
SELECT
url,
duration
FROM visits
WHERE url = 'http://example2.com'
LIMIT 2
┌─url─────────────────┬─duration─┐
│ http://example2.com │ 10.5 │
└─────────────────────┴──────────┘
┌─url─────────────────┬─duration─┐
│ http://example2.com │ 13 │
└─────────────────────┴──────────┘
2 rows in set. Elapsed: 0.013 sec.
The output has returned two rows that match the conditions you specified. Now that you’ve selected values, you can move to executing aggregation queries.
Aggregation queries are queries that operate on a set of values and return single output values. In analytics databases, these queries are run frequently and are well optimized by the database. Some aggregate functions supported by ClickHouse are:
count
: returns the count of rows matching the conditions specified.
sum
: returns the sum of selected column values.
avg
: returns the average of selected column values.
Some ClickHouse-specific aggregate functions include:
uniq
: returns an approximate number of distinct rows matched.
topK
: returns an array of the most frequent values of a specific column using an approximation algorithm.
To demonstrate the execution of aggregation queries, you’ll calculate the total duration of visits by running the sum
query:
- SELECT SUM(duration) FROM visits;
You will see output similar to the following:
Output
SELECT SUM(duration)
FROM visits
┌─SUM(duration)─┐
│ 63.7 │
└───────────────┘
1 rows in set. Elapsed: 0.010 sec.
Now, calculate the top two URLs by executing:
- SELECT topK(2)(url) FROM visits;
You will see output similar to the following:
Output
SELECT topK(2)(url)
FROM visits
┌─topK(2)(url)──────────────────────────────────┐
│ ['http://example2.com','http://example1.com'] │
└───────────────────────────────────────────────┘
1 rows in set. Elapsed: 0.010 sec.
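Both aggregate results can be sanity-checked outside the database. This Python sketch (not ClickHouse code) recomputes the total duration and the two most frequent URLs from the three rows left in the table; note that ClickHouse’s topK is approximate, whereas Counter here is exact:

```python
from collections import Counter

# Rows remaining in visits after the updates and deletes: (duration, url)
visits = [
    (10.5, 'http://example2.com'),
    (40.2, 'http://example1.com'),
    (13.0, 'http://example2.com'),
]

total = sum(d for d, _ in visits)
print(round(total, 1))  # 63.7, matching SUM(duration)

# The exact equivalent of topK(2)(url) on this small data set:
top2 = [url for url, _ in Counter(u for _, u in visits).most_common(2)]
print(top2)  # ['http://example2.com', 'http://example1.com']
```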
Now that you have successfully queried your visits
table, you’ll delete tables and databases in the next step.
In this section, you’ll delete your visits
table and test
database.
The syntax for deleting tables follows this example:
DROP TABLE table_name;
To delete the visits
table, run the following statement:
- DROP TABLE visits;
You will see the following output declaring that you’ve deleted the table successfully:
Output
DROP TABLE visits
Ok.
0 rows in set. Elapsed: 0.005 sec.
You can delete databases using the DROP DATABASE database_name
syntax. To delete the test
database, execute the following statement:
- DROP DATABASE test;
The resulting output shows that you’ve deleted the database successfully.
Output
DROP DATABASE test
Ok.
0 rows in set. Elapsed: 0.003 sec.
You’ve deleted tables and databases in this step. Now that you’ve created, updated, and deleted databases, tables, and data in your ClickHouse instance, you’ll enable remote access to your database server in the next section.
If you intend to only use ClickHouse locally with applications running on the same server, or do not have a firewall enabled on your server, you don’t need to complete this section. If, instead, you’ll be connecting to the ClickHouse database server remotely, you should follow this step.
Currently your server has a firewall enabled that blocks your public IP address from accessing any ports. You’ll complete the following two steps to allow remote access:
Modify ClickHouse’s configuration and allow it to listen on all interfaces.
Add a firewall rule allowing incoming connections to port 8123
, which is the HTTP port that the ClickHouse server runs on.
If you are inside the database prompt, exit it by typing CTRL+D
.
Edit the configuration file by executing:
- sudo nano /etc/clickhouse-server/config.xml
Then uncomment the line containing <!-- <listen_host>0.0.0.0</listen_host> -->
, like the following file:
...
<interserver_http_host>example.yandex.ru</interserver_http_host>
-->
<!-- Listen specified host. use :: (wildcard IPv6 address), if you want to accept connections both with IPv4 and IPv6 from everywhere. -->
<!-- <listen_host>::</listen_host> -->
<!-- Same for hosts with disabled ipv6: -->
<listen_host>0.0.0.0</listen_host>
<!-- Default values - try listen localhost on ipv4 and ipv6: -->
<!--
<listen_host>::1</listen_host>
<listen_host>127.0.0.1</listen_host>
-->
...
Save the file and exit nano
. For the new configuration to apply, restart the service by running:
- sudo service clickhouse-server restart
You won’t see any output from this command. ClickHouse’s server listens on port 8123
for HTTP connections and port 9000
for connections from clickhouse-client
. Allow access to both ports for your second server’s IP address with the following command:
- sudo ufw allow from second_server_ip/32 to any port 8123
- sudo ufw allow from second_server_ip/32 to any port 9000
You will see the following output for both commands that shows that you’ve enabled access to both ports:
Output
Rule added
ClickHouse will now be accessible from the IP that you added. Feel free to add additional IPs such as your local machine’s address if required.
To verify that you can connect to the ClickHouse server from the remote machine, first follow the steps in Step 1 of this tutorial on the second server and ensure that you have the clickhouse-client
installed on it.
Now that you have logged into the second server, start a client session by executing:
- clickhouse-client --host your_server_ip
You will see the following output that shows that you have connected successfully to the server:
Output
ClickHouse client version 19.3.6.
Connecting to your_server_ip:9000 as user default.
Connected to ClickHouse server version 19.3.6 revision 54415.
hostname :)
In this step, you’ve enabled remote access to your ClickHouse database server by adjusting your firewall rules.
You have successfully set up a ClickHouse database instance on your server and created a database and table, added data, performed queries, and deleted the database. Within ClickHouse’s documentation you can read about their benchmarks against other open-source and commercial analytics databases and general reference documents. Further features ClickHouse offers include distributed query processing across multiple servers to improve performance, and protection against data loss by replicating data across different shards.
]]>I use the Cloudflare CDN for my site’s SEO optimization in Google.
I am a new webmaster.
Sometimes I have problems with Cloudflare and I want to change to another CDN company.
My site is about metal detectors, or فلزیاب.
Please point me to a CDN company that is better than Cloudflare.
Thanks.
]]>Suddenly, MariaDB stopped working and was unable to start. The logs refer to some memory issue related to InnoDB. After several hours trying to solve it by reading forums and other sources of information, the MariaDB server is still down and unable to start.
Maybe this terminal output can help.
mariadb.service - MariaDB 10.1.37 database server
Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/mariadb.service.d
└─oom.conf
Active: failed (Result: exit-code) since Fri 2019-03-08 19:35:41 UTC; 18s ago
Docs: man:mysqld(8)
https://mariadb.com/kb/en/library/systemd/
Process: 14298 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION
Process: 14186 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=`/usr/bi
Process: 14182 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exi
Process: 14178 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exit
Main PID: 14298 (code=exited, status=1/FAILURE)
Status: "MariaDB server is down"
Mar 08 19:35:40 blaudat-debian-s-3vcpu-1gb-fra1-01 systemd[1]: Starting MariaDB 10.1.37 database ser
Mar 08 19:35:41 blaudat-debian-s-3vcpu-1gb-fra1-01 mysqld[14298]: 2019-03-08 19:35:41 13967806592249
Mar 08 19:35:41 blaudat-debian-s-3vcpu-1gb-fra1-01 systemd[1]: mariadb.service: Main process exited,
Mar 08 19:35:41 blaudat-debian-s-3vcpu-1gb-fra1-01 systemd[1]: Failed to start MariaDB 10.1.37 datab
Mar 08 19:35:41 blaudat-debian-s-3vcpu-1gb-fra1-01 systemd[1]: mariadb.service: Unit entered failed
Mar 08 19:35:41 blaudat-debian-s-3vcpu-1gb-fra1-01 systemd[1]: mariadb.service: Failed with result
Thank you very much in advance.
]]>Quotas are used to limit the amount of disk space a user or group can use on a filesystem. Without such limits, a user could fill up the machine’s disk and cause problems for other users and services.
In this tutorial we will install command line tools to create and inspect disk quotas, then set a quota for an example user.
This tutorial assumes you are logged into a Debian 9 server, with a non-root, sudo-enabled user, as described in Initial Server Setup with Debian 9.
The techniques in this tutorial should generally work on Linux distributions other than Debian, but may require some adaptation.
To set and check quotas, we first need to install the quota command line tools using apt
. Let’s update our package list, then install the package:
sudo apt update
sudo apt install quota
You can verify that the tools are installed by running the quota
command and asking for its version information:
quota --version
Output
Quota utilities version 4.03.
. . .
It’s fine if your output shows a slightly different version number.
Next we will update our filesystem’s mount
options to enable quotas on our root filesystem.
To activate quotas on a particular filesystem, we need to mount it with a few quota-related options specified. We do this by updating the filesystem’s entry in the /etc/fstab
configuration file. Open that file in your favorite text editor now:
sudo nano /etc/fstab
The file’s contents will be similar to the following:
# /etc/fstab: static file system information.
UUID=06b2aae3-b525-4a4c-9549-0fc6045bd08e / ext4 errors=remount-ro 0 1
This fstab
file is from a virtual server. A desktop or laptop computer will probably have a slightly different looking fstab
, but in most cases you’ll have a /
or root filesystem that represents all of your disk space.
Update the line pointing to the root filesystem by adding options as follows:
# /etc/fstab: static file system information.
UUID=06b2aae3-b525-4a4c-9549-0fc6045bd08e / ext4 errors=remount-ro,usrquota,grpquota 0 1
You will add the new options to the end of any existing options, being sure to separate them all with a comma and no spaces. The above change will allow us to enable both user- (usrquota
) and group-based (grpquota
) quotas on the filesystem. If you only need one or the other, you may leave out the unused option.
Remount the filesystem to make the new options take effect:
sudo mount -o remount /
Note: Be certain there are no spaces between the options listed in your /etc/fstab
file. If you put a space after the ,
comma, you will see an error like the following:
Output
mount: /etc/fstab: parse error at line 2 -- ignored
If you see this message after running the previous mount
command, reopen the fstab
file, correct any errors, and repeat the mount
command before continuing.
We can verify that the new options were used to mount the filesystem by looking at the /proc/mounts
file. Here, we use grep
to show only the root filesystem entry in that file:
cat /proc/mounts | grep ' / '
Output
/dev/vda1 / ext4 rw,relatime,quota,usrquota,grpquota,errors=remount-ro,data=ordered 0 0
Note the two options that we specified. Now that we’ve installed our tools and updated our filesystem options, we can turn on the quota system.
Before finally turning on the quota system, we need to manually run the quotacheck
command once:
sudo quotacheck -ugm /
This command creates the files /aquota.user
and /aquota.group
. These files contain information about the limits and usage of the filesystem, and they need to exist before we turn on quota monitoring. The quotacheck
parameters we’ve used are:
u
: specifies that a user-based quota file should be created
g
: indicates that a group-based quota file should be created
m
: disables remounting the filesystem as read-only while performing the initial tallying of quotas. Remounting the filesystem as read-only will give more accurate results in case a user is actively saving files during the process, but is not necessary during this initial setup.
If you don’t need to enable user- or group-based quotas, you can leave off the corresponding quotacheck
option.
We can verify that the appropriate files were created by listing the root directory:
ls /
Output
aquota.group bin dev home initrd.img.old lib64 media opt root sbin sys usr vmlinuz
aquota.user boot etc initrd.img lib lost+found mnt proc run srv tmp var vmlinuz.old
If you didn’t include the u
or g
options in the quotacheck
command, the corresponding file will be missing. Now we’re ready to turn on the quota system:
sudo quotaon -v /
Output
/dev/vda1 [/]: group quotas turned on
/dev/vda1 [/]: user quotas turned on
Our server is now monitoring and enforcing quotas, but we’ve not set any yet! Next we’ll set a disk quota for a single user.
There are a few ways we can set quotas for users or groups. Here, we’ll go over how to set quotas with both the edquota
and setquota
commands.
Using edquota to Set a User Quota
We use the edquota
command to edit quotas. Let’s edit our example sammy user’s quota:
sudo edquota -u sammy
The -u
option specifies that this is a user
quota we’ll be editing. If you’d like to edit a group’s quota instead, use the -g
option in its place.
This will open up a file in the default text editor, similar to how crontab -e
opens a temporary file for you to edit. The file will look similar to this:
Disk quotas for user sammy (uid 1001):
Filesystem blocks soft hard inodes soft hard
/dev/vda1 24 0 0 7 0 0
This lists the username and uid
, the filesystems that have quotas enabled on them, and the block- and inode-based usage and limits. Setting an inode-based quota would limit how many files and directories a user can create, regardless of the amount of disk space they use. Most people will want block-based quotas, which specifically limit disk space usage. This is what we will configure.
Note: The concept of a block is poorly specified and can change depending on many factors, including which command line tool is reporting them. In the context of setting quotas on Debian, it’s fairly safe to assume that 1 block equals 1 kilobyte of disk space.
In the above listing, our user sammy is using 24 blocks, or 24KB of space on the /dev/vda1
drive. The soft
and hard
limits are both disabled with a 0
value.
Each type of quota allows you to set both a soft limit and a hard limit. When a user exceeds the soft limit, they are over quota, but they are not immediately prevented from consuming more space or inodes. Instead, some leeway is given: the user has – by default – seven days to get their disk use back under the soft limit. At the end of the seven day grace period, if the user is still over the soft limit it will be treated as a hard limit. A hard limit is less forgiving: all creation of new blocks or inodes is immediately halted when you hit the specified hard limit. This behaves as if the disk is completely out of space: writes will fail, temporary files will fail to be created, and the user will start to see warnings and errors while performing common tasks.
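The rules above can be condensed into a short decision sketch. This is an illustration of the described behaviour only (the function and its arguments are hypothetical, not part of the quota tools), assuming the default seven-day grace period and the 1 block = 1KB convention, so a 100MB soft limit is 102400 blocks:

```python
from datetime import datetime, timedelta

GRACE = timedelta(days=7)  # default block grace period

def write_allowed(used_blocks, soft, hard, over_soft_since, now):
    """Decide whether a new write may proceed under a block quota.

    A limit of 0 means that limit is disabled.
    """
    if hard and used_blocks >= hard:
        return False  # hard limit: writes stop immediately
    if soft and used_blocks >= soft:
        # Over the soft limit: allowed only while the grace period lasts.
        return now - over_soft_since <= GRACE
    return True

now = datetime(2019, 3, 1)
# Well under the 100M (102400-block) soft limit: allowed.
print(write_allowed(24, 102400, 112640, now, now))                          # True
# Over the soft limit for 3 days: still inside the grace period.
print(write_allowed(103000, 102400, 112640, now - timedelta(days=3), now))  # True
# Over the soft limit for 8 days: grace expired, treated like a hard limit.
print(write_allowed(103000, 102400, 112640, now - timedelta(days=8), now))  # False
# At the 110M (112640-block) hard limit: blocked immediately.
print(write_allowed(112640, 102400, 112640, now, now))                      # False
```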
Let’s update our sammy user to have a block quota with a 100MB soft limit, and a 110MB hard limit:
Disk quotas for user sammy (uid 1001):
Filesystem blocks soft hard inodes soft hard
/dev/vda1 24 100M 110M 7 0 0
Save and close the file. To check the new quota we can use the quota
command:
sudo quota -vs sammy
Output
Disk quotas for user sammy (uid 1001):
Filesystem space quota limit grace files quota limit grace
/dev/vda1 24K 100M 110M 7 0 0
The command outputs our current quota status, and shows that our quota is 100M
while our limit is 110M
. This corresponds to the soft and hard limits respectively.
Note: If you want your users to be able to check their own quotas without having sudo
access, you’ll need to give them permission to read the quota files we created in Step 4. One way to do this would be to make a users
group, make those files readable by the users
group, and then make sure all your users are also placed in the group.
To learn more about Linux permissions, including user and group ownership, please read An Introduction to Linux Permissions
Using setquota to Set a User Quota
Unlike edquota
, setquota
will update our user’s quota information in a single command, without an interactive editing step. We will specify the username and the soft and hard limits for both block- and inode-based quotas, and finally the filesystem to apply the quota to:
sudo setquota -u sammy 200M 220M 0 0 /
The above command will double sammy’s block-based quota limits to 200 megabytes and 220 megabytes. The 0 0
for inode-based soft and hard limits indicates that they remain unset. This is required even if we’re not setting any inode-based quotas.
Once again, use the quota
command to check our work:
sudo quota -vs sammy
Output
Disk quotas for user sammy (uid 1001):
Filesystem space quota limit grace files quota limit grace
/dev/vda1 24K 200M 220M 7 0 0
Now that we have set some quotas, let’s find out how to generate a quota report.
To generate a report on current quota usage for all users on a particular filesystem, use the repquota
command:
sudo repquota -s /
Output
*** Report for user quotas on device /dev/vda1
Block grace time: 7days; Inode grace time: 7days
Space limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 981M 0K 0K 35234 0 0
nobody -- 7664K 0K 0K 3 0 0
ntp -- 12K 0K 0K 3 0 0
_apt -- 8K 0K 0K 2 0 0
debian -- 16K 0K 0K 4 0 0
sammy -- 24K 200M 220M 7 0 0
In this instance we’re generating a report for the /
root filesystem. The -s
flag tells repquota
to use human-readable numbers when possible. There are a few system users listed, which probably have no quotas set by default. Our user sammy is listed at the bottom, with the amounts used and soft and hard limits.
Also note the Block grace time: 7days
callout, and the grace
column. If our user was over the soft limit, the grace
column would show how much time they had left to get back under the limit.
In the next step we’ll update the grace periods for our quota system.
We can configure the period of time where a user is allowed to float above the soft limit. We use the setquota
command to do so:
sudo setquota -t 864000 864000 /
The above command sets both the block and inode grace times to 864000 seconds, or 10 days. This setting applies to all users, and both values must be provided even if you don’t use both types of quota (block vs. inode).
Note that the values must be specified in seconds.
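As a quick arithmetic check, 864000 seconds works out to exactly 10 days:

```python
seconds = 864000
print(seconds // (60 * 60 * 24))  # 10
```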
Run repquota
again to check that the changes took effect:
sudo repquota -s /
Output
Block grace time: 10days; Inode grace time: 10days
. . .
The changes should be reflected immediately in the repquota
output.
In this tutorial we installed the quota
command line tools, set up a block-based quota for one user, and generated a report on our filesystem’s quota usage.
The following are some common errors you may see when setting up and manipulating filesystem quotas.
quotaon Output
quotaon: cannot find //aquota.group on /dev/vda1 [/]
quotaon: cannot find //aquota.user on /dev/vda1 [/]
This is an error you might see if you tried to turn on quotas (using quotaon
) before running the initial quotacheck
command. The quotacheck
command creates the aquota
or quota
files needed to turn on the quota system. See Step 3 for more information.
quota Output
quota: Cannot open quotafile //aquota.user: Permission denied
quota: Cannot open quotafile //aquota.user: Permission denied
quota: Cannot open quotafile //quota.user: No such file or directory
This is the error you’ll see if you run quota
and your current user does not have permission to read the quota files for your filesystem. You (or your system administrator) will need to adjust the file permissions appropriately, or use sudo
when running commands that require access to the quota file.
To learn more about Linux permissions, including user and group ownership, please read An Introduction to Linux Permissions
]]>This tutorial is for those who want a TeamSpeak server fast & easy without any extras. I will, however, also explain all the lovely options within TeamSpeak servers.
Old tutorials: Setting up on Ubuntu 15.04 Setting up on Debian 9.3
If you have any problems after using this tutorial, please leave a comment. I made this tutorial in the way I think is best; if you have any improvements or want to use something else, please tell me.
A droplet with Debian 9.x ( $5 is enough for a ts server) --> Use my referral link for a free $10
An SSH client / SFTP client ( I am using PuTTY and WinSCP )
First we need to connect to the droplet we created. Once the droplet is created, you will get an email with all the credentials in it. Open PuTTY and use these credentials to log in; the first time you log in you need to change the password. To make this server secure, I always like to use an RSA key. We will look into setting that up now.
Once you are logged in, there are multiple small things you can change to harden your system:
1. Securing the SSH service
You can secure your droplet by hardening the SSH service. The SSH service is used to connect to your droplet with an SSH client (like PuTTY) and runs on port 22. Attackers are known to scan the internet for port 22 and try to breach servers through it, so it is important to secure your server with just a few steps.
Let’s first set up SSH keys. When you are not using an SSH key, it means you are using a plain password. Once an attacker has found your new server with SSH running on port 22, they will try to crack this plain password.
An SSH key ensures the connection between client and server is encrypted with a specific key; only with this key will you be able to log in. To set up SSH keys check out my post here: Securing a Debian Server 9.X – Hardening SSH with keys
Also, disabling the root login and changing the SSH daemon port will help to at least slow down or limit attackers. Since the default SSH port is 22, scripts will search the internet for that port first; changing it will slow those scripts down, as will disabling root login. Check out this post here: Disabling root login and changing the SSH port
2. Updating your remote server
The first rule in the IT world: always update your servers. However, you have to make sure the new updates are compatible with everything running on your server. The most feared problem of an IT administrator is services suddenly not working anymore after an update. So yes, I do recommend you update your servers, but make sure it’s all tested & working.
apt-get update && apt-get upgrade
3. Setting up a firewall
A firewall could mean the world to you, but it could also give you a lot of pain. I have seen a lot of problems where you might think a service or configuration is the problem, but the firewall was actually blocking data. So I would recommend starting with something simple:
The easiest and fastest firewall I know of is UFW. UFW, or Uncomplicated Firewall, is an interface to iptables that is geared towards simplifying the process of configuring a firewall. While iptables is a solid and flexible tool, it can be difficult for beginners to learn how to use it to properly configure a firewall.
If you’re looking to get started securing your network, and you’re not sure which tool to use, UFW may be the right choice for you. Take a look at this post to setup UFW: Setting up UFW on Debian 9.X
4. Scanning our server with Rkhunter
Rkhunter (Rootkit Hunter) is an open source Unix/Linux based scanner tool for Linux systems that scans for backdoors, rootkits and local exploits on your systems.
It scans hidden files, wrong permissions set on binaries, suspicious strings in kernel etc. To know more about Rkhunter and its features visit http://www.rootkit.nl/. To setup Rkhunter and start scanning your system, check out this post here: Securing a Debian Server 9.X – Scanning for malicious items (Rkhunter)
For this tutorial I would like to keep the installation of teamspeak quick and simple. You will have your teamspeak up and running within 5 minutes!
Because of the fast setup, we will use SQLite as a database and no configuration file. I will however explain later on how to use MariaDB as a database and edit your teamspeak configuration.
Downloading & installing
Download the latest teamspeak server you can find on the teamspeak website.
cd /tmp
wget https://files.teamspeak-services.com/releases/server/3.6.1/teamspeak3-server_linux_amd64-3.6.1.tar.bz2
tar -xf teamspeak3-server_linux_amd64-3.6.1.tar.bz2
Now let’s move it to a safe and proper location.
mkdir /opt/teamspeak-server
mv teamspeak3-server_linux_amd64/* /opt/teamspeak-server
cd /opt/teamspeak-server
To keep our server secure, we will create another user and let it run the server.
useradd -d /opt/teamspeak-server teamspeak-user
chown -R teamspeak-user:teamspeak-user /opt/teamspeak-server
That’s the installation; now we can move on to starting the TeamSpeak server.
Accepting the license and starting TeamSpeak
Before you run the TeamSpeak 3 server, you are required to agree to its license. The license can be found in the file “license.txt” or “LICENSE” (depending on your platform), located in the same directory as the ts3server binary (the main folder).
We can accept it by creating a specific file.
su teamspeak-user
touch .ts3server_license_accepted
That’s it; we should now be able to start the TeamSpeak server with a free license.
./ts3server_startscript.sh start
The output should give you a token that will look something like this:
token=HhPRunideGIrFTClyxR6dn0N9fzLX6jFvOZjRJqa
The token above can be used once and will give you administrator access. You can now connect to your TeamSpeak server using your server’s IP.
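If you start the server from a provisioning script, the one-time token can be captured from the output. A minimal sketch, assuming the token=... line format shown above:

```shell
# Pull the one-time admin token out of the server's startup output.
# Assumes the token=... line format shown above.
extract_token() {
    sed -n 's/^token=//p'
}

# Example, using the sample output above:
echo 'token=HhPRunideGIrFTClyxR6dn0N9fzLX6jFvOZjRJqa' | extract_token
# prints HhPRunideGIrFTClyxR6dn0N9fzLX6jFvOZjRJqa
```

In a real setup you would pipe the start script's output through this helper instead of the echo.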
### Creating an auto(re)start script
For everyone who wants an auto-restart script using systemd:
To make the server restart automatically, you can set up a systemd service. systemd is a system and service manager for Linux that has become the de facto initialization daemon for most new Linux distributions. First implemented in Fedora, systemd now comes with RHEL 7 and its derivatives such as CentOS 7. Ubuntu 15.04 ships with native systemd as well, and other distributions have either incorporated systemd or announced they soon will.
Follow the next steps to create this service. Make sure you are the root user! To exit the teamspeak-user shell, hit CTRL + D.
Create the new script:
cat > /etc/systemd/system/teamspeak.service << EOF
[Unit]
Description=TeamSpeak3 Server
Wants=network-online.target
After=syslog.target network.target
[Service]
WorkingDirectory=/opt/teamspeak-server
User=teamspeak-user
Type=forking
Restart=always
ExecStart=/opt/teamspeak-server/ts3server_startscript.sh start
ExecStop=/opt/teamspeak-server/ts3server_startscript.sh stop
ExecReload=/opt/teamspeak-server/ts3server_startscript.sh reload
PIDFile=/opt/teamspeak-server/ts3server.pid
[Install]
WantedBy=multi-user.target
EOF
Hit Enter
Now stop the current server and enable the systemd service:
./ts3server_startscript.sh stop
systemctl enable teamspeak
systemctl start teamspeak
### Troubleshooting
In some cases, the server process terminates on startup and the error message reads
Server() error while starting servermanager, error: instance check error
As long as you do not use a license key, TeamSpeak makes sure you run exactly one instance of the free, unregistered version of the TS3 server. It uses shared memory to detect other running instances, which requires tmpfs to be mounted at /dev/shm.
If you (for whatever reason) do not have this mounted, the above error will occur. To fix this problem, the following commands or file edits need to be done as root user (or using something like sudo). This is a temporary fix until your next reboot.
mount -t tmpfs tmpfs /dev/shm
Now, to make sure this mount is done automatically upon reboot edit the file /etc/fstab and add the line:
tmpfs /dev/shm tmpfs defaults 0 0
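To check the mount before starting the server, here is a quick sketch; the helper reads /proc/mounts by default, and accepting an alternate mounts file as an argument is just a convenience for testing:

```shell
# Check whether tmpfs is mounted at /dev/shm, as the server requires.
# Reads /proc/mounts by default; a different mounts file can be
# passed as the first argument.
shm_mounted() {
    grep -qs ' /dev/shm tmpfs ' "${1:-/proc/mounts}"
}

if shm_mounted; then
    echo "/dev/shm is mounted"
else
    echo "missing - run: mount -t tmpfs tmpfs /dev/shm"
fi
```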
### Upgrading
When a new version comes out, you can easily upgrade the Teamspeak server. The steps that are needed:
1. Shut down the server:
systemctl stop teamspeak
2. Download the latest version (which you can find on their website), extract it, move it, and change the permissions:
cd /tmp
wget https://files.teamspeak-services.com/releases/server/3.6.1/teamspeak3-server_linux_amd64-3.6.1.tar.bz2
tar -xf teamspeak3-server_linux_amd64-3.6.1.tar.bz2
You can easily find the proper name of the tarball by typing the first 2–3 characters of the name and then hitting TAB; the shell will complete the rest for you.
Now let’s move the files to the /opt directory and remove the downloaded files:
cp -rf teamspeak3-server_linux_amd64/* /opt/teamspeak-server/
chown -R teamspeak-user:teamspeak-user /opt/teamspeak-server
rm -rf teamspeak3-server*
3. Start the server again:
systemctl start teamspeak
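The three upgrade steps above can be collected into one small script. A sketch, assuming the download URL pattern stays the same across releases; pass the new version number as the argument:

```shell
#!/bin/sh
set -e

# Build the download URL for a given version; the pattern matches the
# 3.6.1 URL used earlier in this tutorial.
ts_url() {
    echo "https://files.teamspeak-services.com/releases/server/$1/teamspeak3-server_linux_amd64-$1.tar.bz2"
}

# Repeat the manual upgrade steps: stop, download, extract, copy,
# fix ownership, clean up, start. Must be run as root.
upgrade_teamspeak() {
    version="$1"
    systemctl stop teamspeak
    cd /tmp
    wget "$(ts_url "$version")"
    tar -xf "teamspeak3-server_linux_amd64-$version.tar.bz2"
    cp -rf teamspeak3-server_linux_amd64/* /opt/teamspeak-server/
    chown -R teamspeak-user:teamspeak-user /opt/teamspeak-server
    rm -rf teamspeak3-server*
    systemctl start teamspeak
}
```

Called as `upgrade_teamspeak 3.6.1`, this performs exactly the manual steps above.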
### Other TeamSpeak options
If you would like to experiment a bit more with TeamSpeak, you can change the setup and edit some configuration. For example, you can use a MariaDB database instead of an SQLite file, or customize your TeamSpeak configuration.
Install MariaDB
MariaDB is a drop-in replacement for MySQL with better performance. The database will hold all users, settings, and other data of the TeamSpeak server instead of SQLite. Check out this great article on why (and whether) to switch.
If you already have a SQL database running, skip the first few steps and continue with creating a new user for the TeamSpeak server. Before we can install MariaDB, we need to update and upgrade the packages, so run the following:
apt-get update && apt-get upgrade
Now that’s done, we can install MariaDB:
apt-get install mariadb-client mariadb-server
Hit Y when asked to confirm.
Once the install process is finished, you have to secure your MariaDB installation with a new root password (the default is blank). Issue this command:
/usr/bin/mysql_secure_installation
Enter current password for root (enter for none): Enter
Set root password? [Y/n] y
New password: PassWordGoesHere
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] y
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y
Now the MariaDB service should be running with a new root password.
Let’s configure the database now. We will create a new user and database for the TeamSpeak server. Create the database with your own password:
mysql -u root -p
Enter the root user password
create database teamspeak;
GRANT ALL PRIVILEGES ON teamspeak.* TO 'teamspeak'@'localhost' IDENTIFIED BY 'TeamspeakUserPasswordGoesHere';
flush privileges;
quit
Replace TeamspeakUserPasswordGoesHere with a secure password.
Configuring teamspeak to use MariaDB
To get MariaDB working we need to do some extra work. First we have to add a library for TeamSpeak to use: symlink the libmariadb.so.2 library from the redist folder to the TeamSpeak 3 server directory.
ln -s /opt/teamspeak-server/redist/libmariadb.so.2 /opt/teamspeak-server/libmariadb.so.2
Run ldd to print the shared libraries required by the TeamSpeak 3 server.
ldd /opt/teamspeak-server/libts3db_mariadb.so
If libmariadb.so.2 => not found shows up, install the library with the following command:
apt-get install libmariadb2
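If you want to check several binaries at once, the ldd output can be filtered for unresolved libraries. A small sketch; the helper simply parses ldd's output, and the example path is the plugin checked above:

```shell
# Print the names of shared libraries that ldd reports as "not found".
# Reads ldd output on stdin so it can be chained after any ldd call.
missing_libs() {
    awk '/not found/ {print $1}'
}

# Example usage against the TeamSpeak MariaDB plugin:
# ldd /opt/teamspeak-server/libts3db_mariadb.so | missing_libs
```

An empty result means all required libraries were resolved.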
Configure Teamspeak
We are going to configure the TeamSpeak 3 server to use the MariaDB database. We have to manually create two config files:
ts3server.ini
ts3db_mariadb.ini
Create the config file with the MariaDB database options:
nano /opt/teamspeak-server/ts3server.ini
With the following inside of it:
machine_id=
default_voice_port=9987
voice_ip=0.0.0.0
licensepath=
filetransfer_port=30033
filetransfer_ip=0.0.0.0
query_port=10011
query_ip=0.0.0.0
query_ip_whitelist=query_ip_whitelist.txt
query_ip_blacklist=query_ip_blacklist.txt
dbsqlpath=sql/
dbplugin=ts3db_mariadb
dbsqlcreatepath=create_mariadb/
dbpluginparameter=ts3db_mariadb.ini
dbconnections=10
logpath=logs
logquerycommands=0
dbclientkeepdays=30
logappend=0
query_skipbruteforcecheck=0
To save, hit CTRL + X, then Y.
If you would like to know more about possible options for the TeamSpeak server, check out server_quickstart.txt inside the doc folder.
Now create the database config file for the TeamSpeak 3 server. Change PASSWORD to the password you set when configuring the MariaDB database:
nano /opt/teamspeak-server/ts3db_mariadb.ini
[config]
host=127.0.0.1
port=3306
username=teamspeak
password=PASSWORD
database=teamspeak
socket=
Now you need to change permissions of the new config files:
sudo chown -R teamspeak-user:teamspeak-user /opt/teamspeak-server
That’s it!
If you have any questions, please do not hesitate to ask them.
Or how can I download all content so I can upload it into the new space?
I am looking at my Droplet’s monitoring page on the DigitalOcean site, and the box “Bandwidth public” shows a near constant full line around 100Mbps (jittering around +/-10%). When hovering over the box, it says it is Outbound bandwidth that is being used.
I’m no Linux novice, but definitely no expert, so I rely a lot on Google.
I tried “netstat -nputw” which only showed my SSH connection. I tried nload which showed very little traffic. I tried iftop which again showed very little traffic.
Is this an issue with digitalocean’s monitoring system?
Should I be worried or look into it further? If yes, how?
Thanks!
I have a requirement to set up an IPsec VPN between my company’s Droplet and a network owned by a partner company that uses another provider. Tooling:
169.22.231.13
and a local address, e.g. 10.22.0.50
I’ve created a test environment in order to try out the tooling and feasibility of the task, consisting of 2 Droplets that I managed to connect according to the points above, and managed to achieve what I wanted (while testing with my own droplets).
Onto the real case - here’s the description of the remote server (owned by the partner company):
153.132.142.123
10.100.232.11
I have managed to set up a VPN tunnel between my droplet and the remote network, according to:
racoonctl show-sa ipsec
showing both in and out directions of the tunnel, with esp mode=tunnel
and state=mature
racoonctl -l show-sa isakmp
is showing correct destination and Phase 2 = 1
However, when I try to ping the 10.100.232.11
address, it hangs, and when partner service pings my internal IP (that I mapped in Security Association Database) they tell me this IP is unreachable.
I have the following suspicion: NAT, even though we both configured our VPNs with NAT Traversal = OFF. Can someone point me in the right direction? I would be most grateful to whomever could share some knowledge on this topic with me.
Thanks & Regards
I have followed this tutorial and I am having trouble getting SSL to work.
Normal HTTP works fine, but when I try to use HTTPS I get an ERR_ADDRESS_UNREACHABLE
error, or via curl: curl: (7) Failed to connect to mydomain.com port 443: No route to host
. The HTTPS requests don’t even show up in /var/log/nginx/access.log.
Here are the relevant files:
# /etc/nginx/sites-available/mydomain.com
server {
listen 80;
listen [::]:80;
root /var/www/mydomain.com/html;
index index.html;
server_name mydomain.com www.mydomain.com;
location / {
root /var/www/mydomain.com/html;
try_files $uri $uri/ =404;
}
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
# /var/www/mydomain.com/html/index.html
<html>
<head>
testing
</head>
</html>
My firewall (ufw) is inactive, port 443 is forwarded on my router, and nginx -t runs fine. Here are my DNS (GoDaddy) records. Both point to my IP.
I’m running Debian 9.6 (stretch) and Nginx 1.10.3.
Seafile is an open-source, self-hosted, file synchronization and sharing platform. Users can store and optionally encrypt data on their own servers with storage space as the only limitation. With Seafile you can share files and folders using cross-platform syncing and password-protected links to files with expiration dates. A file-versioning feature means that users can restore deleted and modified files or folders.
In this tutorial, you will install and configure Seafile on a Debian 9 server. You will use MariaDB (the default MySQL variant on Debian 9) to store data for the different components of Seafile, and Apache as the proxy server to handle the web traffic. After completing this tutorial, you will be able to use the web interface to access Seafile from desktop or mobile clients, allowing you to sync and share your files with other users or groups on the server or with the public.
Before you begin this guide, you’ll need the following:
One Debian 9 server with a minimum of 2GB of RAM set up by following this Initial Server Setup with Debian 9 tutorial, including a sudo non-root user and a firewall.
An Apache web server with a virtual host configured for the registered domain by following How To Install the Apache Web Server on Debian 9.
An SSL certificate installed on your server by following this How To Secure Apache with Let’s Encrypt on Debian 9 tutorial.
A fully registered domain name. This tutorial will use example.com
throughout.
Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.
- example.com pointing to your server’s public IP address.
- www.example.com pointing to your server’s public IP address.

A MariaDB database server installed and configured. Follow the steps in the How To Install MariaDB on Debian 9 tutorial. Skip Step 3 of this tutorial — “(Optional) Adjusting User Authentication and Privileges”. You will only be making local connections to the database server, so changing the authentication method for the root user is not necessary.
Seafile requires three components in order to work properly. These three components are:
- The Ccnet server.
- The Seafile file server.
- The Seahub web frontend.
Each of these components stores its data separately in its own database. In this step you will create the three MariaDB databases and a user before proceeding to set up the server.
First, log in to the server using SSH with your username and IP address:
ssh sammy@your_server_ip
Connect to the MariaDB database server as administrator (root):
- sudo mysql
At the MariaDB prompt, use the following SQL command to create the database user:
- CREATE USER 'sammy'@'localhost' IDENTIFIED BY 'password';
Next, you will create the following databases to store the data of the three Seafile components:
- ccnet-db for the Ccnet server.
- seahub-db for the Seahub web frontend.
- seafile-db for the Seafile file server.

At the MariaDB prompt, create your databases:
- CREATE DATABASE `ccnet-db` CHARACTER SET = 'utf8';
- CREATE DATABASE `seafile-db` CHARACTER SET = 'utf8';
- CREATE DATABASE `seahub-db` CHARACTER SET = 'utf8';
Then, grant all privileges to the Seafile database user to access and make changes in these databases:
- GRANT ALL PRIVILEGES ON `ccnet-db`.* to `sammy`@localhost;
- GRANT ALL PRIVILEGES ON `seafile-db`.* to `sammy`@localhost;
- GRANT ALL PRIVILEGES ON `seahub-db`.* to `sammy`@localhost;
Exit the MariaDB prompt by typing exit:
- exit
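As an aside, since the three databases get identical treatment, the statements above can be generated in a loop instead of typed out. A sketch that only prints the SQL; the names match the ones used in this step:

```shell
# Print the CREATE DATABASE and GRANT statements for the three Seafile
# databases. The user name 'sammy' matches the one created above.
seafile_db_sql() {
    for db in ccnet-db seafile-db seahub-db; do
        echo "CREATE DATABASE \`$db\` CHARACTER SET = 'utf8';"
        echo "GRANT ALL PRIVILEGES ON \`$db\`.* TO \`sammy\`@localhost;"
    done
}

# To execute the statements non-interactively:
# seafile_db_sql | sudo mysql
```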
Now that you have created a user and the databases required to store the data for each of the Seafile components, you will install dependencies to download the Seafile server package.
Some parts of Seafile are written in Python and therefore require additional Python modules and programs to work. In this step, you will install these required dependencies before downloading and extracting the Seafile server package.
To install the dependencies using apt
run the following command:
- sudo apt install python-setuptools python-pip python-urllib3 python-requests python-mysqldb ffmpeg
The python-setuptools
and python-pip
dependencies oversee installing and managing Python packages. The python-urllib3
and python-requests
packages make requests to websites. Finally, the python-mysqldb
is a library for using MariaDB from Python and ffmpeg
handles multimedia files.
Seafile requires Pillow
, a python library for image processing, and moviepy
to handle movie file thumbnails. These modules are not available in the Debian package repository. You will install them manually using pip:
- sudo pip install Pillow==4.3.0 moviepy
Now that you have installed the necessary dependencies, you can download the Seafile server package.
Seafile creates additional directories during setup. To keep them all organized, create a new directory and change into it:
- mkdir seafile
- cd seafile
You can now download the latest version (6.3.4
as of this writing) of the Seafile server from the website by running the following command:
- wget https://download.seadrive.org/seafile-server_6.3.4_x86-64.tar.gz
Seafile distributes the download as a compressed tar archive, which means you will need to extract it before proceeding. Extract the archive using tar
:
- tar -zxvf seafile-server_6.3.4_x86-64.tar.gz
Now change into the extracted directory:
- cd seafile-server-6.3.4
At this stage, you have downloaded and extracted the Seafile server package and have also installed the necessary dependencies. You are now ready to configure the Seafile server.
Seafile needs some information about your setup before you start the services for the first time. This includes details like the domain name, the database configuration, and the path where it will store data. To initiate the series of question prompts to provide this information, you can run the script setup-seafile-mysql.sh
, which is included in the archive you extracted in the previous step.
Run the script using bash
:
- bash setup-seafile-mysql.sh
Press ENTER
to continue.
The script will now prompt you with a series of questions. Wherever defaults are mentioned, pressing the ENTER
key will use that value.
This tutorial uses Seafile
as the server name, but you can change it if necessary.
Question 1
What is the name of the server?
It will be displayed on the client. 3 - 15 letters or digits
[ server name ] Seafile
Enter the domain name for this Seafile instance.
Question 2
What is the ip or domain of the server?.
For example: www.mycompany.com, 192.168.1.101
[ This server's ip or domain ] example.com
For Question 3
press ENTER
to accept the default value. If you have set up external storage, for example, using NFS or block storage, you will need to specify the path to that location here instead.
Question 3
Where do you want to put your seafile data?
Please use a volume with enough free space
[ default "/home/sammy/seafile/seafile-data" ]
For Question 4
press ENTER
to accept the default value.
Question 4
Which port do you want to use for the seafile fileserver?
[ default "8082" ]
The next prompt allows you to confirm the database configuration. You can create new databases or use existing databases for setup. For this tutorial you have created the necessary databases in Step 1, so select option 2
here.
-------------------------------------------------------
Please choose a way to initialize seafile databases:
-------------------------------------------------------
[1] Create new ccnet/seafile/seahub databases
[2] Use existing ccnet/seafile/seahub databases
[ 1 or 2 ] 2
Questions 6–9 relate to the MariaDB database server. You will only need to provide the username and password of the mysql user that you created in Step 1. Press ENTER
to accept the default values for host
and port
.
What is the host of mysql server?
[ default "localhost" ]
What is the port of mysql server?
[ default "3306" ]
Which mysql user to use for seafile?
[ mysql user for seafile ] sammy
What is the password for mysql user "sammy"?
[ password for sammy ] password
After providing the password, the script will request the names of the Seafile databases. Use ccnet-db
, seafile-db
, and seahub-db
for this tutorial. The script will then verify if there is a successful connection to the databases before proceeding to display a summary of the initial configuration.
Enter the existing database name for ccnet:
[ ccnet database ] ccnet-db
verifying user "sammy" access to database ccnet-db ... done
Enter the existing database name for seafile:
[ seafile database ] seafile-db
verifying user "sammy" access to database seafile-db ... done
Enter the existing database name for seahub:
[ seahub database ] seahub-db
verifying user "sammy" access to database seahub-db ... done
---------------------------------
This is your configuration
---------------------------------
server name: Seafile
server ip/domain: example.com
seafile data dir: /home/sammy/seafile/seafile-data
fileserver port: 8082
database: use existing
ccnet database: ccnet-db
seafile database: seafile-db
seahub database: seahub-db
database user: sammy
--------------------------------
Press ENTER to continue, or Ctrl-C to abort
---------------------------------
Press ENTER
to confirm.
OutputGenerating ccnet configuration ...
done
Successly create configuration dir /home/sammy/seafile/ccnet.
Generating seafile configuration ...
done
Generating seahub configuration ...
----------------------------------------
Now creating seahub database tables ...
----------------------------------------
creating seafile-server-latest symbolic link ... done
-----------------------------------------------------------------
Your seafile server configuration has been finished successfully.
-----------------------------------------------------------------
run seafile server: ./seafile.sh { start | stop | restart }
run seahub server: ./seahub.sh { start <port> | stop | restart <port> }
-----------------------------------------------------------------
If you are behind a firewall, remember to allow input/output of these tcp ports:
-----------------------------------------------------------------
port of seafile fileserver: 8082
port of seahub: 8000
When problems occur, Refer to
https://github.com/haiwen/seafile/wiki
for information.
As you will be running Seafile behind Apache, opening ports 8082
and 8000
in the firewall is not necessary, so you can ignore this part of the output.
You have completed the initial configuration of the server. In the next step, you will configure the Apache web server before starting the Seafile services.
In this step, you will configure the Apache web server to forward all requests to Seafile. Using Apache in this manner allows you to use a URL without a port number, enable HTTPS connections to Seafile, and make use of the caching functionality that Apache provides for better performance.
To begin forwarding requests, you will need to enable the proxy_http
module in the Apache configuration. This module provides features for proxying HTTP and HTTPS requests. The following command will enable the module:
- sudo a2enmod proxy_http
Note: The Apache rewrite and ssl modules are also required for this setup. You have already enabled these modules as part of configuring Let’s Encrypt in the second Apache tutorial listed in the prerequisites section.
Next, update the virtual host configuration of example.com
to forward requests to the Seafile file server and to the Seahub web interface.
Open the configuration file in a text editor:
- sudo nano /etc/apache2/sites-enabled/example.com-le-ssl.conf
The lines from ServerAdmin
to SSLCertificateKeyFile
are part of the initial Apache and Let’s Encrypt configuration that you set up as part of the prerequisite. Add the highlighted content, beginning at Alias
and ending with the ProxyPassReverse
directive:
<IfModule mod_ssl.c>
<VirtualHost *:443>
ServerAdmin admin@example.com
ServerName example.com
ServerAlias www.example.com
DocumentRoot /var/www/example.com/html
ErrorLog ${APACHE_LOG_DIR}/example.com-error.log
CustomLog ${APACHE_LOG_DIR}/example.com-access.log combined
Include /etc/letsencrypt/options-ssl-apache.conf
SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
Alias /media /home/sammy/seafile/seafile-server-latest/seahub/media
<Location /media>
Require all granted
</Location>
# seafile fileserver
ProxyPass /seafhttp http://127.0.0.1:8082
ProxyPassReverse /seafhttp http://127.0.0.1:8082
RewriteEngine On
RewriteRule ^/seafhttp - [QSA,L]
# seahub web interface
SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
ProxyPass / http://127.0.0.1:8000/
ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>
</IfModule>
The Alias directive maps the URL path example.com/media
to a local path in the file system that Seafile uses. The following Location
directive enables access to content in this directory. The ProxyPass
and ProxyPassReverse
directives make Apache act as a reverse proxy for this host, forwarding requests to /
and /seafhttp
to the Seafile web interface and file server running on local host ports 8000
and 8082
respectively. The RewriteRule
directive passes all requests to /seafhttp
unchanged and stops processing further rules ([QSA,L]
).
Save and exit the file.
Test if there are any syntax errors in the virtual host configuration:
- sudo apache2ctl configtest
If it reports Syntax OK
, then there are no issues with your configuration. Restart Apache for the changes to take effect:
- sudo systemctl restart apache2
You have now configured Apache to act as a reverse proxy for the Seafile file server and Seahub. Next, you will update the URLs in Seafile’s configuration before starting the services.
As you are now using Apache to proxy all requests to Seafile, you will need to update the URLs in Seafile’s configuration files in the conf
directory using a text editor before you start the Seafile service.
Open ccnet.conf
in a text editor:
- nano /home/sammy/seafile/conf/ccnet.conf
Modify the SERVICE_URL
setting in the file to point to the new HTTPS URL without the port number, for example:
SERVICE_URL = https://example.com
Save and exit the file once you have added the content.
Now open seahub_settings.py
in a text editor:
- nano /home/sammy/seafile/conf/seahub_settings.py
You can now add a FILE_SERVER_ROOT
setting in the file to specify the path where the file server is listening for file uploads and downloads:
# -*- coding: utf-8 -*-
SECRET_KEY = "..."
FILE_SERVER_ROOT = 'https://example.com/seafhttp'
# ...
Save and exit seahub_settings.py
.
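Both config edits can also be scripted, which is handy when provisioning. A sketch using sed and printf against the two files above; the key names SERVICE_URL and FILE_SERVER_ROOT are the ones shown in this step, and running against a backup copy first is prudent:

```shell
# Rewrite the SERVICE_URL line in a ccnet.conf-style file in place.
set_service_url() {
    sed -i "s|^SERVICE_URL = .*|SERVICE_URL = $2|" "$1"
}

# Append a FILE_SERVER_ROOT setting to a seahub_settings.py-style file.
set_file_server_root() {
    printf "FILE_SERVER_ROOT = '%s'\n" "$2" >> "$1"
}

# Example (paths as in this tutorial):
# set_service_url /home/sammy/seafile/conf/ccnet.conf https://example.com
# set_file_server_root /home/sammy/seafile/conf/seahub_settings.py https://example.com/seafhttp
```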
Now you can start the Seafile service and the Seahub interface:
- cd /home/sammy/seafile/seafile-server-6.3.4
- ./seafile.sh start
- ./seahub.sh start
As this is the first time you have started the Seahub service, it will prompt you to create an admin account. Enter a valid email address and a password for this admin user:
OutputWhat is the email for the admin account?
[ admin email ] admin@example.com
What is the password for the admin account?
[ admin password ] password-here
Enter the password again:
[ admin password again ] password-here
----------------------------------------
Successfully created seafile admin
----------------------------------------
Seahub is started
Done.
Open https://example.com
in a web browser and log in using your Seafile admin email address and password.
Once logged in successfully, you can access the admin interface or create new users.
Now that you have verified the web interface is working correctly, you can enable these services to start automatically at system boot in the next step.
To enable the file server and the web interface to start automatically at boot, you can create the respective systemd
service files and activate them.
Create a systemd
service file for the Seafile file server:
- sudo nano /etc/systemd/system/seafile.service
Add the following content to the file:
[Unit]
Description=Seafile
After=network.target mysql.service
[Service]
Type=forking
ExecStart=/home/sammy/seafile/seafile-server-latest/seafile.sh start
ExecStop=/home/sammy/seafile/seafile-server-latest/seafile.sh stop
User=sammy
Group=sammy
[Install]
WantedBy=multi-user.target
Here, the ExecStart
and ExecStop
lines indicate the commands that run to start and stop the Seafile service. The service will run with sammy
as the User
and Group
. The After
line specifies that the Seafile service will start after the networking and MariaDB services have started.
Save seafile.service
and exit.
Create a systemd
service file for the Seahub web interface:
- sudo nano /etc/systemd/system/seahub.service
This is similar to the Seafile service. The only difference is that the web interface is started after the Seafile service. Add the following content to this file:
[Unit]
Description=Seafile hub
After=network.target seafile.service
[Service]
Type=forking
ExecStart=/home/sammy/seafile/seafile-server-latest/seahub.sh start
ExecStop=/home/sammy/seafile/seafile-server-latest/seahub.sh stop
User=sammy
Group=sammy
[Install]
WantedBy=multi-user.target
Save seahub.service
and exit.
You can learn more about systemd unit files in the Understanding Systemd Units and Unit Files tutorial.
Finally, to enable both the Seafile and Seahub services to start automatically at boot, run the following commands:
- sudo systemctl enable seafile.service
- sudo systemctl enable seahub.service
When the server is rebooted, Seafile will start automatically.
At this point, you have completed setting up the server, and can now test each of the services.
In this step, you will test the file synchronization and sharing functionality of the server you have set up and ensure they are working correctly. To do this, you will need to install the Seafile client program on a separate computer and/or a mobile device.
Visit the download page on the Seafile website and follow the instructions to install the latest version of the program on your computer. Seafile clients are available for the various distributions of Linux (Ubuntu, Debian, Fedora, CentOS/RHEL, Arch Linux), macOS, and Windows. Mobile clients are available for Android and iPhone/iPad devices from the respective app stores.
Once you have installed the Seafile client, you can test the file synchronization and sharing functionality.
Open the Seafile client program on your computer or device. Accept the default location for the Seafile folder and click Next.
In the next window, enter the server address, username, and password, then click Login.
At the home page, right click on My Library and click Sync this library. Accept the default value for the location on your computer or device.
Add a file, for example a document or a photo, into the My Library folder. After some time, the file will upload to the server. The following screenshot shows the file photo.jpg copied to the My Library folder.
Now, log in to the web interface at https://example.com
and verify that your file is present on the server.
Click on Share next to the file to generate a download link for this file that you can share.
You have verified that the file synchronization is working correctly and that you can use Seafile to sync and share files and folders from multiple devices.
In this tutorial you set up a private instance of a Seafile server. Now you can start using the server to synchronize files, add users and groups, and share files between them or with the public without relying on an external service.
When a new release of the server is available, please consult the upgrade section of the manual for steps to perform an upgrade.
I have a site that is hosted on a Droplet; however, at certain moments it gets very slow.
Memory consumption does not exceed 50%, and CPU usage is only 33%.
As far as processes are concerned, apache2 always consumes more than 100% of memory.
What can I do?
YunoHost is an open-source platform that facilitates the seamless installation and configuration of self-hosted web applications, including webmail clients, password managers, and even WordPress sites. Self-hosting webmail and other applications provides privacy and control over your personal information. YunoHost allows you to configure settings, create users, and self-host your own applications from its graphical user interface. A marketplace of applications is available through YunoHost to add to your hosting environment. The frontend UI acts as a homepage for all of your applications.
In this tutorial, you will install and configure YunoHost on a server running Debian 9. To achieve this, you will configure your DNS records using DigitalOcean, secure your YunoHost instance with Let’s Encrypt, and install your chosen web applications.
One Debian 9 server with at least 1 GB of memory, with a sudo non-root user and firewall configured on your server following the Debian 9 Initial Server Setup tutorial.
A domain name configured to point to your server. You can learn how to point domains to DigitalOcean Droplets by following the How to Set Up a Host Name with DigitalOcean tutorial.
In this step, you will install YunoHost using the official installation script. YunoHost provides this open-source script that guides you through installing and configuring everything necessary for YunoHost to operate.
Before you download the install script, move into a temporary directory. The contents of the /tmp
directory are deleted on reboot, so the script, which you will not need after you’ve installed YunoHost, will be cleaned up automatically:
- cd /tmp
Next, run the following command to download the official install script from YunoHost:
- wget -O yunohost https://install.yunohost.org/
This command downloads the script and saves it to the current directory as a file called yunohost
.
Now you can run the script with sudo:
- sudo /bin/bash yunohost
When asked to overwrite configuration files, select yes.
You will then see a Post-installation screen confirming YunoHost’s installation.
Select Yes to proceed to the post-installation process.
When asked to enter the Main domain, enter the domain name you want to use to access your YunoHost instance. Then choose and enter a secure password for the administrator account.
You have now installed YunoHost on your server. In the next step, you will log in to your fresh YunoHost instance to configure and manage domains.
Now that you have YunoHost installed, you can access the admin panel for the first time. You will set up the domain where you would like to host YunoHost by configuring your DNS records.
To start, type either the IP address of your server or the domain name you chose in the last step into your web browser. You’ll see a screen warning that your connection is not private.
The connection is not yet secure because YunoHost uses a self-signed certificate by default. You can visit the site anyway since you’ll secure your site with Let’s Encrypt in the next step.
Now, enter the admin password you set in the previous step to access YunoHost’s admin panel.
In order for YunoHost to function properly, you will configure the DNS settings for your domain name. From the admin panel, navigate to the Domains section and select your domain name. You’ll now see the Operations page where you can access the DNS configuration settings.
Select the DNS configuration button. YunoHost will display a sample zone file for your domain. You’ll use this file to configure the records for your domain.
To start configuring your DNS records, access your domain host. This tutorial walks through configuring DNS records via DigitalOcean’s control panel.
Log in to your DigitalOcean account and click on Networking in the menu. Enter your YunoHost domain in the Domain field and click Add Domain.
You’ll be taken to your domain name’s edit page. On this page, you’ll see the fields where you can add the YunoHost records.
There will be three NS records already set up that specify that the DigitalOcean name servers are providing DNS services for your domain. You can now add the following records using the sample file provided by YunoHost:
Create two new A records:

- @ for the name; choose your Droplet or IP address in the Will Direct To box, and leave the TTL at 3600.
- * for the name; choose your Droplet or IP address in the Will Direct To box, and leave the TTL at 3600.

Create two new SRV records:

- _xmpp-client._tcp for the hostname, 5222 for the port, 0 for the priority, 5 for the weight, and change the TTL to 3600.
- _xmpp-server._tcp for the hostname, 5269 for the port, 0 for the priority, 5 for the weight, and change the TTL to 3600.

Create three new CNAME records:

- muc for the hostname, @ in the “is an alias of” box, and set the TTL to 3600.
- pubsub for the hostname, @ in the “is an alias of” box, and set the TTL to 3600.
- vjud for the hostname, @ in the “is an alias of” box, and set the TTL to 3600.

For your Mail configuration, create the following records:

- An MX record with @ for the hostname, your domain name for the mail server with a priority of 10, and the TTL at 3600.
- A TXT record with the SPF value from the sample file (it begins "v=spf1"); add @ to the hostname, and leave the TTL at 3600.
- A TXT record with the DKIM value from the sample file; add mail._domainkey to the hostname, and leave the TTL at 3600.
- A TXT record with the value "v=DMARC1; p=none"; add _dmarc to the hostname, and leave the TTL at 3600.

And finally, for Let’s Encrypt, configure a CAA record:

- @ for the hostname; add letsencrypt.org to the “authority granted for” box; set the tag to issue, the flags to 128, and the TTL to 3600.

Once you have added all of the DNS records, you’ll see a list on your domain’s control panel. You can also read this guide for more information on managing your records through the DigitalOcean control panel.
You have configured all the DNS records necessary for the YunoHost services to work. In the next step you’ll secure your connection by installing Let’s Encrypt.
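If you’d rather script the DNS records than click through the control panel, the same record set can be expressed as payloads for the DigitalOcean API’s domain-records endpoint. This is only a sketch: the field names (type, name, data, priority, port, ttl, weight, flags, tag) follow that API, the IP is a placeholder, the SPF and DKIM TXT records are omitted because their full values come from YunoHost’s sample file, and you would still POST each payload with your own API token.

```python
# Sketch: the YunoHost record set as DigitalOcean API payloads.
# Each dict could be POSTed to /v2/domains/<your_domain>/records.
server_ip = "203.0.113.10"  # placeholder: your Droplet's IP address

records = [
    {"type": "A", "name": "@", "data": server_ip, "ttl": 3600},
    {"type": "A", "name": "*", "data": server_ip, "ttl": 3600},
    {"type": "SRV", "name": "_xmpp-client._tcp", "data": "@",
     "port": 5222, "priority": 0, "weight": 5, "ttl": 3600},
    {"type": "SRV", "name": "_xmpp-server._tcp", "data": "@",
     "port": 5269, "priority": 0, "weight": 5, "ttl": 3600},
    {"type": "CNAME", "name": "muc", "data": "@", "ttl": 3600},
    {"type": "CNAME", "name": "pubsub", "data": "@", "ttl": 3600},
    {"type": "CNAME", "name": "vjud", "data": "@", "ttl": 3600},
    {"type": "MX", "name": "@", "data": "@", "priority": 10, "ttl": 3600},
    {"type": "TXT", "name": "_dmarc", "data": "v=DMARC1; p=none", "ttl": 3600},
    {"type": "CAA", "name": "@", "data": "letsencrypt.org",
     "flags": 128, "tag": "issue", "ttl": 3600},
]

# Quick consistency checks before sending anything
assert len(records) == 10
assert all(r["ttl"] == 3600 for r in records)
assert {r["type"] for r in records} == {"A", "SRV", "CNAME", "MX", "TXT", "CAA"}
```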
In this step you will configure an SSL certificate via Let’s Encrypt to ensure that your connection is secured by encrypted HTTPS each time you or users log in to your site. YunoHost includes a function to install Let’s Encrypt to your domain through the user interface.
In the Domains section of the admin panel, select your domain name again. Navigate down to the Operations section. From here, under Manage SSL certificates, select SSL certificates. You’ll see an option to Install a Let’s Encrypt certificate; select this to install the certificate.
You will now have a Let’s Encrypt certificate installed for your domain. You will no longer see the warning messages when you visit your domain or IP address. Your Let’s Encrypt certificate will automatically renew by default. To manually renew your Let’s Encrypt certificate or revert to a self-signed certificate in the future, you can use this Operations page.
You have configured and secured your domain. In the next section you’ll set up a new user and email account to begin installing applications to your YunoHost instance.
YunoHost provides the ability to install a number of pre-packaged web applications alongside each other. To begin installing and using applications, you need to create a regular, non-admin user and email account. You can do this through the admin panel.
From the root of the admin panel, navigate to the Users section.
Select the green New user button to the right of your screen. Enter the desired credentials for the new user in the fields provided.
You’ve finished creating the user. By default, this user already has an associated email address, which you can access through any IMAP email client. Alternatively, you can install a webmail client on YunoHost to accomplish this, which you will do as part of this tutorial.
You have configured all of YunoHost’s basic functions and created a user, complete with an email account. You can now access the applications that are ready for installation through the admin panel. In this tutorial, you’ll install Rainloop, a lightweight webmail app, but you can follow these instructions to install any of the available applications.
Navigate to the Applications section of the admin panel. From here, you can select and install any of the official applications.
Select Rainloop from the list. You will see some configuration options for the application.
By default, the application will be installed at the path /rainloop. If you’d like it to be at the root of the domain, simply enter /. Keep in mind that if you do so, you will not be able to use any other applications with that domain.

Once finished, click the green Install button.
You’ve installed Rainloop. Open a new browser tab and navigate to the path you chose for the application (example.com/rainloop). You will see the Rainloop main dashboard.
You can repeat Step 4 to create more users and install further applications as you wish.
In the Applications section of the admin panel, it is also possible to install custom applications from third parties by pulling from GitHub repositories.
You now have a secure YunoHost instance configured on your server.
In this tutorial you have installed YunoHost on your server, created an email account, and installed an application. You have a central place to host all your applications alongside each other, including a webmail client to check your email. See the YunoHost website for a full list of applications, both official and unofficial. Also see the official Troubleshooting guide that provides information on services, configuration, and upgrades to YunoHost.
]]>Docker can be an efficient way to run web applications in production, but you may want to run multiple applications on the same Docker host. In this situation, you’ll need to set up a reverse proxy since you only want to expose ports 80
and 443
to the rest of the world.
Traefik is a Docker-aware reverse proxy that includes its own monitoring dashboard. In this tutorial, you’ll use Traefik to route requests to two different web application containers: a WordPress container and an Adminer container, each talking to a MySQL database. You’ll configure Traefik to serve everything over HTTPS using Let’s Encrypt.
To follow along with this tutorial, you will need the following:
- Three subdomains, db-admin, blog, and monitor, that each point to the IP address of your server. You can learn how to point domains to DigitalOcean Droplets by reading through DigitalOcean’s Domains and DNS documentation. Throughout this tutorial, substitute your domain for your_domain in the configuration files and examples.

The Traefik project has an official Docker image, so we will use that to run Traefik in a Docker container.
Before we get our Traefik container up and running, though, we need to create a configuration file and set up an encrypted password so we can access the monitoring dashboard.
We’ll use the htpasswd
utility to create this encrypted password. First, install the utility, which is included in the apache2-utils
package:
- sudo apt install apache2-utils
Then generate the password with htpasswd
. Substitute secure_password
with the password you’d like to use for the Traefik admin user:
- htpasswd -nb admin secure_password
The output from the program will look like this:
Outputadmin:$apr1$ruca84Hq$mbjdMZBAG.KWn7vfN/SNK/
You’ll use this output in the Traefik configuration file to set up HTTP Basic Authentication for the Traefik health check and monitoring dashboard. Copy the entire output line so you can paste it later.
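If htpasswd isn’t available for some reason, openssl can produce a compatible APR1-MD5 hash; this is just a sketch, and secure_password is a placeholder for your own password:

```shell
# htpasswd-compatible APR1-MD5 hash via openssl (password is a placeholder)
hash=$(openssl passwd -apr1 secure_password)
echo "admin:$hash"
```

Either tool yields a line in the same admin:$apr1$... format that the Traefik configuration expects.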
To configure the Traefik server, we’ll create a new configuration file called traefik.toml
using the TOML format. TOML is a configuration language similar to INI files, but standardized. This file lets us configure the Traefik server and various integrations, or providers, we want to use. In this tutorial, we will use three of Traefik’s available providers: api
, docker
, and acme
, which is used to support TLS using Let’s Encrypt.
Open up your new file in nano
or your favorite text editor:
- nano traefik.toml
First, add two named entry points, http
and https
, that all backends will have access to by default:
defaultEntryPoints = ["http", "https"]
We’ll configure the http
and https
entry points later in this file.
Next, configure the api
provider, which gives you access to a dashboard interface. This is where you’ll paste the output from the htpasswd
command:
...
[entryPoints]
[entryPoints.dashboard]
address = ":8080"
[entryPoints.dashboard.auth]
[entryPoints.dashboard.auth.basic]
users = ["admin:your_encrypted_password"]
[api]
entrypoint="dashboard"
The dashboard is a separate web application that will run within the Traefik container. We set the dashboard to run on port 8080
.
The entrypoints.dashboard
section configures how we’ll be connecting with the api
provider, and the entrypoints.dashboard.auth.basic
section configures HTTP Basic Authentication for the dashboard. Use the output from the htpasswd
command you just ran for the value of the users
entry. You could specify additional logins by separating them with commas.
We’ve defined our first entryPoint
, but we’ll need to define others for standard HTTP and HTTPS communication that isn’t directed towards the api
provider. The entryPoints
section configures the addresses that Traefik and the proxied containers can listen on. Add these lines to the file underneath the entryPoints
heading:
...
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
...
The http
entry point handles port 80
, while the https
entry point uses port 443
for TLS/SSL. We automatically redirect all of the traffic on port 80
to the https
entry point to force secure connections for all requests.
Next, add this section to configure Let’s Encrypt certificate support for Traefik:
...
[acme]
email = "your_email@your_domain"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"
This section is called acme
because ACME is the name of the protocol used to communicate with Let’s Encrypt to manage certificates. The Let’s Encrypt service requires registration with a valid email address, so in order to have Traefik generate certificates for our hosts, set the email
key to your email address. We then specify that we will store the information that we will receive from Let’s Encrypt in a JSON file called acme.json
. The entryPoint
key needs to point to the entry point handling port 443
, which in our case is the https
entry point.
The key onHostRule
dictates how Traefik should go about generating certificates. We want to fetch our certificates as soon as our containers with specified hostnames are created, and that’s what the onHostRule
setting will do.
The acme.httpChallenge
section allows us to specify how Let’s Encrypt can verify that the certificate should be generated. We’re configuring it to serve a file as part of the challenge through the http
entrypoint.
Finally, let’s configure the docker
provider by adding these lines to the file:
...
[docker]
domain = "your_domain"
watch = true
network = "web"
The docker
provider enables Traefik to act as a proxy in front of Docker containers. We’ve configured the provider to watch
for new containers on the web
network (that we’ll create soon) and expose them as subdomains of your_domain
.
At this point, traefik.toml
should have the following contents:
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.dashboard]
address = ":8080"
[entryPoints.dashboard.auth]
[entryPoints.dashboard.auth.basic]
users = ["admin:your_encrypted_password"]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[api]
entrypoint="dashboard"
[acme]
email = "your_email@your_domain"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"
[docker]
domain = "your_domain"
watch = true
network = "web"
Save the file and exit the editor. With all of this configuration in place, we can fire up Traefik.
Next, create a Docker network for the proxy to share with containers. The Docker network is necessary so that we can use it with applications that are run using Docker Compose. Let’s call this network web
.
- docker network create web
When the Traefik container starts, we will add it to this network. Then we can add additional containers to this network later for Traefik to proxy to.
Next, create an empty file which will hold our Let’s Encrypt information. We’ll share this into the container so Traefik can use it:
- touch acme.json
Traefik will only be able to use this file if the root user inside of the container has unique read and write access to it. To do this, lock down the permissions on acme.json
so that only the owner of the file has read and write permission.
- chmod 600 acme.json
Once the file gets passed to Docker, the owner will automatically change to the root user inside the container.
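You can double-check the result of the two file commands above by printing the file’s octal mode; this assumes GNU stat (the -c flag), which Debian provides:

```shell
# Create and lock down the ACME storage file, then verify the mode
touch acme.json
chmod 600 acme.json
stat -c %a acme.json   # prints 600: owner read/write only
```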
Finally, create the Traefik container with this command:
- docker run -d \
- -v /var/run/docker.sock:/var/run/docker.sock \
- -v $PWD/traefik.toml:/traefik.toml \
- -v $PWD/acme.json:/acme.json \
- -p 80:80 \
- -p 443:443 \
- -l traefik.frontend.rule=Host:monitor.your_domain \
- -l traefik.port=8080 \
- --network web \
- --name traefik \
- traefik:1.7.6-alpine
The command is a little long so let’s break it down.
We use the -d
flag to run the container in the background as a daemon. We then share our docker.sock
file into the container so that the Traefik process can listen for changes to containers. We also share the traefik.toml
configuration file and the acme.json
file we created into the container.
Next, we map ports 80
and 443
of our Docker host to the same ports in the Traefik container so Traefik receives all HTTP and HTTPS traffic to the server.
Then we set up two Docker labels that tell Traefik to direct traffic to the hostname monitor.your_domain
to port 8080
within the Traefik container, exposing the monitoring dashboard.
We set the network of the container to web
, and we name the container traefik
.
Finally, we use the traefik:1.7.6-alpine
image for this container, because it’s small.
A Docker image’s ENTRYPOINT
is a command that always runs when a container is created from the image. In this case, the command is the traefik
binary within the container. You can pass additional arguments to that command when you launch the container, but we’ve configured all of our settings in the traefik.toml
file.
With the container started, you now have a dashboard you can access to see the health of your containers. You can also use this dashboard to visualize the frontends and backends that Traefik has registered. Access the monitoring dashboard by pointing your browser to https://monitor.your_domain
. You will be prompted for your username and password, which are admin and the password you configured in Step 1.
Once logged in, you’ll see an interface similar to this:
There isn’t much to see just yet, but leave this window open, and you will see the contents change as you add containers for Traefik to work with.
We now have our Traefik proxy running, configured to work with Docker, and ready to monitor other Docker containers. Let’s start some containers for Traefik to act as a proxy for.
With the Traefik container running, you’re ready to run applications behind it. Let’s launch two containers behind Traefik: a WordPress blog and Adminer for managing the database.
We’ll manage both of these applications with Docker Compose using a docker-compose.yml
file. Open the docker-compose.yml
file in your editor:
- nano docker-compose.yml
Add the following lines to the file to specify the version and the networks we’ll use:
version: "3"
networks:
web:
external: true
internal:
external: false
We use Docker Compose version 3
because it’s the newest major version of the Compose file format.
For Traefik to recognize our applications, they must be part of the same network, and since we created the network manually, we pull it in by specifying the network name of web
and setting external
to true
. Then we define another network so that we can connect our exposed containers to a database container that we won’t expose through Traefik. We’ll call this network internal
.
Next, we’ll define each of our services
, one at a time. Let’s start with the blog
container, which we’ll base on the official WordPress image. Add this configuration to the file:
version: "3"
...
services:
blog:
image: wordpress:4.9.8-apache
environment:
WORDPRESS_DB_PASSWORD:
labels:
- traefik.backend=blog
- traefik.frontend.rule=Host:blog.your_domain
- traefik.docker.network=web
- traefik.port=80
networks:
- internal
- web
depends_on:
- mysql
The environment
key lets you specify environment variables that will be set inside of the container. By not setting a value for WORDPRESS_DB_PASSWORD
, we’re telling Docker Compose to get the value from our shell and pass it through when we create the container. We will define this environment variable in our shell before starting the containers. This way we don’t hard-code passwords into the configuration file.
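As a quick illustration of this pass-through (a generic shell sketch, not specific to Compose): a variable exported in your shell is inherited by any child process, which is exactly how docker-compose picks the value up.

```shell
# Export once in the parent shell (placeholder value)
export WORDPRESS_DB_PASSWORD=secure_database_password
# Any child process inherits it, just as docker-compose does
sh -c 'echo "WORDPRESS_DB_PASSWORD is ${#WORDPRESS_DB_PASSWORD} characters long"'
```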
The labels
section is where you specify configuration values for Traefik. Docker labels don’t do anything by themselves, but Traefik reads these so it knows how to treat containers. Here’s what each of these labels does:
- traefik.backend specifies the name of the backend service in Traefik (which points to the actual blog container).
- traefik.frontend.rule=Host:blog.your_domain tells Traefik to examine the host requested, and if it matches the pattern of blog.your_domain, it should route the traffic to the blog container.
- traefik.docker.network=web specifies which network Traefik should look under to find the internal IP for this container. Since our Traefik container has access to all of the Docker info, it would potentially take the IP for the internal network if we didn’t specify this.
- traefik.port specifies the exposed port that Traefik should use to route traffic to this container.

With this configuration, all traffic sent to our Docker host’s port 80 will be routed to the blog container.
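Conceptually, the frontend rule boils down to matching the request’s Host header against the hostnames declared in the labels. A toy sketch in Python (this is an illustration of the idea, not Traefik’s actual matcher, and the hostnames are placeholders):

```python
# Toy illustration of Traefik-style Host rule routing
routes = {
    "blog.your_domain": "blog",
    "db-admin.your_domain": "adminer",
    "monitor.your_domain": "traefik dashboard",
}

def route(host_header: str) -> str:
    """Return the backend name for a request's Host header, or '404'."""
    # Normalize case and strip any :port suffix before matching
    return routes.get(host_header.lower().split(":")[0], "404")

assert route("blog.your_domain") == "blog"
assert route("DB-ADMIN.your_domain:443") == "adminer"
assert route("unknown.your_domain") == "404"
```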
We assign this container to two different networks so that Traefik can find it via the web
network and it can communicate with the database container through the internal
network.
Lastly, the depends_on
key tells Docker Compose that this container needs to start after its dependencies are running. Since WordPress needs a database to run, we must run our mysql
container before starting our blog
container.
Next, configure the MySQL service by adding this configuration to your file:
services:
...
mysql:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD:
networks:
- internal
labels:
- traefik.enable=false
We’re using the official MySQL 5.7 image for this container. You’ll notice that we’re once again using an environment
item without a value. The MYSQL_ROOT_PASSWORD
and WORDPRESS_DB_PASSWORD
variables will need to be set to the same value to make sure that our WordPress container can communicate with MySQL. We don’t want to expose the mysql
container to Traefik or the outside world, so we’re only assigning this container to the internal
network. Since Traefik has access to the Docker socket, the process will still expose a frontend for the mysql
container by default, so we’ll add the label traefik.enable=false
to specify that Traefik should not expose this container.
Finally, add this configuration to define the adminer
container:
services:
...
adminer:
image: adminer:4.6.3-standalone
labels:
- traefik.backend=adminer
- traefik.frontend.rule=Host:db-admin.your_domain
- traefik.docker.network=web
- traefik.port=8080
networks:
- internal
- web
depends_on:
- mysql
This container is based on the official Adminer image. The network
and depends_on
configuration for this container exactly matches what we’re using for the blog
container.
However, since we’re directing all of the traffic to port 80
on our Docker host directly to the blog
container, we need to configure this container differently in order for traffic to make it to our adminer
container. The line traefik.frontend.rule=Host:db-admin.your_domain
tells Traefik to examine the host requested. If it matches the pattern of db-admin.your_domain
, Traefik will route the traffic to the adminer
container.
At this point, docker-compose.yml
should have the following contents:
version: "3"
networks:
web:
external: true
internal:
external: false
services:
blog:
image: wordpress:4.9.8-apache
environment:
WORDPRESS_DB_PASSWORD:
labels:
- traefik.backend=blog
- traefik.frontend.rule=Host:blog.your_domain
- traefik.docker.network=web
- traefik.port=80
networks:
- internal
- web
depends_on:
- mysql
mysql:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD:
networks:
- internal
labels:
- traefik.enable=false
adminer:
image: adminer:4.6.3-standalone
labels:
- traefik.backend=adminer
- traefik.frontend.rule=Host:db-admin.your_domain
- traefik.docker.network=web
- traefik.port=8080
networks:
- internal
- web
depends_on:
- mysql
Save the file and exit the text editor.
Next, set values in your shell for the WORDPRESS_DB_PASSWORD
and MYSQL_ROOT_PASSWORD
variables before you start your containers:
- export WORDPRESS_DB_PASSWORD=secure_database_password
- export MYSQL_ROOT_PASSWORD=secure_database_password
Substitute secure_database_password
with your desired database password. Remember to use the same password for both WORDPRESS_DB_PASSWORD
and MYSQL_ROOT_PASSWORD
.
With these variables set, run the containers using docker-compose
:
- docker-compose up -d
Now take another look at the Traefik admin dashboard. You’ll see that there is now a backend
and a frontend
for the two exposed servers:
Navigate to blog.your_domain
, substituting your_domain
with your domain. You’ll be redirected to a TLS connection and can now complete the WordPress setup:
Now access Adminer by visiting db-admin.your_domain
in your browser, again substituting your_domain
with your domain. The mysql
container isn’t exposed to the outside world, but the adminer
container has access to it through the internal
Docker network that they share using the mysql
container name as a host name.
On the Adminer login screen, use the username root, use mysql
for the server, and use the value you set for MYSQL_ROOT_PASSWORD
for the password. Once logged in, you’ll see the Adminer user interface:
Both sites are now working, and you can use the dashboard at monitor.your_domain
to keep an eye on your applications.
In this tutorial, you configured Traefik to proxy requests to other applications in Docker containers.
Traefik’s declarative configuration at the application container level makes it easy to configure more services, and there’s no need to restart the traefik
container when you add new applications to proxy traffic to since Traefik notices the changes immediately through the Docker socket file it’s monitoring.
To learn more about what you can do with Traefik, head over to the official Traefik documentation. If you’d like to explore Docker containers further, check out How To Set Up a Private Docker Registry on Ubuntu 18.04 or How To Secure a Containerized Node.js Application with Nginx, Let’s Encrypt, and Docker Compose.
]]>I am unable to create any size Volumes in NYC3. I keep getting error-slider:
“The backend responded with an error”
Screenshot: https://www.entangledweb.com/do-screenshots/Error-slider-when-creating-NYC3-Volume.png
When I try to create a NYC3 Droplet + Volumes, I get a bunch of error-sliders with these messages:
0 v. 1 o. 2 l. 3 u. 4 m. 5 e. . 7 d. etc.
Screenshot: https://www.entangledweb.com/do-screenshots/Error-slider-when-creating-NYC3-Droplet-with-Volume.png
The sliders stay on long enough to take a screenshot then disappear … as I type this question, it appears the error-sliders are attempting to write the message.
So far, I am not experiencing this create Volumes problem in any other Region. I am able to detach NYC3 Volumes and attach them to other Droplets … I am also able to resize Volumes … only the Create NYC3 Volumes fails.
Has anyone had the same experience? If so, how did you fix the issue? Support seems to suggest this is my issue, when in fact it is a DO backend issue on my account for creating NYC3 Volumes … like some Volume create process is stuck.
Appreciate any help you can provide to Support or me to get this problem resolved.
Thank you.
Craig
]]>When, via the terminal, I run droplet-a$ ping <droplet-b>
I am not receiving packets. What am I missing here?
usermod -d /var/www/ unstoppz
After that I couldn’t log in to SFTP with that user anymore. How can I fix that? Thanks, and sorry for my bad English.
]]>root@ohmygodzilla:~# systemctl status mariadb.service
● mariadb.service - MariaDB database server
Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2018-10-31 02:25:05 UTC; 3min 32s ago
Process: 9847 ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION (code=exited, status=1/FAILURE)
Process: 9758 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=/usr/bin/galera_recovery; [ $? -eq 0 ] && s
Process: 9754 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS)
Process: 9751 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysqld (code=exited, status=0/SUCCESS)
Main PID: 9847 (code=exited, status=1/FAILURE)
Status: "MariaDB server is down"
Oct 31 02:25:02 ohmygodzilla systemd[1]: Starting MariaDB database server…
Oct 31 02:25:02 ohmygodzilla mysqld[9847]: 2018-10-31 2:25:02 140680916444096 [Note] /usr/sbin/mysqld (mysqld 10.1.26-MariaDB-0+deb9u1) s
Oct 31 02:25:05 ohmygodzilla systemd[1]: mariadb.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 02:25:05 ohmygodzilla systemd[1]: Failed to start MariaDB database server.
Oct 31 02:25:05 ohmygodzilla systemd[1]: mariadb.service: Unit entered failed state.
Oct 31 02:25:05 ohmygodzilla systemd[1]: mariadb.service: Failed with result 'exit-code'.
]]>Droplet IP: 188.166.87.227
]]>Under certain circumstances, my load balancers are partly losing connectivity with droplets. They end up being able to run their health checks only from one IP address, where normally with a healthy droplet, they use two. This means the droplet is being flagged as “unhealthy” and “down” when it is in fact up, and responding correctly. It is the load balancer that seems to be faulty.
Has anyone else seen this? Or, better, have an idea what to do about it? For me, load balancers are not proving stable enough for production use.
Once this has happened there seems to be no resolution, short of re-provisioning the entire load balancer, which of course makes them a bit pointless. Removing and adding a droplet again has no effect, they remain 50% unhealthy (aka “down”).
See droplet: aps1.staging.turalt.com as an example. It is attached to a load balancer, and is correctly responding to health checks, e.g.:
10.137.232.60 - - [26/Oct/2018:14:41:05 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
10.137.232.60 - - [26/Oct/2018:14:41:12 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
10.137.232.60 - - [26/Oct/2018:14:41:12 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
10.137.232.60 - - [26/Oct/2018:14:41:15 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
10.137.232.60 - - [26/Oct/2018:14:41:22 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
10.137.232.60 - - [26/Oct/2018:14:41:22 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
On aps2.staging.turalt.com, by contrast the logs are:
10.137.240.198 - - [26/Oct/2018:14:41:56 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
10.137.240.198 - - [26/Oct/2018:14:41:56 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
10.137.232.60 - - [26/Oct/2018:14:41:57 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
10.137.232.60 - - [26/Oct/2018:14:41:57 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
10.137.240.198 - - [26/Oct/2018:14:41:57 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
10.137.232.60 - - [26/Oct/2018:14:41:58 +0000] "GET /health HTTP/1.0" 200 71 "-" "-"
I am using the API to update software by temporarily removing a droplet and then adding it again, so that might be a factor, but I have no evidence for it.
This isn’t happening with all droplets, but I haven’t found a pattern yet.
]]>// Imports assumed for this snippet (AWS SDK for PHP, Flysystem v1 S3 adapter, Symfony HttpFoundation):
use Aws\S3\S3Client;
use League\Flysystem\AwsS3v3\AwsS3Adapter;
use League\Flysystem\Filesystem;
use Symfony\Component\HttpFoundation\StreamedResponse;

$client = new S3Client([
'credentials' => [
'key' => '',
'secret' => '/pn8IBKqObu7TAM',
],
'region' => '',
'version' => 'latest',
'endpoint' => 'https://ams3.digitaloceanspaces.com',
]);
$adapter = new AwsS3Adapter($client, '');
$filename = 'toto.png';
$fs = new Filesystem($adapter);
$downloadable_file_stream = $fs->readStream('sounds/toto.png');
$downloadable_file_stream_contents = stream_get_contents($downloadable_file_stream);
$response = new StreamedResponse();
$response->setCallback(function () use ($downloadable_file_stream_contents) {
echo $downloadable_file_stream_contents;
flush();
});
return $response->send();
Thank you very much for your help. Guillaume
]]>WordPress is the most popular CMS (content management system) on the internet. It allows you to easily set up flexible blogs and websites on top of a MySQL backend with PHP processing. WordPress has seen incredible adoption and is a great choice for getting a website up and running quickly. After setup, almost all administration can be done through the web frontend.
In this guide, we’ll focus on getting a WordPress instance set up on a LEMP stack (Linux, Nginx, MySQL, and PHP) on a Debian 9 server.
In order to complete this tutorial, you will need access to a Debian 9 server.
You will need to perform the following tasks before you can start this guide:
A sudo
user on your server: We will be completing the steps in this guide using a non-root user with sudo
privileges. You can create a user with sudo
privileges by following our Debian 9 initial server setup guide.
When you have finished the setup steps, log in to your server as your sudo
user and continue below.
The first step that we will take is a preparatory one. WordPress uses MySQL to manage and store site and user information. We have MySQL installed already, but we need to make a database and a user for WordPress to use.
To get started, log into the MySQL root (administrative) account. If MySQL is configured to use the auth_socket
authentication plugin (the default), you can log into the MySQL administrative account using sudo
:
- sudo mysql
If you changed the authentication method to use a password for the MySQL root account, use the following format instead:
- mysql -u root -p
You will be prompted for the password you set for the MySQL root account.
First, we can create a separate database that WordPress can control. You can call this whatever you would like, but we will be using wordpress
in this guide to keep it simple. You can create the database for WordPress by typing:
- CREATE DATABASE wordpress DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;
Note: Every MySQL statement must end in a semi-colon (;). Check to make sure this is present if you are running into any issues.
Next, we are going to create a separate MySQL user account that we will use exclusively to operate on our new database. Creating one-function databases and accounts is a good idea from a management and security standpoint. We will use the name wordpressuser
in this guide. Feel free to change this if you’d like.
We are going to create this account, set a password, and grant access to the database we created. We can do this by typing the following command. Remember to choose a strong password here for your database user:
- GRANT ALL ON wordpress.* TO 'wordpressuser'@'localhost' IDENTIFIED BY 'password';
You now have a database and user account, each made specifically for WordPress. We need to flush the privileges so that the current instance of MySQL knows about the recent changes we’ve made:
- FLUSH PRIVILEGES;
Exit out of MySQL by typing:
- EXIT;
The MySQL session will exit, returning you to the regular Linux shell.
When setting up our LEMP stack, we only required a very minimal set of extensions in order to get PHP to communicate with MySQL. WordPress and many of its plugins leverage additional PHP extensions.
We can download and install some of the most popular PHP extensions for use with WordPress by typing:
- sudo apt update
- sudo apt install php-curl php-gd php-intl php-mbstring php-soap php-xml php-xmlrpc php-zip
Note: Each WordPress plugin has its own set of requirements. Some may require additional PHP packages to be installed. Check your plugin documentation to discover its PHP requirements. If they are available, they can be installed with apt
as demonstrated above.
When you are finished installing the extensions, restart the PHP-FPM process so that the running PHP processor can leverage the newly installed features:
- sudo systemctl restart php7.0-fpm
We now have all of the necessary PHP extensions installed on the server.
Next, we will be making a few minor adjustments to our Nginx server block files. Based on the prerequisite tutorials, you should have a configuration file for your site in the /etc/nginx/sites-available/
directory configured to respond to your server’s domain name and protected by a TLS/SSL certificate. We’ll use /etc/nginx/sites-available/your_domain
as an example here, but you should substitute the path to your configuration file where appropriate.
Additionally, we will use /var/www/your_domain
as the root directory of our WordPress install. You should use the web root specified in your own configuration.
Note: It’s possible you are using the /etc/nginx/sites-available/default
default configuration (with /var/www/html
as your web root). This is fine to use if you’re only going to host one website on this server. If not, it’s best to split the necessary configuration into logical chunks, one file per site.
Open your site’s Nginx configuration file with sudo
privileges to begin:
- sudo nano /etc/nginx/sites-available/your_domain
We need to add a few location
directives within our main server
block. After adding SSL certificates your config may have two server
blocks. If so, find the one that contains root /var/www/your_domain
and your other location
directives and implement your changes there.
Start by creating exact-matching location blocks for requests to /favicon.ico
and /robots.txt
, both of which we do not want to log requests for.
We will use a regular expression location to match any requests for static files. We will again turn off the logging for these requests and will mark them as highly cacheable since these are typically expensive resources to serve. You can adjust this static files list to contain any other file extensions your site may use:
server {
. . .
location = /favicon.ico { log_not_found off; access_log off; }
location = /robots.txt { log_not_found off; access_log off; allow all; }
location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
expires max;
log_not_found off;
}
. . .
}
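The `~*` modifier makes the extension match case-insensitive. As a rough local illustration of which request paths the block would catch (grep's `-iE` approximates Nginx's regex matching here; the sample paths are made up):

```shell
# Sample request paths; only those ending in a listed extension should match.
printf '%s\n' /logo.PNG /app.js /index.php /style.css > /tmp/paths.txt

# Case-insensitive extension match, anchored at the end of the path.
# /index.php does not match, so it would fall through to PHP handling.
grep -iE '\.(css|gif|ico|jpeg|jpg|js|png)$' /tmp/paths.txt
```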
Inside of the existing location /
block, we need to adjust the try_files
list so that instead of returning a 404 error as the default option, control is passed to the index.php
file with the request arguments.
This should look something like this:
server {
. . .
location / {
#try_files $uri $uri/ =404;
try_files $uri $uri/ /index.php$is_args$args;
}
. . .
}
When you are finished, save and close the file.
Now, we can check our configuration for syntax errors by typing:
- sudo nginx -t
If no errors were reported, reload Nginx by typing:
- sudo systemctl reload nginx
Next, we will download and set up WordPress itself.
Now that our server software is configured, we can download and set up WordPress. For security reasons in particular, it is always recommended to get the latest version of WordPress from their site.
Change into a writable directory and then download the compressed release by typing:
- cd /tmp
- curl -LO https://wordpress.org/latest.tar.gz
Extract the compressed file to create the WordPress directory structure:
- tar xzvf latest.tar.gz
We will be moving these files into our document root momentarily. Before we do that, we can copy over the sample configuration file to the filename that WordPress actually reads:
- cp /tmp/wordpress/wp-config-sample.php /tmp/wordpress/wp-config.php
Now, we can copy the entire contents of the directory into our document root. We are using the -a
flag to make sure our permissions are maintained. We are using a dot at the end of our source directory to indicate that everything within the directory should be copied, including any hidden files:
- sudo cp -a /tmp/wordpress/. /var/www/your_domain
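If the trailing dot is unfamiliar, here is a small self-contained demonstration using scratch directories under /tmp (not part of the install): copying `source/.` carries hidden files such as `.htaccess` along, which a glob like `source/*` would miss.

```shell
mkdir -p /tmp/srcdemo /tmp/destdemo
touch /tmp/srcdemo/.htaccess /tmp/srcdemo/index.php

# The trailing dot copies everything inside srcdemo, hidden files included,
# while -a preserves permissions and timestamps.
cp -a /tmp/srcdemo/. /tmp/destdemo
ls -A /tmp/destdemo
```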
Now that our files are in place, we’ll assign ownership of them to the www-data
user and group. This is the user and group that Nginx runs as, and Nginx will need to be able to read and write WordPress files in order to serve the website and perform automatic updates.
- sudo chown -R www-data:www-data /var/www/your_domain
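Beyond ownership, some administrators also normalize permissions so directories are traversable (755) and files are not executable (644). This is an optional hardening step, not part of the original setup; the sketch below demonstrates the find pattern on a scratch copy under /tmp — on a real server you would run it with sudo against /var/www/your_domain.

```shell
# Scratch directory standing in for the WordPress document root.
mkdir -p /tmp/wp-demo/wp-content
touch /tmp/wp-demo/index.php /tmp/wp-demo/wp-content/plugin.php

# Directories get 755 (rwxr-xr-x); files get 644 (rw-r--r--).
find /tmp/wp-demo -type d -exec chmod 755 {} \;
find /tmp/wp-demo -type f -exec chmod 644 {} \;
stat -c '%a %n' /tmp/wp-demo/index.php
```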
Our files are now in our server’s document root and have the correct ownership, but we still need to complete some more configuration.
Next, we need to make some changes to the main WordPress configuration file.
When we open the file, our first order of business will be to adjust some secret keys to provide some security for our installation. WordPress provides a secure generator for these values so that you do not have to try to come up with good values on your own. These are only used internally, so it won’t hurt usability to have complex, secure values here.
To grab secure values from the WordPress secret key generator, type:
- curl -s https://api.wordpress.org/secret-key/1.1/salt/
You will get back unique values that look something like this:
Warning: It is important that you request unique values each time. Do NOT copy the values shown below!
Outputdefine('AUTH_KEY', '1jl/vqfs<XhdXoAPz9 DO NOT COPY THESE VALUES c_j{iwqD^<+c9.k<J@4H');
define('SECURE_AUTH_KEY', 'E2N-h2]Dcvp+aS/p7X DO NOT COPY THESE VALUES {Ka(f;rv?Pxf})CgLi-3');
define('LOGGED_IN_KEY', 'W(50,{W^,OPB%PB<JF DO NOT COPY THESE VALUES 2;y&,2m%3]R6DUth[;88');
define('NONCE_KEY', 'll,4UC)7ua+8<!4VM+ DO NOT COPY THESE VALUES #`DXF+[$atzM7 o^-C7g');
define('AUTH_SALT', 'koMrurzOA+|L_lG}kf DO NOT COPY THESE VALUES 07VC*Lj*lD&?3w!BT#-');
define('SECURE_AUTH_SALT', 'p32*p,]z%LZ+pAu:VY DO NOT COPY THESE VALUES C-?y+K0DK_+F|0h{!_xY');
define('LOGGED_IN_SALT', 'i^/G2W7!-1H2OQ+t$3 DO NOT COPY THESE VALUES t6**bRVFSD[Hi])-qS`|');
define('NONCE_SALT', 'Q6]U:K?j4L%Z]}h^q7 DO NOT COPY THESE VALUES 1% ^qUswWgn+6&xqHN&%');
These are configuration lines that we can paste directly in our configuration file to set secure keys. Copy the output you received now.
Now, open the WordPress configuration file:
- sudo nano /var/www/your_domain/wp-config.php
Find the section that contains the dummy values for those settings. It will look something like this:
. . .
define('AUTH_KEY', 'put your unique phrase here');
define('SECURE_AUTH_KEY', 'put your unique phrase here');
define('LOGGED_IN_KEY', 'put your unique phrase here');
define('NONCE_KEY', 'put your unique phrase here');
define('AUTH_SALT', 'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT', 'put your unique phrase here');
define('NONCE_SALT', 'put your unique phrase here');
. . .
Delete those lines and paste in the values you copied from the command line:
. . .
define('AUTH_KEY', 'VALUES COPIED FROM THE COMMAND LINE');
define('SECURE_AUTH_KEY', 'VALUES COPIED FROM THE COMMAND LINE');
define('LOGGED_IN_KEY', 'VALUES COPIED FROM THE COMMAND LINE');
define('NONCE_KEY', 'VALUES COPIED FROM THE COMMAND LINE');
define('AUTH_SALT', 'VALUES COPIED FROM THE COMMAND LINE');
define('SECURE_AUTH_SALT', 'VALUES COPIED FROM THE COMMAND LINE');
define('LOGGED_IN_SALT', 'VALUES COPIED FROM THE COMMAND LINE');
define('NONCE_SALT', 'VALUES COPIED FROM THE COMMAND LINE');
. . .
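If you would rather not paste the keys by hand, the swap can be scripted with sed. The sketch below works on a two-line stand-in file so it can be shown safely; on the server you would first save the generator's output (for example, `curl -s https://api.wordpress.org/secret-key/1.1/salt/ > /tmp/salts`) and target your real wp-config.php instead.

```shell
# Stand-in wp-config.php with two placeholder lines.
cat > /tmp/wp-config-demo.php <<'EOF'
<?php
define('AUTH_KEY', 'put your unique phrase here');
define('SECURE_AUTH_KEY', 'put your unique phrase here');
EOF

# Stand-in for the generator output (do not reuse these example values).
cat > /tmp/salts <<'EOF'
define('AUTH_KEY', 'example-generated-value-1');
define('SECURE_AUTH_KEY', 'example-generated-value-2');
EOF

# Drop the placeholder lines, then read the generated lines in after the <?php tag.
sed -i "/put your unique phrase here/d" /tmp/wp-config-demo.php
sed -i "/<?php/r /tmp/salts" /tmp/wp-config-demo.php
cat /tmp/wp-config-demo.php
```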
Next, we need to modify some of the database connection settings at the beginning of the file. You need to adjust the database name, the database user, and the associated password that we configured within MySQL.
The other change we need to make is to set the method that WordPress should use to write to the filesystem. Since we’ve given the web server permission to write where it needs to, we can explicitly set the filesystem method to “direct”. Failure to set this with our current settings would result in WordPress prompting for FTP credentials when we perform some actions. This setting can be added below the database connection settings, or anywhere else in the file:
. . .
define('DB_NAME', 'wordpress');
/** MySQL database username */
define('DB_USER', 'wordpressuser');
/** MySQL database password */
define('DB_PASSWORD', 'password');
. . .
define('FS_METHOD', 'direct');
Save and close the file when you are finished.
Now that the server configuration is complete, we can finish up the installation through the web interface.
In your web browser, navigate to your server’s domain name or public IP address:
http://server_domain_or_IP
Select the language you would like to use:
Next, you will come to the main setup page.
Select a name for your WordPress site and choose a username (it is recommended not to choose something like “admin” for security purposes). A strong password is generated automatically. Save this password or select an alternative strong password.
Enter your email address and select whether you want to discourage search engines from indexing your site:
When you click ahead, you will be taken to a page that prompts you to log in:
Once you log in, you will be taken to the WordPress administration dashboard:
WordPress should be installed and ready to use! Some common next steps are to choose the permalinks setting for your posts (can be found in Settings > Permalinks
) or to select a new theme (in Appearance > Themes
). If this is your first time using WordPress, explore the interface a bit to get acquainted with your new CMS.
The LEMP software stack is a group of software that can be used to serve dynamic web pages and web applications. The acronym describes a Linux operating system with an Nginx web server; the backend data is stored in a MySQL database, and the dynamic processing is handled by PHP.
In this guide, you’ll install a LEMP stack on a Debian server using the packages provided by the operating system.
To complete this guide, you will need a Debian 9 server with a non-root user with sudo
privileges. You can set up a user with these privileges in our Initial Server Setup with Debian 9 guide.
In order to display web pages to our site visitors, we are going to employ Nginx, a modern, efficient web server.
All of the software we will be using for this procedure will come directly from Debian’s default package repositories. This means we can use the apt
package management suite to complete the installation.
Since this is our first time using apt
for this session, we should start off by updating our local package index. We can then install the server:
- sudo apt update
- sudo apt install nginx
On Debian 9, Nginx is configured to start running upon installation.
If you have the ufw
firewall running, you will need to allow connections to Nginx. You should enable the most restrictive profile that will still allow the traffic you want. Since we haven’t configured SSL for our server yet, in this guide, we will only need to allow traffic on port 80
.
You can enable this by typing:
- sudo ufw allow 'Nginx HTTP'
You can verify the change by typing:
- sudo ufw status
You should see HTTP traffic allowed in the displayed output:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
Now, test if the server is up and running by accessing your server’s domain name or public IP address in your web browser. If you do not have a domain name pointed at your server and you do not know your server’s public IP address, you can find it by typing one of the following into your terminal:
- ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
This will print out a few IP addresses. You can try each of them in turn in your web browser.
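As a local illustration of what that pipeline does (using canned `ip addr` output rather than a live interface), note that `grep inet` also matches `inet6` lines, which is why more than one address can appear:

```shell
# Canned output resembling `ip addr show eth0`.
printf '    inet 203.0.113.5/24 brd 203.0.113.255 scope global eth0\n    inet6 fe80::1/64 scope link\n' > /tmp/ipaddr.txt

# Keep the address field and strip the /prefix-length suffix.
grep inet /tmp/ipaddr.txt | awk '{ print $2; }' | sed 's/\/.*$//'
```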
Type one of the addresses that you receive in your web browser. It should take you to Nginx’s default landing page:
http://your_domain_or_IP
If you see the default Nginx landing page, you have successfully installed Nginx.
Now that we have a web server, we need to install MySQL, a database management system, to store and manage the data for our site.
You can install this easily by typing:
- sudo apt install mysql-server
Note: In Debian 9 a community fork of the MySQL project – MariaDB – is packaged as the default MySQL variant. While MariaDB works well in most cases, if you need features found only in Oracle’s MySQL, you can install and use packages from a repository maintained by the MySQL developers. To install the official MySQL server, use our tutorial How To Install the Latest MySQL on Debian 9.
The MySQL database software is now installed, but its configuration is not complete.
To secure the installation, we can run a security script that will ask whether we want to modify some insecure defaults. Begin the script by typing:
- sudo mysql_secure_installation
You will be asked to enter the password for the MySQL root account. We haven’t set this yet, so just hit ENTER
. Then you’ll be asked if you want to set that password. You should type y
then set a root password.
For the rest of the questions the script asks, you should press y
, followed by the ENTER
key at each prompt. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MySQL immediately respects the changes you have made.
At this point, your database system is now set up and secured. Let’s set up PHP.
We now have Nginx installed to serve our pages and MySQL installed to store and manage our data. However, we still don’t have anything that can generate dynamic content. That’s where PHP comes in.
Since Nginx does not contain native PHP processing like some other web servers, we will need to install php-fpm
, which stands for “FastCGI Process Manager”. We will tell Nginx to pass PHP requests to this software for processing. We’ll also install an additional helper package that will allow PHP to communicate with our MySQL database backend. The installation will pull in the necessary PHP core files to make that work.
Then install the php-fpm
and php-mysql
packages:
- sudo apt install php-fpm php-mysql
We now have our PHP components installed. Next we’ll configure Nginx to use them.
Now we have all of the required components installed. The only configuration change we still need is to tell Nginx to use our PHP processor for dynamic content.
We do this on the server block level (server blocks are similar to Apache’s virtual hosts). We’re going to leave the default Nginx configuration alone and instead create a new configuration file and new web root directory to hold our PHP files. We’ll name the configuration file and the directory after the domain name or hostname that the server should respond to.
First, create a new directory in /var/www
to hold the PHP site:
- sudo mkdir /var/www/your_domain
Then, open a new configuration file in Nginx’s sites-available
directory:
- sudo nano /etc/nginx/sites-available/your_domain
This will create a new blank file. Paste in the following bare-bones configuration:
server {
listen 80;
listen [::]:80;
root /var/www/your_domain;
index index.php index.html index.htm;
server_name your_domain;
location / {
try_files $uri $uri/ =404;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
}
}
This is a very basic configuration that listens on port 80 and serves files from the web root we just created. It will only respond to requests to the name provided after server_name
, and any files ending in .php
will be processed by the php-fpm
process before Nginx sends the results to the user.
Save and close the file when you’re done customizing it.
Activate your configuration by linking to the config file from Nginx’s sites-enabled
directory:
- sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/
This will tell Nginx to use the configuration next time it is reloaded. First, test your configuration for syntax errors by typing:
- sudo nginx -t
If any errors are reported, go back and recheck your file before continuing.
When you are ready, reload Nginx to make the changes:
- sudo systemctl reload nginx
Next we’ll create a file in our new web root directory to test out PHP processing.
Your LEMP stack should now be completely set up. We can test it to validate that Nginx can correctly hand .php
files off to our PHP processor.
We can do this by creating a test PHP file in our document root. Open a new file called info.php
within your document root in your text editor:
- sudo nano /var/www/your_domain/info.php
Type or paste the following lines into the new file. This is valid PHP code that will return information about our server:
<?php
phpinfo();
?>
When you are finished, save and close the file.
Now, you can visit this page in your web browser by visiting your server’s domain name or public IP address followed by /info.php
:
http://your_domain/info.php
You should see a web page that has been generated by PHP with information about your server:
If you see a page that looks like this, you’ve set up PHP processing with Nginx successfully.
After verifying that Nginx renders the page correctly, it’s best to remove the file you created as it can actually give unauthorized users some hints about your configuration that may help them try to break in.
For now, remove the file by typing:
- sudo rm /var/www/your_domain/info.php
You can always regenerate this file if you need it later.
You should now have a LEMP stack configured on your Debian server. This gives you a very flexible foundation for serving web content to your visitors.
R is an open-source programming language that specializes in statistical computing and graphics. Supported by the R Foundation for Statistical Computing, it is widely used for developing statistical software and performing data analysis. An increasingly popular and extensible language with an active community, R offers many user-generated packages for specific areas of study, which makes it applicable to many fields.
In this tutorial, we will install R and show how to add packages from the official Comprehensive R Archive Network (CRAN).
To follow along with this tutorial, you will need a Debian 9 server with a non-root user with sudo
privileges. To learn how to achieve this setup, follow our Debian 9 initial server setup guide.
Once these prerequisites are in place, you’re ready to begin.
Because R is a fast-moving project, the latest stable version isn’t always available from Debian’s repositories, so we’ll need to add the external repository maintained by CRAN. In order to do this, we’ll need to install some dependencies for the Debian 9 cloud image.
To perform network operations that manage and download certificates, we need to install dirmngr
so that we can add the external repository.
- sudo apt install dirmngr --install-recommends
To add a PPA reference to Debian, we’ll need to use the add-apt-repository
command. For installations where this command may not be available, you can add this utility to your system by installing software-properties-common
:
- sudo apt install software-properties-common
Finally, to ensure that we have HTTPS support for secure protocols, we’ll install the following tool:
- sudo apt install apt-transport-https
With these dependencies in place, we’re ready to install R.
For the most recent version of R, we’ll be installing from the CRAN repositories.
Note: CRAN maintains the repositories within their network, but not all external repositories are reliable. Be sure to install only from trusted sources.
Let’s first add the relevant GPG key.
- sudo apt-key adv --keyserver keys.gnupg.net --recv-key 'E19F5F87128899B192B1A2C2AD5F960A256A04AF'
When we run the command, we’ll receive the following output:
OutputExecuting: /tmp/apt-key-gpghome.k3UoM7WQGq/gpg.1.sh --keyserver keys.gnupg.net --recv-key E19F5F87128899B192B1A2C2AD5F960A256A04AF
gpg: key AD5F960A256A04AF: public key "Johannes Ranke (Wissenschaftlicher Berater) <johannes.ranke@jrwb.de>" imported
gpg: Total number processed: 1
gpg: imported: 1
Once we have the trusted key, we can add the repository. Note that if you’re not using Debian 9 (Stretch), you can look at the supported R Project Debian branches, named for each release.
- sudo add-apt-repository 'deb https://cloud.r-project.org/bin/linux/debian stretch-cran35/'
Now, we’ll need to run update
after this in order to include package manifests from the new repository.
- sudo apt update
Among the output that displays, you should identify lines similar to the following:
Output...
Get:6 https://cloud.r-project.org/bin/linux/debian stretch-cran35/ InRelease [4,371 B]
Get:7 https://cloud.r-project.org/bin/linux/debian stretch-cran35/ Packages [50.1 kB]
...
If the lines above appear in the output from the update
command, we’ve successfully added the repository. We can be sure we won’t accidentally install an older version.
At this point, we’re ready to install R with the following command.
- sudo apt install r-base
If prompted to confirm installation, press y
to continue.
As of the time of writing, the latest stable version of R from CRAN is 3.5.1, which is displayed when you start R.
Since we’re planning to install an example package for every user on the system, we’ll start R as root so that the libraries will be available to all users automatically. Alternatively, if you run the R
command without sudo
, a personal library can be set up for your user.
- sudo -i R
Output
R version 3.5.1 (2018-07-02) -- "Feather Spray"
Copyright (C) 2018 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)
...
Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.
>
This confirms that we’ve successfully installed R and entered its interactive shell.
Part of R’s strength is the abundance of add-on packages available. For demonstration purposes, we’ll install txtplot
, a library that outputs ASCII graphs, including scatter plots, line plots, density plots, acf plots, and bar charts:
- install.packages('txtplot')
Note: The following output shows where the package will be installed.
Output...
Installing package into ‘/usr/local/lib/R/site-library’
(as ‘lib’ is unspecified)
. . .
This site-wide path is available because we ran R as root. This is the correct location to make the package available to all users.
When the installation is complete, we can load txtplot
:
- library('txtplot')
If there are no error messages, the library has successfully loaded. Let’s put it into action with an example that demonstrates a basic plotting function with axis labels. The example data, supplied by R’s datasets
package, contains the speed of cars and the distance required to stop based on data from the 1920s:
- txtplot(cars[,1], cars[,2], xlab = 'speed', ylab = 'distance')
Output +----+-----------+------------+-----------+-----------+--+
120 + * +
| |
d 100 + * +
i | * * |
s 80 + * * +
t | * * * * |
a 60 + * * * * * +
n | * * * * * |
c 40 + * * * * * * * +
e | * * * * * * * |
20 + * * * * * +
| * * * |
0 +----+-----------+------------+-----------+-----------+--+
5 10 15 20 25
speed
If you are interested to learn more about txtplot
, use help(txtplot)
from within the R interpreter.
Any precompiled package can be installed from CRAN with install.packages()
. To learn more about what’s available, you can find a listing of official packages organized by name via the Available CRAN Packages By Name list.
To exit R, you can type q()
. Unless you want to save the workspace image, you can press n
.
With R successfully installed on your server, you may be interested in this guide on installing the RStudio Server to bring an IDE to the server-based deployment you just completed. You can also learn how to set up a Shiny server to convert your R code into interactive web pages.
For more information on how to install R packages by leveraging different tools, you can read about how to install directly from GitHub, BitBucket or other locations. This will allow you to take advantage of the very latest work from the active community.
Webmin is a modern web control panel for any Linux machine that allows you to administer your server through a simple interface. With Webmin, you can change settings for common packages on the fly.
In this tutorial, you’ll install and configure Webmin on your server and secure access to the interface with a valid certificate using Let’s Encrypt. You’ll then use Webmin to add new user accounts, and update all packages on your server from the dashboard.
To complete this tutorial, you will need:
First, we need to add the Webmin repository so that we can easily install and update Webmin using our package manager. We do this by adding the repository to the /etc/apt/sources.list
file.
Open the file in your editor:
- sudo nano /etc/apt/sources.list
Then add this line to the bottom of the file to add the new repository:
. . .
deb http://download.webmin.com/download/repository sarge contrib
Save the file and exit the editor.
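If you prefer a non-interactive approach, the repository line can be appended idempotently — that is, a second run will not duplicate it. Here is a sketch against a scratch copy of the file (on the server you would target /etc/apt/sources.list with sudo):

```shell
# Scratch stand-in for /etc/apt/sources.list.
: > /tmp/sources.list.demo
line='deb http://download.webmin.com/download/repository sarge contrib'

# Append only if the exact line is not already present; running it twice
# shows that the second invocation is a no-op.
grep -qxF "$line" /tmp/sources.list.demo || echo "$line" >> /tmp/sources.list.demo
grep -qxF "$line" /tmp/sources.list.demo || echo "$line" >> /tmp/sources.list.demo
cat /tmp/sources.list.demo
```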
Next, add the Webmin PGP key so that your system will trust the new repository:
- wget http://www.webmin.com/jcameron-key.asc
- sudo apt-key add jcameron-key.asc
Next, update the list of packages to include the Webmin repository:
- sudo apt update
Then install Webmin:
- sudo apt install webmin
Once the installation finishes, you’ll be presented with the following output:
OutputWebmin install complete. You can now login to
https://your_server_ip:10000 as root with your
root password, or as any user who can use `sudo`.
Please copy down this information, as you will need it for the next step.
Note: If you installed ufw
during the prerequisite step, you will need to run the command sudo ufw allow 10000
in order to allow Webmin through the firewall. For extra security, you may want to configure your firewall to only allow access to this port from certain IP ranges.
Let’s secure access to Webmin by adding a valid certificate.
Webmin is already configured to use HTTPS, but it uses a self-signed, untrusted certificate. Let’s replace it with a valid certificate from Let’s Encrypt.
Navigate to https://your_domain:10000
in your web browser, replacing your_domain
with the domain name you pointed at your server.
Note: When logging in for the first time, you will see an “Invalid SSL” error. This is because the server has generated a self-signed certificate. Allow the exception to continue so you can replace the self-signed certificate with one from Let’s Encrypt.
You’ll be presented with a login screen. Sign in with the non-root user you created while fulfilling the prerequisites for this tutorial.
Once you log in, the first screen you will see is the Webmin dashboard. Before you can apply a valid certificate, you have to set the server’s hostname. Look for the System hostname field and click on the link to the right, as shown in the following figure:
This will take you to the Hostname and DNS Client page. Locate the Hostname field, and enter your Fully-Qualified Domain Name into the field. Then press the Save button at the bottom of the page to apply the setting.
After you’ve set your hostname, click on Webmin on the left navigation bar, and then click on Webmin Configuration.
Then, select SSL Encryption from the list of icons, and then select the Let’s Encrypt tab. You’ll see a screen like the following figure:
Using this screen, you’ll tell Webmin how to obtain and renew your certificate. Let’s Encrypt certificates expire after 3 months, but we can instruct Webmin to automatically attempt to renew the Let’s Encrypt certificate every month. Let’s Encrypt looks for a verification file on our server, so we’ll configure Webmin to place the verification file inside the folder /var/www/html
, which is the folder that the Apache web server you configured in the prerequisites uses. Follow these steps to set up your certificate:
In the Hostnames for certificate field, enter your fully-qualified domain name.
For the Website root directory for validation file option, select the Other Directory button and enter /var/www/html
.
For the Months between automatic renewal section, deselect the Only renew manually option by typing 1
into the input box, and selecting the radio button to the left of the input box.
Click the Request Certificate button.
To use the new certificate, restart Webmin by clicking the back arrow in your browser, and clicking the Restart Webmin button. Wait around 30 seconds, and then reload the page and log in again. Your browser should now indicate that the certificate is valid.
You’ve now set up a secured working instance of Webmin. Let’s look at how to use it.
Webmin has many different modules that can control everything from the BIND DNS Server to something as simple as adding users to the system. Let’s look at how to create a new user, and then explore how to update the operating system using Webmin.
Let’s explore how to manage the users and groups on your server.
First, click the System tab, and then click the Users and Groups button. Then, from here, you can either add a user, manage a user, or add or manage a group.
Let’s create a new user called deploy which can be used for hosting web applications. To add a user, click Create a new user, which is located at the top of the users table. This displays the Create User screen, where you can supply the username, password, groups and other options. Follow these instructions to create the user:

- For the username, enter deploy.
- For the real name, enter a description such as Deployment user.

When creating a user, you can set options for password expiry, the user’s shell, and whether or not they are allowed a home directory.
Next, let’s look at how to install updates to our system.
Webmin lets you update all of your packages through its user interface. To update all of your packages, first, go to the Dashboard link, and then locate the Package updates field. If there are updates available, you’ll see a link that states the number of available updates, as shown in the following figure:
Click this link, and then press Update selected packages to start the update. You may be asked to reboot the server, which you can also do through the Webmin interface.
You now have a secured working instance of Webmin and you’ve used the interface to create a user and update packages. Webmin gives you access to many things you’d normally need to access through the console, and it organizes them in an intuitive way. For example, if you have Apache installed, you would find the configuration tab for it under Servers, and then Apache.
Explore the interface, or read the Official Webmin wiki to learn more about managing your system with Webmin.
Software version control systems enable you to keep track of your software at the source level. With versioning tools, you can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Git is one of the most popular version control systems currently available. Many projects’ files are maintained in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this tutorial, we’ll install and configure Git on a Debian 9 server. We will cover how to install the software in two different ways, each of which has its own benefits depending on your specific needs.
In order to complete this tutorial, you should have a non-root user with sudo privileges on a Debian 9 server. To learn how to achieve this setup, follow our Debian 9 initial server setup guide.
With your server and user set up, you are ready to begin.
Debian’s default repositories provide you with a fast method to install Git. Note that the version you install via these repositories may be older than the newest version currently available. If you need the latest release, consider moving to the next section of this tutorial to learn how to install and compile Git from source.
First, use the apt package management tools to update your local package index. With the update complete, you can download and install Git:
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command:
- git --version
Outputgit version 2.11.0
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
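If you script against Git, you may want to compare the reported version against a minimum. As a small sketch (parse_git_version is a hypothetical helper, not part of Git), assuming output shaped like the line above:

```python
def parse_git_version(output: str) -> tuple:
    """Turn a string like 'git version 2.11.0' into a comparable tuple."""
    version = output.strip().split()[-1]
    return tuple(int(part) for part in version.split(".")[:3])

# Comparing tuples element-wise gives a natural version comparison.
print(parse_git_version("git version 2.11.0") >= (2, 0, 0))  # True
```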
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and will give you some control over the options you include if you wish to customize.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages.
- sudo apt update
- sudo apt install make libssl-dev libghc-zlib-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
After you have installed the necessary dependencies, you can go ahead and get the version of Git you want by visiting the Git project’s mirror on GitHub, available via the following URL:
https://github.com/git/git
From here, be sure that you are on the master branch. Click on the Tags link and select your desired Git version. Avoid release candidate versions (marked as rc) unless you have a specific reason to use one, as they may be unstable.
Next, on the right side of the page, click on the Clone or download button, then right-click on Download ZIP and copy the link address that ends in .zip.
Back on your Debian 9 server, move into the /tmp directory to download temporary files:
- cd /tmp
From there, you can use the wget command to download the copied zip file link. We’ll specify a new name for the file, git.zip:
- wget https://github.com/git/git/archive/v2.18.0.zip -O git.zip
Unzip the file that you downloaded and move into the resulting directory by typing:
- unzip git.zip
- cd git-*
Now, you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
To ensure that the install was successful, you can type git --version; you should receive output that specifies the currently installed version of Git.
Now that you have Git installed, if you want to upgrade to a later version, you can clone the repository, and then build and install. To find the URL to use for the clone operation, navigate to the branch or tag that you want on the project’s GitHub page and then copy the clone URL on the right side:
At the time of writing, the relevant URL is:
https://github.com/git/git.git
Change to your home directory, and use git clone on the URL you just copied:
- cd ~
- git clone https://github.com/git/git.git
This will create a new directory within your current directory where you can rebuild the package and reinstall the newer version, just like you did above. This will overwrite your older version with the new version:
- cd git
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
With this complete, you can be sure that your version of Git is up to date.
Now that you have Git installed, you should configure it so that the generated commit messages will contain your correct information.
This can be achieved by using the git config command. Specifically, we need to provide our name and email address because Git embeds this information into each commit we make. We can go ahead and add this information by typing:
- git config --global user.name "Sammy"
- git config --global user.email "sammy@domain.com"
We can see all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Sammy
user.email=sammy@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor like this:
- nano ~/.gitconfig
[user]
name = Sammy
email = sammy@domain.com
There are many other options that you can set, but these are the two essential ones. If you skip this step, you’ll likely see warnings when you commit to Git, and you will then have to amend the commits you have made with the corrected information.
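The configuration file above is INI-style, so tooling can read it programmatically. As a sketch using Python’s standard-library configparser (the inline string stands in for a real ~/.gitconfig; use config.read(path) against an actual file):

```python
import configparser

# Same shape as the ~/.gitconfig example above.
gitconfig = """\
[user]
    name = Sammy
    email = sammy@domain.com
"""

config = configparser.ConfigParser()
config.read_string(gitconfig)  # for a real file: config.read(path)

print(config["user"]["name"])   # Sammy
print(config["user"]["email"])  # sammy@domain.com
```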
You should now have Git installed and ready to use on your system.
To learn more about how to use Git, check out these articles and series:
Jupyter Notebook offers a command shell for interactive computing as a web application. The tool can be used with several languages, including Python, Julia, R, Haskell, and Ruby. It is often used for working with data, statistical modeling, and machine learning.
This tutorial will walk you through setting up Jupyter Notebook to run from a Debian 9 server, as well as teach you how to connect to and use the notebook. Jupyter notebooks (or simply notebooks) are documents produced by the Jupyter Notebook app which contain both computer code and rich text elements (paragraph, equations, figures, links, etc.) which aid in presenting and sharing reproducible research.
By the end of this guide, you will be able to run Python 3 code using Jupyter Notebook running on a remote server.
In order to complete this guide, you should have a fresh Debian 9 server instance with a basic firewall and a non-root user with sudo privileges configured. You can learn how to set this up by running through our Initial Server Setup with Debian 9 guide.
To begin the process, we’ll download and install all of the items we need from the Debian repositories. We will use the Python package manager pip to install additional components a bit later.
We first need to update the local apt package index and then download and install the packages:
- sudo apt update
Next, install pip and the Python header files, which are used by some of Jupyter’s dependencies:
- sudo apt install python3-pip python3-dev
Debian 9 (“Stretch”) comes preinstalled with Python 3.5.
We can now move on to setting up a Python virtual environment into which we’ll install Jupyter.
Now that we have Python 3, its header files, and pip ready to go, we can create a Python virtual environment for easier management. We will install Jupyter into this virtual environment.
To do this, we first need access to the virtualenv command, which we can install with pip.
Upgrade pip and install the package by typing:
- sudo -H pip3 install --upgrade pip
- sudo -H pip3 install virtualenv
With virtualenv installed, we can start forming our environment. Create and move into a directory where we can keep our project files:
- mkdir ~/myprojectdir
- cd ~/myprojectdir
Within the project directory, create a Python virtual environment by typing:
- virtualenv myprojectenv
This will create a directory called myprojectenv within your myprojectdir directory. Inside, it will install a local version of Python and a local version of pip. We can use this to install and configure an isolated Python environment for Jupyter.
Before we install Jupyter, we need to activate the virtual environment. You can do that by typing:
- source myprojectenv/bin/activate
Your prompt should change to indicate that you are now operating within a Python virtual environment. It will look something like this: (myprojectenv)user@host:~/myprojectdir$.
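Besides the prompt, you can confirm the environment from inside Python itself. A best-effort sketch (in_virtual_environment is a hypothetical helper for illustration):

```python
import sys

def in_virtual_environment() -> bool:
    """Legacy virtualenv sets sys.real_prefix; venv-style environments
    make sys.prefix differ from sys.base_prefix."""
    return hasattr(sys, "real_prefix") or sys.base_prefix != sys.prefix

print(in_virtual_environment())
```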
You’re now ready to install Jupyter into this virtual environment.
With your virtual environment active, install Jupyter with the local instance of pip:
Note: When the virtual environment is activated (when your prompt has (myprojectenv) preceding it), use pip instead of pip3, even if you are using Python 3. The virtual environment’s copy of the tool is always named pip, regardless of the Python version.
- pip install jupyter
At this point, you’ve successfully installed all the software needed to run Jupyter. We can now start the notebook server.
You now have everything you need to run Jupyter Notebook! To run it, execute the following command:
- jupyter notebook
A log of the activities of the Jupyter Notebook will be printed to the terminal. When you run Jupyter Notebook, it runs on a specific port number. The first notebook you run will usually use port 8888. To check the specific port number Jupyter Notebook is running on, refer to the output of the command used to start it:
Output[I 21:23:21.198 NotebookApp] Writing notebook server cookie secret to /run/user/1001/jupyter/notebook_cookie_secret
[I 21:23:21.361 NotebookApp] Serving notebooks from local directory: /home/sammy/myprojectdir
[I 21:23:21.361 NotebookApp] The Jupyter Notebook is running at:
[I 21:23:21.361 NotebookApp] http://localhost:8888/?token=1fefa6ab49a498a3f37c959404f7baf16b9a2eda3eaa6d72
[I 21:23:21.361 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 21:23:21.361 NotebookApp] No web browser found: could not locate runnable browser.
[C 21:23:21.361 NotebookApp]
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://localhost:8888/?token=1fefa6ab49a498a3f37c959404f7baf16b9a2eda3eaa6d72
If you are running Jupyter Notebook on a local Debian computer (not on a Droplet), you can simply navigate to the displayed URL to connect to Jupyter Notebook. If you are running Jupyter Notebook on a Droplet, you will need to connect to the server using SSH tunneling as outlined in the next section.
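If port 8888 is already taken, Jupyter falls back to the next free port (8889, and so on). As a sketch, you can check whether a port is free before starting the server (port_is_free is a hypothetical helper, not part of Jupyter):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port; success means no other process is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

if not port_is_free(8888):
    print("Port 8888 is busy; Jupyter will pick the next free port.")
```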
At this point, you can keep the SSH connection open and keep Jupyter Notebook running, or you can exit the app and re-run it once you set up SSH tunneling. Let’s keep it simple and stop the Jupyter Notebook process. We will run it again once we have SSH tunneling working. To stop the Jupyter Notebook process, press CTRL+C, type Y, and hit ENTER to confirm. The following will be displayed:
Output[C 21:28:28.512 NotebookApp] Shutdown confirmed
[I 21:28:28.512 NotebookApp] Shutting down 0 kernels
We’ll now set up an SSH tunnel so that we can access the notebook.
In this section we will learn how to connect to the Jupyter Notebook web interface using SSH tunneling. Since Jupyter Notebook will run on a specific port on the server (such as :8888, :8889, etc.), SSH tunneling enables you to connect to the server’s port securely.
The next two subsections describe how to create an SSH tunnel from 1) a Mac or Linux and 2) Windows. Please refer to the subsection for your local computer.
If you are using a Mac or Linux, the steps for creating an SSH tunnel are similar to using SSH to log in to your remote server, except that the ssh command requires additional parameters. This subsection will outline those parameters.
SSH tunneling can be done by running the following SSH command in a new local terminal window:
- ssh -L 8888:localhost:8888 your_server_username@your_server_ip
The ssh command opens an SSH connection, but -L specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side (server). This means that whatever is running on the second port number (e.g. 8888) on the server will appear on the first port number (e.g. 8888) on your local computer.
Optionally change port 8888 to one of your choosing to avoid using a port already in use by another process.
your_server_username is your username (e.g. sammy) on the server which you created, and your_server_ip is the IP address of your server.
For example, for the username sammy and the server address 203.0.113.0, the command would be:
- ssh -L 8888:localhost:8888 sammy@203.0.113.0
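The argument to -L has the form local_port:destination_host:destination_port. As a sketch of how the three fields relate (split_forward_spec is a hypothetical helper for illustration, not part of SSH):

```python
def split_forward_spec(spec: str):
    """Split an ssh -L forwarding spec into its three fields."""
    local_port, dest_host, dest_port = spec.split(":")
    return int(local_port), dest_host, int(dest_port)

# Traffic to localhost:8888 on your machine is forwarded to
# localhost:8888 as seen from the server.
print(split_forward_spec("8888:localhost:8888"))  # (8888, 'localhost', 8888)
```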
If no error shows up after running the ssh -L command, you can move into your programming environment and run Jupyter Notebook:
- jupyter notebook
You’ll receive output with a URL. From a web browser on your local machine, open the Jupyter Notebook web interface with the URL that starts with http://localhost:8888. Ensure that the token number is included, or enter the token number string when prompted at http://localhost:8888.
If you are using Windows, you can create an SSH tunnel using Putty.
First, enter the server URL or IP address as the hostname as shown:
Next, click SSH on the bottom of the left pane to expand the menu, and then click Tunnels. Enter the local port number to use to access Jupyter on your local machine. Choose 8000 or greater to avoid ports used by other services, and set the destination as localhost:8888, where :8888 is the number of the port that Jupyter Notebook is running on.
Now click the Add button, and the ports should appear in the Forwarded ports list:
Finally, click the Open button to connect to the server via SSH and tunnel the desired ports. Navigate to http://localhost:8000 (or whatever port you chose) in a web browser to connect to Jupyter Notebook running on the server. Ensure that the token number is included, or enter the token number string when prompted at http://localhost:8000.
This section goes over the basics of using Jupyter Notebook. If you don’t currently have Jupyter Notebook running, start it with the jupyter notebook command.
You should now be connected to it using a web browser. Jupyter Notebook is very powerful and has many features. This section will outline a few of the basic features to get you started using the notebook. Jupyter Notebook will show all of the files and folders in the directory it is run from, so when you’re working on a project make sure to start it from the project directory.
To create a new notebook file, select New > Python 3 from the top right pull-down menu:
This will open a notebook. We can now run Python code in the cell or change the cell to Markdown. For example, change the first cell to accept Markdown by clicking Cell > Cell Type > Markdown from the top navigation bar. We can now write notes using Markdown and even include equations written in LaTeX by placing them between the $$ symbols. For example, type the following into the cell after changing it to Markdown:
# Simple Equation
Let us now implement the following equation:
$$ y = x^2$$
where $x = 2$
To turn the Markdown into rich text, press CTRL+ENTER, and the following should be the result:
You can use the Markdown cells to make notes and document your code. Let’s implement that simple equation and print the result. Click on the top cell, then press ALT+ENTER to add a cell below it. Enter the following code in the new cell.
x = 2
y = x**2
print(y)
To run the code, press CTRL+ENTER. You’ll receive the following result:
You now have the ability to import modules and use the notebook as you would with any other Python development environment!
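For example, a follow-up cell might import a standard-library module and build on the earlier result (the cell below repeats the computation so it is self-contained):

```python
import math

x = 2
y = x ** 2  # same computation as the previous cell

# Standard-library modules work in a notebook cell exactly as in a script.
print(math.sqrt(y))  # 2.0
```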
Congratulations! You should now be able to write reproducible Python code and notes in Markdown using Jupyter Notebook. To get a quick tour of Jupyter Notebook from within the interface, select Help > User Interface Tour from the top navigation menu to learn more.
From here, you may be interested to read our series on Time Series Visualization and Forecasting.
Want to access the Internet safely and securely from your smartphone or laptop when connected to an untrusted network such as the WiFi of a hotel or coffee shop? A Virtual Private Network (VPN) allows you to traverse untrusted networks privately and securely as if you were on a private network. The traffic emerges from the VPN server and continues its journey to the destination.
When combined with HTTPS connections, this setup allows you to secure your wireless logins and transactions. You can circumvent geographical restrictions and censorship, and shield your location and any unencrypted HTTP traffic from the untrusted network.
OpenVPN is a full-featured, open-source Secure Socket Layer (SSL) VPN solution that accommodates a wide range of configurations. In this tutorial, you will set up an OpenVPN server on a Debian 9 server and then configure access to it from Windows, macOS, iOS and/or Android. This tutorial will keep the installation and configuration steps as simple as possible for each of these setups.
Note: If you plan to set up an OpenVPN server on a DigitalOcean Droplet, be aware that we, like many hosting providers, charge for bandwidth overages. For this reason, please be mindful of how much traffic your server is handling.
See this page for more info.
To complete this tutorial, you will need access to a Debian 9 server to host your OpenVPN service. You will need to configure a non-root user with sudo privileges before you start this guide. You can follow our Debian 9 initial server setup guide to set up a user with appropriate permissions. The linked tutorial will also set up a firewall, which is assumed to be in place throughout this guide.
Additionally, you will need a separate machine to serve as your certificate authority (CA). While it’s technically possible to use your OpenVPN server or your local machine as your CA, this is not recommended as it opens up your VPN to some security vulnerabilities. Per the official OpenVPN documentation, you should place your CA on a standalone machine that’s dedicated to importing and signing certificate requests. For this reason, this guide assumes that your CA is on a separate Debian 9 server that also has a non-root user with sudo privileges and a basic firewall.
Please note that if you disable password authentication while configuring these servers, you may run into difficulties when transferring files between them later on in this guide. To resolve this issue, you could re-enable password authentication on each server. Alternatively, you could generate an SSH keypair for each server, then add the OpenVPN server’s public SSH key to the CA machine’s authorized_keys file and vice versa. See How to Set Up SSH Keys on Debian 9 for instructions on how to perform either of these solutions.
When you have these prerequisites in place, you can move on to Step 1 of this tutorial.
To start off, update your VPN server’s package index and install OpenVPN. OpenVPN is available in Debian’s default repositories, so you can use apt for the installation:
- sudo apt update
- sudo apt install openvpn
OpenVPN is a TLS/SSL VPN. This means that it utilizes certificates in order to encrypt traffic between the server and clients. To issue trusted certificates, you will set up your own simple certificate authority (CA). To do this, we will download the latest version of EasyRSA, which we’ll use to build our CA public key infrastructure (PKI), from the project’s official GitHub repository.
As mentioned in the prerequisites, we will build the CA on a standalone server. The reason for this approach is that, if an attacker were able to infiltrate your server, they would be able to access your CA private key and use it to sign new certificates, giving them access to your VPN. Accordingly, managing the CA from a standalone machine helps to prevent unauthorized users from accessing your VPN. Note, as well, that it’s recommended that you keep the CA server turned off when not being used to sign keys as a further precautionary measure.
To begin building the CA and PKI infrastructure, use wget to download the latest version of EasyRSA on both your CA machine and your OpenVPN server. To get the latest version, go to the Releases page on the official EasyRSA GitHub project, copy the download link for the file ending in .tgz, and then paste it into the following command:
- wget -P ~/ https://github.com/OpenVPN/easy-rsa/releases/download/v3.0.4/EasyRSA-3.0.4.tgz
Then extract the tarball:
- cd ~
- tar xvf EasyRSA-3.0.4.tgz
You have successfully installed all the required software on your server and CA machine. Continue on to configure the variables used by EasyRSA and to set up a CA directory, from which you will generate the keys and certificates needed for your server and clients to access the VPN.
EasyRSA comes installed with a configuration file which you can edit to define a number of variables for your CA.
On your CA machine, navigate to the EasyRSA directory:
- cd ~/EasyRSA-3.0.4/
Inside this directory is a file named vars.example. Make a copy of this file, and name the copy vars, without a file extension:
- cp vars.example vars
Open this new file using your preferred text editor:
- nano vars
Find the settings that set field defaults for new certificates. It will look something like this:
. . .
#set_var EASYRSA_REQ_COUNTRY "US"
#set_var EASYRSA_REQ_PROVINCE "California"
#set_var EASYRSA_REQ_CITY "San Francisco"
#set_var EASYRSA_REQ_ORG "Copyleft Certificate Co"
#set_var EASYRSA_REQ_EMAIL "me@example.net"
#set_var EASYRSA_REQ_OU "My Organizational Unit"
. . .
Uncomment these lines and update the highlighted values to whatever you’d prefer, but do not leave them blank:
. . .
set_var EASYRSA_REQ_COUNTRY "US"
set_var EASYRSA_REQ_PROVINCE "NewYork"
set_var EASYRSA_REQ_CITY "New York City"
set_var EASYRSA_REQ_ORG "DigitalOcean"
set_var EASYRSA_REQ_EMAIL "admin@example.com"
set_var EASYRSA_REQ_OU "Community"
. . .
When you are finished, save and close the file.
Within the EasyRSA directory is a script called easyrsa, which is called to perform a variety of tasks involved with building and managing the CA. Run this script with the init-pki option to initiate the public key infrastructure on the CA server:
- ./easyrsa init-pki
Output. . .
init-pki complete; you may now create a CA or requests.
Your newly created PKI dir is: /home/sammy/EasyRSA-3.0.4/pki
After this, call the easyrsa script again, following it with the build-ca option. This will build the CA and create two important files, ca.crt and ca.key, which make up the public and private sides of an SSL certificate.

- ca.crt is the CA’s public certificate file which, in the context of OpenVPN, the server and the client use to inform one another that they are part of the same web of trust and not someone performing a man-in-the-middle attack. For this reason, your server and all of your clients will need a copy of the ca.crt file.
- ca.key is the private key which the CA machine uses to sign keys and certificates for servers and clients. If an attacker gains access to your CA and, in turn, your ca.key file, they will be able to sign certificate requests and gain access to your VPN, compromising its security. This is why your ca.key file should only be on your CA machine and why, ideally, your CA machine should be kept offline when not signing certificate requests, as an extra security measure.

If you don’t want to be prompted for a password every time you interact with your CA, you can run the build-ca command with the nopass option, like this:
- ./easyrsa build-ca nopass
In the output, you’ll be asked to confirm the common name for your CA:
Output. . .
Common Name (eg: your user, host, or server name) [Easy-RSA CA]:
The common name is the name used to refer to this machine in the context of the certificate authority. You can enter any string of characters for the CA’s common name but, for simplicity’s sake, press ENTER to accept the default name.
With that, your CA is in place and it’s ready to start signing certificate requests.
Now that you have a CA ready to go, you can generate a private key and certificate request from your server and then transfer the request over to your CA to be signed, creating the required certificate. You’re also free to create some additional files used during the encryption process.
Start by navigating to the EasyRSA directory on your OpenVPN server:
- cd EasyRSA-3.0.4/
From there, run the easyrsa script with the init-pki option. Although you already ran this command on the CA machine, it’s necessary to run it here because your server and CA will have separate PKI directories:
- ./easyrsa init-pki
Then call the easyrsa script again, this time with the gen-req option followed by a common name for the machine. Again, this could be anything you like, but it can be helpful to make it something descriptive. Throughout this tutorial, the OpenVPN server’s common name will simply be “server”. Be sure to include the nopass option as well. Failing to do so will password-protect the request file, which could lead to permissions issues later on:
Note: If you choose a name other than “server” here, you will have to adjust some of the instructions below. For instance, when copying the generated files to the /etc/openvpn directory, you will have to substitute the correct names. You will also have to modify the /etc/openvpn/server.conf file later to point to the correct .crt and .key files.
- ./easyrsa gen-req server nopass
This will create a private key for the server and a certificate request file called server.req. Copy the server key to the /etc/openvpn/ directory:
- sudo cp ~/EasyRSA-3.0.4/pki/private/server.key /etc/openvpn/
Using a secure method (like SCP, in our example below), transfer the server.req file to your CA machine:
- scp ~/EasyRSA-3.0.4/pki/reqs/server.req sammy@your_CA_ip:/tmp
Next, on your CA machine, navigate to the EasyRSA directory:
- cd EasyRSA-3.0.4/
Using the easyrsa script again, import the server.req file, following the file path with its common name:
- ./easyrsa import-req /tmp/server.req server
Then sign the request by running the easyrsa script with the sign-req option, followed by the request type and the common name. The request type can either be client or server, so for the OpenVPN server’s certificate request, be sure to use the server request type:
- ./easyrsa sign-req server server
In the output, you’ll be asked to verify that the request comes from a trusted source. Type yes, then press ENTER to confirm this:
You are about to sign the following certificate.
Please check over the details shown below for accuracy. Note that this request
has not been cryptographically verified. Please be sure it came from a trusted
source or that you have verified the request checksum with the sender.
Request subject, to be signed as a server certificate for 3650 days:
subject=
commonName = server
Type the word 'yes' to continue, or any other input to abort.
Confirm request details: yes
If you encrypted your CA key, you’ll be prompted for your password at this point.
Next, transfer the signed certificate back to your VPN server using a secure method:
- scp pki/issued/server.crt sammy@your_server_ip:/tmp
Before logging out of your CA machine, transfer the ca.crt file to your server as well:
- scp pki/ca.crt sammy@your_server_ip:/tmp
Next, log back into your OpenVPN server and copy the server.crt and ca.crt files into your /etc/openvpn/ directory:
- sudo cp /tmp/{server.crt,ca.crt} /etc/openvpn/
Then navigate to your EasyRSA directory:
- cd EasyRSA-3.0.4/
From there, create a strong Diffie-Hellman key to use during key exchange by typing:
- ./easyrsa gen-dh
This may take a few minutes to complete. Once it does, generate an HMAC signature to strengthen the server’s TLS integrity verification capabilities:
- sudo openvpn --genkey --secret ta.key
When the command finishes, copy the two new files to your /etc/openvpn/ directory:
- sudo cp ~/EasyRSA-3.0.4/ta.key /etc/openvpn/
- sudo cp ~/EasyRSA-3.0.4/pki/dh.pem /etc/openvpn/
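The ta.key file acts as a shared secret that both sides use to compute HMACs over TLS handshake packets, so packets without a valid signature can be dropped early. As a toy sketch of HMAC itself using Python’s standard library (the key and message here are made up for illustration):

```python
import hashlib
import hmac

key = b"shared-secret-standing-in-for-ta.key"
message = b"tls handshake packet"

# Both sides compute the same tag over the packet; a packet whose
# tag does not verify is rejected before any TLS processing.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(len(tag))  # 64 hex characters for SHA-256
```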
With that, all the certificate and key files needed by your server have been generated. You’re ready to create the corresponding certificates and keys which your client machine will use to access your OpenVPN server.
Although you can generate a private key and certificate request on your client machine and then send it to the CA to be signed, this guide outlines a process for generating the certificate request on the server. The benefit of this is that we can create a script which will automatically generate client configuration files that contain all of the required keys and certificates. This lets you avoid having to transfer keys, certificates, and configuration files to clients and streamlines the process of joining the VPN.
We will generate a single client key and certificate pair for this guide. If you have more than one client, you can repeat this process for each one. Please note, though, that you will need to pass a unique name value to the script for every client. Throughout this tutorial, the first certificate/key pair is referred to as client1.
Get started by creating a directory structure within your home directory to store the client certificate and key files:
- mkdir -p ~/client-configs/keys
Since you will store your clients’ certificate/key pairs and configuration files in this directory, you should lock down its permissions now as a security measure:
- chmod -R 700 ~/client-configs
Next, navigate back to the EasyRSA directory and run the easyrsa script with the gen-req and nopass options, along with the common name for the client:
- cd ~/EasyRSA-3.0.4/
- ./easyrsa gen-req client1 nopass
Press ENTER to confirm the common name. Then, copy the client1.key file to the ~/client-configs/keys/ directory you created earlier:
- cp pki/private/client1.key ~/client-configs/keys/
Next, transfer the client1.req file to your CA machine using a secure method:
- scp pki/reqs/client1.req sammy@your_CA_ip:/tmp
Log in to your CA machine, navigate to the EasyRSA directory, and import the certificate request:
- ssh sammy@your_CA_IP
- cd EasyRSA-3.0.4/
- ./easyrsa import-req /tmp/client1.req client1
Then sign the request as you did for the server in the previous step. This time, though, be sure to specify the client request type:
- ./easyrsa sign-req client client1
At the prompt, enter yes to confirm that you intend to sign the certificate request and that it came from a trusted source:
OutputType the word 'yes' to continue, or any other input to abort.
Confirm request details: yes
Again, if you encrypted your CA key, you’ll be prompted for your password here.
This will create a client certificate file named client1.crt
. Transfer this file back to the server:
- scp pki/issued/client1.crt sammy@your_server_ip:/tmp
SSH back into your OpenVPN server and copy the client certificate to the ~/client-configs/keys/
directory:
- cp /tmp/client1.crt ~/client-configs/keys/
Next, copy the ca.crt
and ta.key
files to the ~/client-configs/keys/
directory as well:
- sudo cp ~/EasyRSA-3.0.4/ta.key ~/client-configs/keys/
- sudo cp /etc/openvpn/ca.crt ~/client-configs/keys/
With that, your server and client’s certificates and keys have all been generated and are stored in the appropriate directories on your server. There are still a few actions that need to be performed with these files, but those will come in a later step. For now, you can move on to configuring OpenVPN on your server.
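As an optional sanity check (an addition to this guide, not a required step), you can confirm that the four files the client configuration script will later need are all in place. The helper below is demonstrated against a throwaway stub directory; on your server, you would point it at ~/client-configs/keys instead:

```shell
# Hypothetical helper: report whether each expected credential file exists in a directory.
check_keys() {
  dir=$1
  for f in ca.crt client1.crt client1.key ta.key; do
    if [ -f "$dir/$f" ]; then
      echo "$f OK"
    else
      echo "$f MISSING"
    fi
  done
}

# Demonstration against a stub directory containing empty placeholder files (not real keys).
stub=$(mktemp -d)
touch "$stub/ca.crt" "$stub/client1.crt" "$stub/client1.key" "$stub/ta.key"
check_keys "$stub"
```

If any line reports MISSING, revisit the copy commands above before moving on.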
Now that both your client and server’s certificates and keys have been generated, you can begin configuring the OpenVPN service to use these credentials.
Start by copying a sample OpenVPN configuration file into the configuration directory and then extract it in order to use it as a basis for your setup:
- sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
- sudo gzip -d /etc/openvpn/server.conf.gz
Open the server configuration file in your preferred text editor:
- sudo nano /etc/openvpn/server.conf
Find the HMAC section by looking for the tls-auth
directive. This line should already be uncommented, but if it isn't, remove the “;” to uncomment it:
tls-auth ta.key 0 # This file is secret
Next, find the section on cryptographic ciphers by looking for the commented out cipher
lines. The AES-256-CBC
cipher offers a good level of encryption and is well supported. Again, this line should already be uncommented, but if it isn’t then just remove the “;” preceding it:
cipher AES-256-CBC
Below this, add an auth
directive to select the HMAC message digest algorithm. For this, SHA256
is a good choice:
auth SHA256
Next, find the line containing a dh
directive which defines the Diffie-Hellman parameters. Because of some recent changes made to EasyRSA, the filename for the Diffie-Hellman key may be different than what is listed in the example server configuration file. If necessary, change the file name listed here by removing the 2048
so it aligns with the key you generated in the previous step:
dh dh.pem
Finally, find the user
and group
settings and remove the “;” at the beginning of each to uncomment these lines:
user nobody
group nogroup
The changes you’ve made to the sample server.conf
file up to this point are necessary in order for OpenVPN to function. The changes outlined below are optional, though they too are needed for many common use cases.
The settings above will create the VPN connection between the two machines, but will not force any connections to use the tunnel. If you wish to use the VPN to route all of your traffic, you will likely want to push the DNS settings to the client computers.
There are a few directives in the server.conf
file which you must change in order to enable this functionality. Find the redirect-gateway
section and remove the semicolon “;” from the beginning of the redirect-gateway
line to uncomment it:
push "redirect-gateway def1 bypass-dhcp"
Just below this, find the dhcp-option
section. Again, remove the “;” from in front of both of the lines to uncomment them:
push "dhcp-option DNS 208.67.222.222"
push "dhcp-option DNS 208.67.220.220"
This will assist clients in reconfiguring their DNS settings to use the VPN tunnel as the default gateway.
By default, the OpenVPN server uses port 1194
and the UDP protocol to accept client connections. If you need to use a different port because of restrictive network environments that your clients might be in, you can change the port
option. If you are not hosting web content on your OpenVPN server, port 443
is a popular choice since it is usually allowed through firewall rules.
# Optional!
port 443
Oftentimes, the protocol is restricted to that port as well. If so, change proto
from UDP to TCP:
# Optional!
proto tcp
If you do switch the protocol to TCP, you will need to change the explicit-exit-notify
directive’s value from 1
to 0
, as this directive is only used by UDP. Failing to do so while using TCP will cause errors when you start the OpenVPN service:
# Optional!
explicit-exit-notify 0
If you have no need to use a different port and protocol, it is best to leave these two settings as their defaults.
If you selected a different name during the ./easyrsa gen-req server
command earlier, modify the cert
and key
lines that you see to point to the appropriate .crt
and .key
files. If you used the default name, “server”, this is already set correctly:
cert server.crt
key server.key
When you are finished, save and close the file.
After going through and making whatever changes to your server’s OpenVPN configuration are required for your specific use case, you can begin making some changes to your server’s networking.
There are some aspects of the server’s networking configuration that need to be tweaked so that OpenVPN can correctly route traffic through the VPN. The first of these is IP forwarding, a method for determining where IP traffic should be routed. This is essential to the VPN functionality that your server will provide.
Adjust your server’s default IP forwarding setting by modifying the /etc/sysctl.conf
file:
- sudo nano /etc/sysctl.conf
Inside, look for the commented line that sets net.ipv4.ip_forward
. Remove the “#” character from the beginning of the line to uncomment this setting:
net.ipv4.ip_forward=1
Save and close the file when you are finished.
To read the file and adjust the values for the current session, type:
- sudo sysctl -p
Outputnet.ipv4.ip_forward = 1
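As a quick cross-check (an addition to this guide, assuming a Linux system), you can also read the live forwarding flag directly from procfs; it should print 1 once forwarding has been enabled:

```shell
# Read the current IP forwarding flag straight from procfs (0 = off, 1 = on).
cat /proc/sys/net/ipv4/ip_forward
```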
If you followed the Debian 9 initial server setup guide listed in the prerequisites, you should have a UFW firewall in place. Regardless of whether you use the firewall to block unwanted traffic (which you almost always should do), for this guide you need a firewall to manipulate some of the traffic coming into the server. Some of the firewall rules must be modified to enable masquerading, an iptables concept that provides on-the-fly dynamic network address translation (NAT) to correctly route client connections.
Before opening the firewall configuration file to add the masquerading rules, you must first find the public network interface of your machine. To do this, type:
- ip route | grep default
Your public interface is the string found within this command’s output that follows the word “dev”. For example, this result shows the interface named eth0
, which is highlighted below:
Outputdefault via 203.0.113.1 dev eth0 onlink
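If you'd rather capture the interface name in a script than read it by eye, a small awk helper (an addition, not part of the original guide) can pull out the word that follows “dev”. It is shown here against a sample route line; on your server you would pipe in the real output of ip route | grep default:

```shell
# Print the word that follows "dev" in a route line.
default_iface() {
  awk '{ for (i = 1; i < NF; i++) if ($i == "dev") { print $(i + 1); exit } }'
}

# Demonstrated on a sample line; on a real server, run:
#   ip route | grep default | default_iface
echo "default via 203.0.113.1 dev eth0 onlink" | default_iface   # prints "eth0"
```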
When you have the interface associated with your default route, open the /etc/ufw/before.rules
file to add the relevant configuration:
- sudo nano /etc/ufw/before.rules
UFW rules are typically added using the ufw
command. Rules listed in the before.rules
file, though, are read and put into place before the conventional UFW rules are loaded. Towards the top of the file, add the highlighted lines below. This will set the default policy for the POSTROUTING
chain in the nat
table and masquerade any traffic coming from the VPN. Remember to replace eth0
in the -A POSTROUTING
line below with the interface you found in the above command:
#
# rules.before
#
# Rules that should be run before the ufw command line added rules. Custom
# rules should be added to one of these chains:
# ufw-before-input
# ufw-before-output
# ufw-before-forward
#
# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to eth0 (change to the interface you discovered!)
-A POSTROUTING -s 10.8.0.0/8 -o eth0 -j MASQUERADE
COMMIT
# END OPENVPN RULES
# Don't delete these required lines, otherwise there will be errors
*filter
. . .
Save and close the file when you are finished.
Next, you need to tell UFW to allow forwarded packets by default as well. To do this, open the /etc/default/ufw
file:
- sudo nano /etc/default/ufw
Inside, find the DEFAULT_FORWARD_POLICY
directive and change the value from DROP
to ACCEPT
:
DEFAULT_FORWARD_POLICY="ACCEPT"
Save and close the file when you are finished.
Next, adjust the firewall itself to allow traffic to OpenVPN. If you did not change the port and protocol in the /etc/openvpn/server.conf
file, you will need to open up UDP traffic to port 1194
. If you modified the port and/or protocol, substitute the values you selected here.
In case you forgot to add the SSH port when following the prerequisite tutorial, add it here as well:
- sudo ufw allow 1194/udp
- sudo ufw allow OpenSSH
After adding those rules, disable and re-enable UFW to restart it and load the changes from all of the files you’ve modified:
- sudo ufw disable
- sudo ufw enable
Your server is now configured to correctly handle OpenVPN traffic.
You’re finally ready to start the OpenVPN service on your server. This is done using the systemd utility systemctl
.
Start the OpenVPN server by specifying your configuration file name as an instance variable after the systemd unit file name. The configuration file for your server is called /etc/openvpn/server.conf
, so add @server
to end of your unit file when calling it:
- sudo systemctl start openvpn@server
Double-check that the service has started successfully by typing:
- sudo systemctl status openvpn@server
If everything went well, your output will look something like this:
Output● openvpn@server.service - OpenVPN connection to server
Loaded: loaded (/lib/systemd/system/openvpn@.service; disabled; vendor preset: enabled)
Active: active (running) since Tue 2016-05-03 15:30:05 EDT; 47s ago
Docs: man:openvpn(8)
https://community.openvpn.net/openvpn/wiki/Openvpn23ManPage
https://community.openvpn.net/openvpn/wiki/HOWTO
Process: 5852 ExecStart=/usr/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/%i.conf --writepid /run/openvpn/%i.pid (code=exited, sta
Main PID: 5856 (openvpn)
Tasks: 1 (limit: 512)
CGroup: /system.slice/system-openvpn.slice/openvpn@server.service
└─5856 /usr/sbin/openvpn --daemon ovpn-server --status /run/openvpn/server.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/server.conf --writepid /run/openvpn/server.pid
You can also check that the OpenVPN tun0
interface is available by typing:
- ip addr show tun0
This will output a configured interface:
Output4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 100
link/none
inet 10.8.0.1 peer 10.8.0.2/32 scope global tun0
valid_lft forever preferred_lft forever
After starting the service, enable it so that it starts automatically at boot:
- sudo systemctl enable openvpn@server
Your OpenVPN service is now up and running. Before you can start using it, though, you must first create a configuration file for the client machine. This tutorial already went over how to create certificate/key pairs for clients, and in the next step we will demonstrate how to create an infrastructure that will generate client configuration files easily.
Creating configuration files for OpenVPN clients can be somewhat involved, as every client must have its own config and each must align with the settings outlined in the server’s configuration file. Rather than writing a single configuration file that can only be used on one client, this step outlines a process for building a client configuration infrastructure which you can use to generate config files on-the-fly. You will first create a “base” configuration file then build a script which will allow you to generate unique client config files, certificates, and keys as needed.
Get started by creating a new directory where you will store client configuration files within the client-configs
directory you created earlier:
- mkdir -p ~/client-configs/files
Next, copy an example client configuration file into the client-configs
directory to use as your base configuration:
- cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf ~/client-configs/base.conf
Open this new file in your text editor:
- nano ~/client-configs/base.conf
Inside, locate the remote
directive. This points the client to your OpenVPN server address — the public IP address of your OpenVPN server. If you decided to change the port that the OpenVPN server is listening on, you will also need to change 1194
to the port you selected:
. . .
# The hostname/IP and port of the server.
# You can have multiple remote entries
# to load balance between the servers.
remote your_server_ip 1194
. . .
Be sure that the protocol matches the value you are using in the server configuration:
proto udp
Next, uncomment the user
and group
directives by removing the “;” at the beginning of each line:
# Downgrade privileges after initialization (non-Windows only)
user nobody
group nogroup
Find the directives that set the ca
, cert
, and key
. Comment out these directives since you will add the certs and keys within the file itself shortly:
# SSL/TLS parms.
# See the server config file for more
# description. It's best to use
# a separate .crt/.key file pair
# for each client. A single ca
# file can be used for all clients.
#ca ca.crt
#cert client.crt
#key client.key
Similarly, comment out the tls-auth
directive, as you will add ta.key
directly into the client configuration file:
# If a tls-auth key is used on the server
# then every client must also have the key.
#tls-auth ta.key 1
Mirror the cipher
and auth
settings that you set in the /etc/openvpn/server.conf
file:
cipher AES-256-CBC
auth SHA256
Next, add the key-direction
directive somewhere in the file. You must set this to “1” for the VPN to function correctly on the client machine:
key-direction 1
Finally, add a few commented out lines. Although you can include these directives in every client configuration file, you only need to enable them for Linux clients that ship with an /etc/openvpn/update-resolv-conf
file. This script uses the resolvconf
utility to update DNS information for Linux clients.
# script-security 2
# up /etc/openvpn/update-resolv-conf
# down /etc/openvpn/update-resolv-conf
If your client is running Linux and has an /etc/openvpn/update-resolv-conf
file, uncomment these lines from the client’s configuration file after it has been generated.
Save and close the file when you are finished.
Next, create a simple script that will compile your base configuration with the relevant certificate, key, and encryption files and then place the generated configuration in the ~/client-configs/files
directory. Open a new file called make_config.sh
within the ~/client-configs
directory:
- nano ~/client-configs/make_config.sh
Inside, add the following content, making sure to change sammy
to that of your server’s non-root user account:
#!/bin/bash
# First argument: Client identifier
KEY_DIR=/home/sammy/client-configs/keys
OUTPUT_DIR=/home/sammy/client-configs/files
BASE_CONFIG=/home/sammy/client-configs/base.conf
cat ${BASE_CONFIG} \
<(echo -e '<ca>') \
${KEY_DIR}/ca.crt \
<(echo -e '</ca>\n<cert>') \
${KEY_DIR}/${1}.crt \
<(echo -e '</cert>\n<key>') \
${KEY_DIR}/${1}.key \
<(echo -e '</key>\n<tls-auth>') \
${KEY_DIR}/ta.key \
<(echo -e '</tls-auth>') \
> ${OUTPUT_DIR}/${1}.ovpn
Save and close the file when you are finished.
Before moving on, be sure to mark this file as executable by typing:
- chmod 700 ~/client-configs/make_config.sh
This script will make a copy of the base.conf
file you made, collect all the certificate and key files you’ve created for your client, extract their contents, append them to the copy of the base configuration file, and export all of this content into a new client configuration file. This means that, rather than having to manage the client’s configuration, certificate, and key files separately, all the required information is stored in one place. The benefit of this is that if you ever need to add a client in the future, you can just run this script to quickly create the config file and ensure that all the important information is stored in a single, easy-to-access location.
Please note that any time you add a new client, you will need to generate new keys and certificates for it before you can run this script and generate its configuration file. You will get some practice using this script in the next step.
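To make the interleaving pattern concrete, here is a toy, self-contained sketch of what the script's cat invocation produces, using throwaway placeholder files rather than real keys (the process substitutions in the script are replaced by a grouped command here for portability):

```shell
# Build a miniature "client config" by interleaving a base file and a fake
# certificate with inline <ca> tags, mirroring what make_config.sh does.
dir=$(mktemp -d)
printf 'remote your_server_ip 1194\n' > "$dir/base.conf"
printf 'FAKE-CA-CONTENTS\n' > "$dir/ca.crt"

{
  cat "$dir/base.conf"
  echo '<ca>'
  cat "$dir/ca.crt"
  echo '</ca>'
} > "$dir/demo.ovpn"

cat "$dir/demo.ovpn"
```

The resulting file starts with the base configuration and then carries the certificate body inside `<ca>`/`</ca>` tags, which is exactly the inline format OpenVPN clients accept.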
If you followed along with the guide, you created a client certificate and key named client1.crt
and client1.key
, respectively, in Step 4. You can generate a config file for these credentials by moving into your ~/client-configs
directory and running the script you made at the end of the previous step:
- cd ~/client-configs
- sudo ./make_config.sh client1
This will create a file named client1.ovpn
in your ~/client-configs/files
directory:
- ls ~/client-configs/files
Outputclient1.ovpn
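Before transferring the file, you can optionally verify that all four inline sections made it into the generated config. This check is an addition to the guide; it is demonstrated below on a stub file, and on your server you would run it against ~/client-configs/files/client1.ovpn:

```shell
# Check that an .ovpn file contains each expected inline section.
check_ovpn() {
  for tag in ca cert key tls-auth; do
    grep -q "<$tag>" "$1" || { echo "missing <$tag>"; return 1; }
  done
  echo "all inline sections present"
}

# Demonstration on a stub file (not a real client config).
stubconf=$(mktemp)
printf '<ca>\n</ca>\n<cert>\n</cert>\n<key>\n</key>\n<tls-auth>\n</tls-auth>\n' > "$stubconf"
check_ovpn "$stubconf"   # prints "all inline sections present"
```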
You need to transfer this file to the device you plan to use as the client. For instance, this could be your local computer or a mobile device.
While the exact applications used to accomplish this transfer will depend on your device’s operating system and your personal preferences, a dependable and secure method is to use SFTP (SSH file transfer protocol) or SCP (Secure Copy) on the backend. This will transport your client’s VPN authentication files over an encrypted connection.
Here is an example SFTP command using the client1.ovpn
example which you can run from your local computer (macOS or Linux). It places the .ovpn
file in your home directory:
- sftp sammy@your_server_ip:client-configs/files/client1.ovpn ~/
Here are several tools and tutorials for securely transferring files from the server to a local computer:
This section covers how to install a client VPN profile on Windows, macOS, Linux, iOS, and Android. None of these client instructions are dependent on one another, so feel free to skip to whichever is applicable to your device.
The OpenVPN connection will have the same name as whatever you called the .ovpn
file. In regards to this tutorial, this means that the connection is named client1.ovpn
, aligning with the first client file you generated.
Installing
Download the OpenVPN client application for Windows from OpenVPN’s Downloads page. Choose the appropriate installer version for your version of Windows.
Note
OpenVPN needs administrative privileges to install.
After installing OpenVPN, copy the .ovpn
file to:
C:\Program Files\OpenVPN\config
When you launch OpenVPN, it will automatically see the profile and make it available.
You must run OpenVPN as an administrator each time it’s used, even by administrative accounts. To do this without having to right-click and select Run as administrator every time you use the VPN, you must preset this from an administrative account. This also means that standard users will need to enter the administrator’s password to use OpenVPN. On the other hand, standard users can’t properly connect to the server unless the OpenVPN application on the client has admin rights, so the elevated privileges are necessary.
To set the OpenVPN application to always run as an administrator, right-click on its shortcut icon and go to Properties. At the bottom of the Compatibility tab, click the button to Change settings for all users. In the new window, check Run this program as an administrator.
Connecting
Each time you launch the OpenVPN GUI, Windows will ask if you want to allow the program to make changes to your computer. Click Yes. Launching the OpenVPN client application only puts the applet in the system tray so that you can connect and disconnect the VPN as needed; it does not actually make the VPN connection.
Once OpenVPN is started, initiate a connection by going into the system tray applet and right-clicking on the OpenVPN applet icon. This opens the context menu. Select client1 at the top of the menu (that’s your client1.ovpn
profile) and choose Connect.
A status window will open showing the log output while the connection is established, and a message will show once the client is connected.
Disconnect from the VPN the same way: Go into the system tray applet, right-click the OpenVPN applet icon, select the client profile and click Disconnect.
Installing
Tunnelblick is a free, open source OpenVPN client for macOS. You can download the latest disk image from the Tunnelblick Downloads page. Double-click the downloaded .dmg
file and follow the prompts to install.
Towards the end of the installation process, Tunnelblick will ask if you have any configuration files. For simplicity, answer No and let Tunnelblick finish. Open a Finder window and double-click client1.ovpn
. Tunnelblick will install the client profile. Administrative privileges are required.
Connecting
Launch Tunnelblick by double-clicking Tunnelblick in the Applications folder. Once Tunnelblick has been launched, there will be a Tunnelblick icon in the menu bar at the top right of the screen for controlling connections. Click on the icon, and then the Connect menu item to initiate the VPN connection. Select the client1 connection.
If you are using Linux, there are a variety of tools that you can use depending on your distribution. Your desktop environment or window manager might also include connection utilities.
The most universal way of connecting, however, is to just use the OpenVPN software.
On Ubuntu or Debian, you can install it just as you did on the server by typing:
- sudo apt update
- sudo apt install openvpn
On CentOS you can enable the EPEL repositories and then install it by typing:
- sudo yum install epel-release
- sudo yum install openvpn
Check to see if your distribution includes an /etc/openvpn/update-resolv-conf
script:
- ls /etc/openvpn
Outputupdate-resolv-conf
Next, edit the OpenVPN client configuration file you transferred:
- nano client1.ovpn
If you were able to find an update-resolv-conf
file, uncomment the three lines you added to adjust the DNS settings:
script-security 2
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
If you are using CentOS, change the group
directive from nogroup
to nobody
to match the distribution’s available groups:
group nobody
Save and close the file.
Now, you can connect to the VPN by just pointing the openvpn
command to the client configuration file:
- sudo openvpn --config client1.ovpn
This should connect you to your VPN.
Installing
From the iTunes App Store, search for and install OpenVPN Connect, the official iOS OpenVPN client application. To transfer your iOS client configuration onto the device, connect it directly to a computer.
The process of completing the transfer with iTunes is outlined here. Open iTunes on the computer and click on iPhone > apps. Scroll down to the bottom to the File Sharing section and click the OpenVPN app. The blank window to the right, OpenVPN Documents, is for sharing files. Drag the .ovpn
file to the OpenVPN Documents window.
Now launch the OpenVPN app on the iPhone. You will receive a notification that a new profile is ready to import. Tap the green plus sign to import it.
Connecting
OpenVPN is now ready to use with the new profile. Start the connection by sliding the Connect button to the On position. Disconnect by sliding the same button to Off.
Note
The VPN switch under Settings cannot be used to connect to the VPN. If you try, you will receive a notice to only connect using the OpenVPN app.
Installing
Open the Google Play Store. Search for and install Android OpenVPN Connect, the official Android OpenVPN client application.
You can transfer the .ovpn
profile by connecting the Android device to your computer by USB and copying the file over. Alternatively, if you have an SD card reader, you can remove the device’s SD card, copy the profile onto it and then insert the card back into the Android device.
Start the OpenVPN app and tap the menu to import the profile.
Then navigate to the location of the saved profile (the screenshot uses /sdcard/Download/
) and select the file. The app will make a note that the profile was imported.
Connecting
To connect, simply tap the Connect button. You’ll be asked if you trust the OpenVPN application. Choose OK to initiate the connection. To disconnect from the VPN, go back to the OpenVPN app and choose Disconnect.
Note: This method for testing your VPN connection will only work if you opted to route all your traffic through the VPN in Step 5.
Once everything is installed, a simple check confirms everything is working properly. Without having a VPN connection enabled, open a browser and go to DNSLeakTest.
The site will return the IP address assigned by your internet service provider, which is how you appear to the rest of the world. To check your DNS settings through the same website, click on Extended Test and it will tell you which DNS servers you are using.
Now connect the OpenVPN client to your server’s VPN and refresh the browser. A completely different IP address (that of your VPN server) should now appear, and this is how you appear to the world. Again, DNSLeakTest’s Extended Test will check your DNS settings and confirm you are now using the DNS resolvers pushed by your VPN.
Occasionally, you may need to revoke a client certificate to prevent further access to the OpenVPN server.
To do so, navigate to the EasyRSA directory on your CA machine:
- cd EasyRSA-3.0.4/
Next, run the easyrsa
script with the revoke
option, followed by the client name you wish to revoke:
- ./easyrsa revoke client2
This will ask you to confirm the revocation by entering yes
:
OutputPlease confirm you wish to revoke the certificate with the following subject:
subject=
commonName = client2
Type the word 'yes' to continue, or any other input to abort.
Continue with revocation: yes
After confirming the action, the CA will fully revoke the client’s certificate. However, your OpenVPN server currently has no way to check whether any clients’ certificates have been revoked and the client will still have access to the VPN. To correct this, create a certificate revocation list (CRL) on your CA machine:
- ./easyrsa gen-crl
This will generate a file called crl.pem
. Securely transfer this file to your OpenVPN server:
- scp ~/EasyRSA-3.0.4/pki/crl.pem sammy@your_server_ip:/tmp
On your OpenVPN server, copy this file into your /etc/openvpn/
directory:
- sudo cp /tmp/crl.pem /etc/openvpn
Next, open the OpenVPN server configuration file:
- sudo nano /etc/openvpn/server.conf
At the bottom of the file, add the crl-verify
option, which will instruct the OpenVPN server to check the certificate revocation list that we’ve created each time a connection attempt is made:
crl-verify crl.pem
Save and close the file.
Finally, restart OpenVPN to implement the certificate revocation:
- sudo systemctl restart openvpn@server
The client should no longer be able to successfully connect to the server using the old credential.
To revoke additional clients, follow this process:
1. Revoke the certificate with the ./easyrsa revoke client_name command.
2. Generate a new CRL with ./easyrsa gen-crl.
3. Transfer the new crl.pem file to your OpenVPN server and copy it to the /etc/openvpn directory to overwrite the old list.
4. Restart the OpenVPN service with sudo systemctl restart openvpn@server.
You can use this process to revoke any certificates that you’ve previously issued for your server.
You are now securely traversing the internet protecting your identity, location, and traffic from snoopers and censors. If at this point you no longer need to issue certificates, it’s recommended that you turn off your CA machine or otherwise disconnect it from the internet until you need to add or revoke certificates. This will help to prevent attackers from gaining access to your VPN.
To configure more clients, you only need to follow steps 4 and 9-11 for each additional device. To revoke access to clients, just follow step 12.
Nextcloud, a fork of ownCloud, is a file sharing server that permits you to store your personal content, like documents and pictures, in a centralized location, much like Dropbox. The difference with Nextcloud is that all of its features are open-source. It also returns the control and security of your sensitive data back to you, thus eliminating the use of a third-party cloud hosting service.
In this tutorial, we will install and configure a Nextcloud instance on a Debian 9 server.
In order to complete the steps in this guide, you will need the following:
- One Debian 9 server set up by following the Debian 9 initial server setup guide, including a non-root user with sudo
privileges and a basic firewall.
Once you have completed the above steps, continue on to learn how to set up Nextcloud on your server.
We will be installing Nextcloud using the snappy packaging system. This packaging system, installable on Debian 9 through the default repositories, allows organizations to ship software, along with all associated dependencies and configuration, in a self-contained unit with automatic updates. This means that instead of installing and configuring a web and database server and then configuring the Nextcloud app to run on it, we can install the snap
package which handles the underlying systems automatically.
To install and manage snap
packages, we first need to install the snapd
package on the server. Update the local package index for apt
and then install the software by typing:
- sudo apt update
- sudo apt install snapd
Next, either log out and log back in again, or source the /etc/profile.d/apps-bin-path.sh
script to add /snap/bin
to your session’s PATH
variable:
- source /etc/profile.d/apps-bin-path.sh
Once snapd
is installed, you can download the Nextcloud snap
package and install it on the system by typing:
- sudo snap install nextcloud
The Nextcloud package will be downloaded and installed on your server. You can confirm that the installation process was successful by listing the changes associated with the snap
:
- snap changes nextcloud
OutputID Status Spawn Ready Summary
1 Done today at 20:18 UTC today at 20:18 UTC Install "nextcloud" snap
The status and summary indicate that the installation was completed without any problems.
If you’d like some more information about the Nextcloud snap
, there are a few commands that can be helpful.
The snap info
command can show you the description, the Nextcloud management commands available, as well as the installed version and the snap channel being tracked:
- snap info nextcloud
Snaps can define interfaces they support, which consist of a slot and plug that, when hooked together, gives the snap access to certain capabilities or levels of access. For instance, snaps that need to act as a network client must have the network
interface. To see what snap “interfaces” this snap defines, type:
- snap interfaces nextcloud
OutputSlot Plug
:network nextcloud
:network-bind nextcloud
- nextcloud:removable-media
To learn about all of the specific services and apps that this snap provides, you can take a look at the snap definition file by typing:
- less /snap/nextcloud/current/meta/snap.yaml
This will allow you to see the individual components included within the snap, if you need help with debugging.
There are a few different ways you can configure the Nextcloud snap. In this guide, rather than creating an administrative user through the web interface, we will create one on the command line in order to avoid a small window where the administrator registration page would be accessible to anyone visiting your server’s IP address or domain name.
To configure Nextcloud with a new administrator account, use the nextcloud.manual-install
command. You must pass in a username and a password as arguments:
- sudo -i nextcloud.manual-install sammy password
The following message indicates that Nextcloud has been configured correctly:
OutputNextcloud is not installed - only a limited number of commands are available
Nextcloud was successfully installed
Now that Nextcloud is installed, we need to adjust the trusted domains so that Nextcloud will respond to requests using the server’s domain name or IP address.
When installing from the command line, Nextcloud restricts the host names that the instance will respond to. By default, the service only responds to requests made to the “localhost” hostname. We will be accessing Nextcloud through the server’s domain name or IP address, so we’ll need to adjust this setting to accept these types of requests.
You can view the current settings by querying the value of the trusted_domains array:
- sudo -i nextcloud.occ config:system:get trusted_domains
Outputlocalhost
Currently, only localhost is present as the first value in the array. We can add an entry for our server’s domain name or IP address by typing:
- sudo -i nextcloud.occ config:system:set trusted_domains 1 --value=example.com
OutputSystem config value trusted_domains => 1 set to string example.com
If we query the trusted domains again, we will see that we now have two entries:
- sudo -i nextcloud.occ config:system:get trusted_domains
Outputlocalhost
example.com
If you need to add another way of accessing the Nextcloud instance, you can add additional domains or addresses by rerunning the config:system:set command with an incremented index number (the “1” in the command above) and an adjusted --value argument.
Before we begin using Nextcloud, we need to secure the web interface.
If you have a domain name associated with your Nextcloud server, the Nextcloud snap can help you obtain and configure a trusted SSL certificate from Let’s Encrypt. If your Nextcloud server does not have a domain name, Nextcloud can configure a self-signed certificate which will encrypt your web traffic but won’t be able to verify the identity of your server.
With that in mind, follow the section below that matches your scenario.
If you have a domain name associated with your Nextcloud server, the best option for securing your web interface is to obtain a Let’s Encrypt SSL certificate.
Start by opening the ports in the firewall that Let’s Encrypt uses to validate domain ownership. This will make your Nextcloud login page publicly accessible, but since we already have an administrator account configured, no one will be able to hijack the installation:
- sudo ufw allow "WWW Full"
Next, request a Let’s Encrypt certificate by typing:
- sudo -i nextcloud.enable-https lets-encrypt
You will first be asked whether your server meets the conditions necessary to request a certificate from the Let’s Encrypt service:
OutputIn order for Let's Encrypt to verify that you actually own the
domain(s) for which you're requesting a certificate, there are a
number of requirements of which you need to be aware:
1. In order to register with the Let's Encrypt ACME server, you must
agree to the currently-in-effect Subscriber Agreement located
here:
https://letsencrypt.org/repository/
By continuing to use this tool you agree to these terms. Please
cancel now if otherwise.
2. You must have the domain name(s) for which you want certificates
pointing at the external IP address of this machine.
3. Both ports 80 and 443 on the external IP address of this machine
must point to this machine (e.g. port forwarding might need to be
setup on your router).
Have you met these requirements? (y/n)
Type y to continue.
Next, you will be asked to provide an email address to use for recovery operations:
OutputPlease enter an email address (for urgent notices or key recovery): your_email@domain.com
Finally, enter the domain name associated with your Nextcloud server:
OutputPlease enter your domain name(s) (space-separated): example.com
Your Let’s Encrypt certificate will be requested and, provided everything went well, the internal Apache instance will be restarted to immediately implement SSL:
OutputAttempting to obtain certificates... done
Restarting apache... done
You can now skip ahead to sign into Nextcloud for the first time.
If your Nextcloud server does not have a domain name, you can still secure the web interface by generating a self-signed SSL certificate. This certificate will allow access to the web interface over an encrypted connection, but will be unable to verify the identity of your server, so your browser will likely display a warning.
To generate a self-signed certificate and configure Nextcloud to use it, type:
- sudo nextcloud.enable-https self-signed
OutputGenerating key and self-signed certificate... done
Restarting apache... done
The above output indicates that Nextcloud generated and enabled a self-signed certificate.
Now that the interface is secure, open the web ports in the firewall to allow access to the web interface:
- sudo ufw allow "WWW Full"
You are now ready to log into Nextcloud for the first time.
Now that Nextcloud is configured, visit your server’s domain name or IP address in your web browser:
https://example.com
Note: If you set up a self-signed SSL certificate, your browser may display a warning that the connection is insecure because the server’s certificate is not signed by a recognized certificate authority. This is expected for self-signed certificates, so feel free to click through the warning to proceed to the site.
Since you have already configured an administrator account from the command line, you will be taken to the Nextcloud login page. Enter the credentials you created for the administrative user:
Click the Log in button to log in to the Nextcloud web interface.
The first time you enter, a window will be displayed with links to various Nextcloud clients that can be used to interact with and manage your Nextcloud instance:
Click through to download any clients you are interested in, or exit out of the window by clicking the X in the upper-right corner. You will be taken to the main Nextcloud interface, where you can begin to upload and manage files:
Your installation is now complete and secured. Feel free to explore the interface to get more familiarity with the features and functionality of your new system.
Nextcloud can replicate the capabilities of popular third-party cloud storage services. Content can be shared between users or externally with public URLs. The advantage of Nextcloud is that the information is stored securely in a place that you control.
Explore the interface and for additional functionality, install plugins using Nextcloud’s app store.
TLS, or transport layer security, and its predecessor SSL, which stands for secure sockets layer, are web protocols used to wrap normal traffic in a protected, encrypted wrapper.
Using this technology, servers can send traffic safely between the server and clients without the possibility of the messages being intercepted by outside parties. The certificate system also assists users in verifying the identity of the sites that they are connecting with.
In this guide, we will show you how to set up a self-signed SSL certificate for use with an Nginx web server on a Debian 9 server.
Note: A self-signed certificate will encrypt communication between your server and any clients. However, because it is not signed by any of the trusted certificate authorities included with web browsers, users cannot use the certificate to validate the identity of your server automatically.
A self-signed certificate may be appropriate if you do not have a domain name associated with your server and for instances where the encrypted web interface is not user-facing. If you do have a domain name, in many cases it is better to use a CA-signed certificate. To learn how to set up a free trusted certificate with the Let’s Encrypt project, consult How to Secure Nginx with Let’s Encrypt on Debian 9.
Before you begin, you should have a non-root user configured with sudo privileges. You can learn how to set up such a user account by following our initial server setup for Debian 9.
You will also need to have the Nginx web server installed. If you would like to install an entire LEMP (Linux, Nginx, MySQL, PHP) stack on your server, you can follow our guide on setting up LEMP on Debian 9.
If you just want the Nginx web server, you can instead follow our guide on installing Nginx on Debian 9.
When you have completed the prerequisites, continue below.
TLS/SSL works by using a combination of a public certificate and a private key. The private key is kept secret on the server and is used to decrypt traffic from clients and to sign content. The certificate is publicly shared with anyone requesting the content; it contains the public key that clients use to encrypt traffic sent to the server and to verify content signed by the associated private key.
We can create a self-signed key and certificate pair with OpenSSL in a single command:
- sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt
You will be asked a series of questions. Before we go over that, let’s take a look at what is happening in the command we are issuing:
- openssl: the basic command-line tool for creating and managing OpenSSL certificates, keys, and other files.
- req: this subcommand specifies that we want to use X.509 certificate signing request (CSR) management.
- -x509: this further modifies the previous subcommand by telling the utility that we want to make a self-signed certificate instead of generating a certificate signing request.
- -nodes: this tells OpenSSL to skip the option to secure our certificate with a passphrase, so that Nginx can read the file without user intervention when the server starts up.
- -days 365: this option sets the length of time that the certificate will be considered valid. We set it for one year here.
- -newkey rsa:2048: this specifies that we want to generate a new certificate and a new key at the same time. The rsa:2048 portion tells it to make an RSA key that is 2048 bits long.
- -keyout: this tells OpenSSL where to place the generated private key file.
- -out: this tells OpenSSL where to place the certificate that we are creating.
As we stated above, these options will create both a key file and a certificate. We will be asked a few questions about our server in order to embed the information correctly in the certificate.
Fill out the prompts appropriately. The most important line is the one that requests the Common Name (e.g. server FQDN or YOUR name). You need to enter the domain name associated with your server or, more likely, your server’s public IP address.
The entirety of the prompts will look something like this:
OutputCountry Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.
Organizational Unit Name (eg, section) []:Ministry of Water Slides
Common Name (e.g. server FQDN or YOUR name) []:server_IP_address
Email Address []:admin@your_domain.com
Both of the files you created will be placed in the appropriate subdirectories of the /etc/ssl directory.
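If you’d prefer to script certificate creation rather than answering the prompts interactively, openssl accepts the same subject fields through its -subj option. The sketch below is illustrative only: the subject values are placeholders, and the key and certificate are written to a temporary directory instead of /etc/ssl so it can run without root privileges:

```shell
# Generate a self-signed certificate without interactive prompts.
# The -subj string supplies the same fields the prompts would ask for;
# every value here is a placeholder.
outdir=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$outdir/nginx-selfsigned.key" \
  -out "$outdir/nginx-selfsigned.crt" \
  -subj "/C=US/ST=New York/L=New York City/O=Bouncy Castles, Inc./CN=server_IP_address"

# Inspect the subject embedded in the certificate we just created
openssl x509 -in "$outdir/nginx-selfsigned.crt" -noout -subject
```

The same -keyout and -out paths from the tutorial command can be substituted once you are running as root.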
While we are using OpenSSL, we should also create a strong Diffie-Hellman group, which is used in negotiating Perfect Forward Secrecy with clients.
We can do this by typing:
- sudo openssl dhparam -out /etc/nginx/dhparam.pem 4096
This will take a while, but when it’s done you will have a strong DH group at /etc/nginx/dhparam.pem that we can use in our configuration.
We have created our key and certificate files under the /etc/ssl directory. Now we just need to modify our Nginx configuration to take advantage of these.
We will make a few adjustments to our configuration.
This method of configuring Nginx will allow us to keep clean server blocks and put common configuration segments into reusable modules.
First, let’s create a new Nginx configuration snippet in the /etc/nginx/snippets directory. To properly distinguish the purpose of this file, let’s call it self-signed.conf:
- sudo nano /etc/nginx/snippets/self-signed.conf
Within this file, we need to set the ssl_certificate directive to our certificate file and the ssl_certificate_key directive to the associated key. In our case, this will look like this:
ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
When you’ve added those lines, save and close the file.
Next, we will create another snippet that will define some SSL settings. This will set Nginx up with a strong SSL cipher suite and enable some advanced features that will help keep our server secure.
The parameters we will set can be reused in future Nginx configurations, so we will give the file a generic name:
- sudo nano /etc/nginx/snippets/ssl-params.conf
To set up Nginx SSL securely, we will be using the recommendations by Remy van Elst on the Cipherli.st site. This site is designed to provide easy-to-consume encryption settings for popular software.
The suggested settings on the site linked to above offer strong security. Sometimes, this comes at the cost of greater client compatibility. If you need to support older clients, there is an alternative list that can be accessed by clicking the link on the page labelled “Yes, give me a ciphersuite that works with legacy / old software.” That list can be substituted for the items copied below.
The choice of which config you use will depend largely on what you need to support. They both will provide great security.
For our purposes, we can copy the provided settings in their entirety. We just need to make a few small modifications.
First, we will add our preferred DNS resolver for upstream requests. We will use Google’s for this guide.
Second, we will comment out the line that sets the strict transport security header. Before uncommenting this line, you should take a moment to read up on HTTP Strict Transport Security, or HSTS, specifically about the “preload” functionality. Preloading HSTS provides increased security, but can have far-reaching consequences if accidentally enabled or enabled incorrectly.
Copy the following into your ssl-params.conf snippet file:
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
ssl_session_timeout 10m;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx => 1.3.7
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# Disable strict transport security for now. You can uncomment the following
# line if you understand the implications.
# add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
Because we are using a self-signed certificate, SSL stapling will not be used. Nginx will output a warning but continue to operate correctly.
Save and close the file when you are finished.
Now that we have our snippets, we can adjust our Nginx configuration to enable SSL.
We will assume in this guide that you are using a custom server block configuration file in the /etc/nginx/sites-available directory. We will use /etc/nginx/sites-available/example.com for this example. Substitute your configuration filename as needed.
Before we go any further, let’s back up our current configuration file:
- sudo cp /etc/nginx/sites-available/example.com /etc/nginx/sites-available/example.com.bak
Now, open the configuration file to make adjustments:
- sudo nano /etc/nginx/sites-available/example.com
Inside, your server block probably begins similar to this:
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
root /var/www/example.com/html;
index index.html index.htm index.nginx-debian.html;
. . .
}
Your file may be in a different order, and instead of the root and index directives you may have some location, proxy_pass, or other custom configuration statements. This is OK, as we only need to update the listen directives and include our SSL snippets. We will be modifying this existing server block to serve SSL traffic on port 443, then create a new server block to respond on port 80 and automatically redirect traffic to port 443.
Note: We will use a 302 redirect until we have verified that everything is working properly. Afterwards, we can change this to a permanent 301 redirect.
In your existing configuration file, update the two listen statements to use port 443 and SSL, then include the two snippet files we created in previous steps:
server {
listen 443 ssl;
listen [::]:443 ssl;
include snippets/self-signed.conf;
include snippets/ssl-params.conf;
server_name example.com www.example.com;
root /var/www/example.com/html;
index index.html index.htm index.nginx-debian.html;
. . .
}
Next, paste a second server block into the configuration file, after the closing bracket (}) of the first block:
. . .
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
return 302 https://$server_name$request_uri;
}
This is a bare-bones configuration that listens on port 80 and performs the redirect to HTTPS. Save and close the file when you are finished editing it.
Step 3 — Adjusting the Firewall
If you have the ufw firewall enabled, as recommended by the prerequisite guides, you’ll need to adjust the settings to allow for SSL traffic. Luckily, Nginx registers a few profiles with ufw upon installation.
We can see the available profiles by typing:
- sudo ufw app list
You should see a list like this:
OutputAvailable applications:
. . .
Nginx Full
Nginx HTTP
Nginx HTTPS
. . .
You can see the current setting by typing:
- sudo ufw status
It will probably look like this, meaning that only HTTP traffic is allowed to the web server:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
To additionally let in HTTPS traffic, we can allow the “Nginx Full” profile and then delete the redundant “Nginx HTTP” profile allowance:
- sudo ufw allow 'Nginx Full'
- sudo ufw delete allow 'Nginx HTTP'
Your status should look like this now:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx Full ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx Full (v6) ALLOW Anywhere (v6)
Now that we’ve made our changes and adjusted our firewall, we can restart Nginx to implement our new changes.
First, we should check to make sure that there are no syntax errors in our files. We can do this by typing:
- sudo nginx -t
If everything is successful, you will get a result that looks like this:
Outputnginx: [warn] "ssl_stapling" ignored, issuer certificate not found
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Notice the warning in the beginning. As noted earlier, this particular setting throws a warning since our self-signed certificate can’t use SSL stapling. This is expected and our server can still encrypt connections correctly.
If your output matches the above, your configuration file has no syntax errors. We can safely restart Nginx to implement our changes:
- sudo systemctl restart nginx
Now, we’re ready to test our SSL server.
Open your web browser and type https:// followed by your server’s domain name or IP into the address bar:
https://server_domain_or_IP
Because the certificate we created isn’t signed by one of your browser’s trusted certificate authorities, you will likely see a scary-looking warning like the one below (the following appears when using Google Chrome):
This is expected and normal. We are only interested in the encryption aspect of our certificate, not the third-party validation of our host’s authenticity. Click “ADVANCED” and then the link provided to proceed to your host anyway:
You should be taken to your site. If you look in the browser address bar, you will see a lock with an “x” over it. In this case, this just means that the certificate cannot be validated. It is still encrypting your connection.
If you configured Nginx with two server blocks, automatically redirecting HTTP content to HTTPS, you can also check whether the redirect functions correctly:
http://server_domain_or_IP
If this results in the same icon, this means that your redirect worked correctly.
If your redirect worked correctly and you are sure you want to allow only encrypted traffic, you should modify the Nginx configuration to make the redirect permanent.
Open your server block configuration file again:
- sudo nano /etc/nginx/sites-available/example.com
Find the return 302 directive and change it to return 301:
return 301 https://$server_name$request_uri;
Save and close the file.
Check your configuration for syntax errors:
- sudo nginx -t
When you’re ready, restart Nginx to make the redirect permanent:
- sudo systemctl restart nginx
You have configured your Nginx server to use strong encryption for client connections. This will allow you to serve requests securely and will prevent outside parties from reading your traffic.
Postfix is a mail transfer agent (MTA), an application used to send and receive email. In this tutorial, you will install and configure Postfix so that it can be used to send emails by local applications only — that is, those installed on the same server as Postfix.
Why would you want to do that?
If you’re already using a third-party email provider for sending and receiving emails, you do not need to run your own mail server. However, if you manage a cloud server on which you have installed applications that need to send email notifications, running a local, send-only SMTP server is a good alternative to using a third-party email service provider or running a full-blown SMTP server.
In this tutorial, you’ll install and configure Postfix as a send-only SMTP server on Debian 9.
Note: As of June 22, 2022, DigitalOcean is blocking SMTP for all new accounts. As a part of this new policy, we have partnered with SendGrid so our customers can still send emails with ease. You can learn more about this partnership and get started using SendGrid by checking out our DigitalOcean’s SendGrid Marketplace App.
To follow this tutorial, you will need:
One Debian 9 server, set up with the Debian 9 initial server setup tutorial, and a sudo non-root user.
A valid domain name, like example.com, pointing to your server. You can set that up by following these guidelines on managing DNS hosting on DigitalOcean.
Note that your server’s hostname should match your domain or subdomain. You can verify the server’s hostname by typing hostname at the command prompt. The output should match the name you gave the server when it was being created.
In this step, you’ll learn how to install Postfix. You will need two packages: mailutils, which includes programs necessary for Postfix to function, and postfix itself.
First, update the package database:
- sudo apt update
Next, install mailutils:
- sudo apt install mailutils
Finally, install postfix:
- sudo apt install postfix
Near the end of the installation process, you will be presented with a window that looks like the one in the image below. The default option is Internet Site. That’s the recommended option for this tutorial, so press TAB, then ENTER.
After that, you’ll get another window just like the one in the next image. The System mail name should be the same as the name you assigned to the server when you were creating it. If it shows a subdomain like subdomain.example.com, change it to just example.com. When you’ve finished, press TAB, then ENTER.
You now have Postfix installed and are ready to modify its configuration settings.
In this step, you’ll configure Postfix to process requests to send emails only from the server on which it is running, i.e. from localhost.
For that to happen, Postfix needs to be configured to listen only on the loopback interface, the virtual network interface that the server uses to communicate internally. To make the change, open the main Postfix configuration file using nano or your favorite text editor:
- sudo nano /etc/postfix/main.cf
With the file open, scroll down until you see the following section:
. . .
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
. . .
Change the line that reads inet_interfaces = all to inet_interfaces = loopback-only:
. . .
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = loopback-only
. . .
Another directive you’ll need to modify is mydestination, which is used to specify the list of domains that are delivered via the local_transport mail delivery transport. By default, the values are similar to these:
/etc/postfix/main.cf. . .
mydestination = $myhostname, example.com, localhost.com, , localhost
. . .
The recommended defaults for this directive are given in the code block below, so modify yours to match:
/etc/postfix/main.cf. . .
mydestination = $myhostname, localhost.$your_domain, $your_domain
. . .
Save and close the file.
Note: If you’re hosting multiple domains on a single server, the other domains can also be passed to Postfix using the mydestination directive. However, configuring Postfix in a manner that scales and that does not present issues for such a setup involves additional configuration that is beyond the scope of this article.
Finally, restart Postfix.
- sudo systemctl restart postfix
In this step, you’ll test whether Postfix can send emails to an external email account using the mail command, which is part of the mailutils package you installed in Step 1.
To send a test email, type:
- echo "This is the body of the email" | mail -s "This is the subject line" your_email_address
In performing your own test(s), you may use the body and subject line text as-is, or change them to your liking. However, in place of your_email_address, use a valid email address. The domain part can be gmail.com, fastmail.com, yahoo.com, or any other email service provider that you use.
Now check the email address where you sent the test message. You should see the message in your Inbox. If not, check your Spam folder.
Note that with this configuration, the address in the From field for the test emails you send will be sammy@example.com, where sammy is your Linux username and the domain is the server’s hostname. If you change your username, the From address will also change.
The last thing we want to set up is forwarding, so that emails sent to root on the system are delivered to your personal, external email address.
To configure Postfix so that system-generated emails will be sent to your email address, you need to edit the /etc/aliases file:
- sudo nano /etc/aliases
The full contents of the file on a default installation of Debian 9 are as follows:
mailer-daemon: postmaster
postmaster: root
nobody: root
hostmaster: root
usenet: root
news: root
webmaster: root
www: root
ftp: root
abuse: root
noc: root
security: root
The postmaster: root setting ensures that system-generated emails are sent to the root user. You want to edit these settings so these emails are rerouted to your email address. To accomplish that, edit the file so that it reads:
mailer-daemon: postmaster
postmaster: root
root: your_email_address
. . .
Replace your_email_address with your personal email address. When finished, save and close the file. For the change to take effect, run the following command:
- sudo newaliases
You can test that it works by sending an email to the root account using:
- echo "This is the body of the email" | mail -s "This is the subject line" root
You should receive the email at your email address. If not, check your Spam folder.
That’s all it takes to set up a send-only email server using Postfix. You may want to take some additional steps to protect your domain from spammers, however.
If you want to receive notifications from your server at a single address, then having emails marked as Spam is less of an issue because you can create a whitelist workaround. However, if you want to send emails to potential site users (such as confirmation emails for a message board sign-up), you should definitely set up SPF records and DKIM so your server’s emails are more likely to be seen as legitimate.
How To Use an SPF Record to Prevent Spoofing & Improve E-mail Reliability
How To Install and Configure DKIM with Postfix on Debian Wheezy.
If configured correctly, these steps make it difficult to send Spam with an address that appears to originate from your domain. Taking these additional configuration steps will also make it more likely for common mail providers to see emails from your server as legitimate.
Hadoop is a Java-based programming framework that supports the processing and storage of extremely large datasets on a cluster of inexpensive machines. It was the first major open source project in the big data playing field and is sponsored by the Apache Software Foundation.
Hadoop is composed of four main layers: Hadoop Common, the collection of utilities and libraries that support the other modules; HDFS, the Hadoop Distributed File System, which stores data across machines; YARN (Yet Another Resource Negotiator), which schedules and manages cluster resources; and MapReduce, Hadoop’s original processing model.
Hadoop clusters are relatively complex to set up, so the project includes a stand-alone mode which is suitable for learning about Hadoop, performing simple operations, and debugging.
In this tutorial, you’ll install Hadoop in stand-alone mode and run one of the example MapReduce programs it includes to verify the installation.
Before you begin, you might also like to take a look at An Introduction to Big Data Concepts and Terminology or An Introduction to Hadoop.
To follow this tutorial, you will need:
- A Debian 9 server with a non-root user with sudo privileges and a firewall, which you can set up by following the Initial Server Setup with Debian 9 tutorial.
- Java installed, with the JAVA_HOME environment variable set in /etc/environment, as shown in How to Install Java with Apt on Debian 9. Hadoop requires this variable to be set.
To install Hadoop, first visit the Apache Hadoop Releases page to find the most recent stable release.
Navigate to the binary for the release you’d like to install. In this guide, we’ll install Hadoop 3.0.3.
On the next page, right-click and copy the link to the release binary.
On your server, use wget to fetch it:
- wget http://www-us.apache.org/dist/hadoop/common/hadoop-3.0.3/hadoop-3.0.3.tar.gz
Note: The Apache website will direct you to the best mirror dynamically, so your URL may not match the URL above.
In order to ensure that the file you downloaded hasn’t been altered, do a quick check using SHA-256. Return to the releases page, then right-click and copy the link to the checksum file for the release binary you downloaded:
Again, use wget on your server to download the file:
- wget https://dist.apache.org/repos/dist/release/hadoop/common/hadoop-3.0.3/hadoop-3.0.3.tar.gz.mds
Then run the verification:
- sha256sum hadoop-3.0.3.tar.gz
Outputdb96e2c0d0d5352d8984892dfac4e27c0e682d98a497b7e04ee97c3e2019277a hadoop-3.0.3.tar.gz
Compare this value with the SHA-256 value in the .mds file:
- cat hadoop-3.0.3.tar.gz.mds | grep SHA256
...
SHA256 = DB96E2C0 D0D5352D 8984892D FAC4E27C 0E682D98 A497B7E0 4EE97C3E 2019277A
You can safely ignore the difference in case and the spaces. The output of the command you ran against the file you downloaded from the mirror should match the value in the file you downloaded from apache.org.
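The comparison can also be done mechanically: strip the spaces from the .mds-style digest and lowercase it, and the two strings become directly comparable. A minimal sketch using only the standard tr utility, with the digest values shown above:

```shell
# Digest as formatted in the .mds file (uppercase, space-separated groups)
mds_digest="DB96E2C0 D0D5352D 8984892D FAC4E27C 0E682D98 A497B7E0 4EE97C3E 2019277A"
# Digest as printed by sha256sum
local_digest="db96e2c0d0d5352d8984892dfac4e27c0e682d98a497b7e04ee97c3e2019277a"

# Remove spaces and lowercase the .mds value, then compare the strings
normalized=$(printf '%s' "$mds_digest" | tr -d ' ' | tr 'A-F' 'a-f')
if [ "$normalized" = "$local_digest" ]; then
  echo "Checksums match"       # prints: Checksums match
else
  echo "Checksums DO NOT match"
fi
```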
Now that you’ve verified that the file wasn’t corrupted or changed, use the tar command with the -x flag to extract, -z to uncompress, -v for verbose output, and -f to specify that you’re extracting the archive from a file. Use tab-completion or substitute the correct version number in the command below:
- tar -xzvf hadoop-3.0.3.tar.gz
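If you want to see how these tar flags behave before running them against the real download, here is a small sandbox run against a throwaway archive in a temporary directory (the demo file names are invented for the illustration):

```shell
# Build a tiny .tar.gz to practice on
workdir=$(mktemp -d)
cd "$workdir"
mkdir demo
echo "hello" > demo/file.txt
tar -czf demo.tar.gz demo    # -c create, -z gzip-compress, -f to this file
rm -rf demo

# Same flags as the Hadoop extraction: -x extract, -z uncompress,
# -v verbose (lists each file), -f read from the named archive
tar -xzvf demo.tar.gz
cat demo/file.txt            # prints: hello
```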
Finally, move the extracted files into /usr/local, the appropriate place for locally installed software. Change the version number, if needed, to match the version you downloaded.
- sudo mv hadoop-3.0.3 /usr/local/hadoop
With the software in place, we’re ready to configure its environment.
Let’s make sure Hadoop runs. Execute the following command to launch Hadoop and display its help options:
- /usr/local/hadoop/bin/hadoop
You’ll see the following output, which lets you know you’ve successfully configured Hadoop to run in stand-alone mode.
OutputUsage: hadoop [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
or hadoop [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
where CLASSNAME is a user-provided Java class
OPTIONS is none or any of:
--config dir Hadoop config directory
--debug turn on shell script debug mode
--help usage information
buildpaths attempt to add class files from build tree
hostnames list[,of,host,names] hosts to use in slave mode
hosts filename list of hosts to use in slave mode
loglevel level set the log4j level for this command
workers turn on worker mode
SUBCOMMAND is one of:
. . .
We’ll ensure that it is functioning properly by running the example MapReduce program it ships with. To do so, create a directory called input in your home directory and copy Hadoop’s configuration files into it to use those files as our data.
- mkdir ~/input
- cp /usr/local/hadoop/etc/hadoop/*.xml ~/input
Next, we’ll run the MapReduce hadoop-mapreduce-examples
program, a Java archive with several options. We’ll invoke its grep
program, one of the many examples included in hadoop-mapreduce-examples
, followed by the input directory, input
and the output directory grep_example
. The MapReduce grep program will count the matches of a literal word or regular expression. Finally, we’ll supply the regular expression allowed[.]*
to find occurrences of the word allowed
within or at the end of a declarative sentence. The expression is case-sensitive, so we wouldn’t find the word if it were capitalized at the beginning of a sentence.
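As an aside, you can preview what this regular expression matches using grep, whose -E mode accepts a compatible syntax (this is just an illustration, not a step in the tutorial):

```shell
# Illustration only: allowed[.]* matches "allowed" followed by zero or more
# literal periods, and the match is case-sensitive.
printf 'this is allowed.\nthis is allowed\nAllowed here\n' | grep -cE 'allowed[.]*'
```

The count printed is 2: both lowercase occurrences match, while the capitalized “Allowed” does not.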
Execute the following command:
- /usr/local/hadoop/bin/hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep ~/input ~/grep_example 'allowed[.]*'
When the task completes, it provides a summary of what has been processed and errors it has encountered, but this doesn’t contain the actual results:
Output . . .
File System Counters
FILE: Number of bytes read=1330690
FILE: Number of bytes written=3128841
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Map-Reduce Framework
Map input records=2
Map output records=2
Map output bytes=33
Map output materialized bytes=43
Input split bytes=115
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=43
Reduce input records=2
Reduce output records=2
Spilled Records=4
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=3
Total committed heap usage (bytes)=478150656
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=147
File Output Format Counters
Bytes Written=34
The results are stored in the ~/grep_example
directory.
If this output directory already exists, the program will fail, and rather than seeing the summary, you’ll see something like this:
Output . . .
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.hadoop.util.RunJar.run(RunJar.java:244)
at org.apache.hadoop.util.RunJar.main(RunJar.java:158)
Check the results by running cat
on the output directory:
- cat ~/grep_example/*
You’ll see this output:
Output19 allowed.
1 allowed
The MapReduce task found 19 occurrences of the word allowed
followed by a period and one occurrence where it was not. Running the example program has verified that our stand-alone installation is working properly and that non-privileged users on the system can run Hadoop for exploration or debugging.
In this tutorial, we’ve installed Hadoop in stand-alone mode and verified it by running an example program it provided. To learn how to write your own MapReduce programs, visit Apache Hadoop’s MapReduce tutorial which walks through the code behind the example you used in this tutorial. When you’re ready to set up a cluster, see the Apache Foundation Hadoop Cluster Setup guide.
Apache’s mod_rewrite
module lets you rewrite URLs in a cleaner fashion, translating human-readable paths into code-friendly query strings. It also lets you rewrite URLs based on conditions.
An .htaccess
file lets you create and apply rewrite rules without accessing server configuration files. By placing the .htaccess
file in the root of your web site, you can manage rewrites on a per-site or per-directory basis.
In this tutorial, you’ll enable mod_rewrite
and use .htaccess
files to create a basic URL redirection, and then explore a couple of advanced use cases.
To follow this tutorial, you will need:
One Debian 9 server set up by following the Debian 9 initial server setup guide, including a sudo non-root user and a firewall.
Apache installed by following Steps 1 and 2 of How To Install the Apache Web Server on Debian 9.
In order for Apache to understand rewrite rules, we first need to activate mod_rewrite
. It’s already installed, but it’s disabled on a default Apache installation. Use the a2enmod
command to enable the module:
- sudo a2enmod rewrite
This will activate the module or alert you that the module is already enabled. To put these changes into effect, restart Apache:
- sudo systemctl restart apache2
mod_rewrite
is now fully enabled. In the next step we will set up an .htaccess
file that we’ll use to define rewrite rules for redirects.
An .htaccess
file allows us to modify our rewrite rules without accessing server configuration files. For this reason, .htaccess
is critical to your web application’s security. The period that precedes the filename ensures that the file is hidden.
Note: Any rules that you can put in an .htaccess
file can also be put directly into server configuration files. In fact, the official Apache documentation recommends using server configuration files instead of .htaccess
thanks to faster processing times.
However, in this simple example, the performance increase will be negligible. Additionally, setting rules in .htaccess
is convenient, especially with multiple websites on the same server. It does not require a server restart for changes to take effect or root privileges to edit rules, simplifying maintenance and the process of making changes with an unprivileged account. Popular open-source software like WordPress and Joomla relies on .htaccess
files to make modifications and additional rules on demand.
Before you start using .htaccess
files, you’ll need to set up and secure a few more settings.
By default, Apache prohibits using an .htaccess
file to apply rewrite rules, so first you need to allow changes to the file. Open the default Apache configuration file using nano
or your favorite text editor:
- sudo nano /etc/apache2/sites-available/000-default.conf
Inside that file, you will find a <VirtualHost *:80>
block starting on the first line. Inside of that block, add the following new block so your configuration file looks like the following. Make sure that all blocks are properly indented.
<VirtualHost *:80>
<Directory /var/www/html>
Options Indexes FollowSymLinks
AllowOverride All
Require all granted
</Directory>
. . .
</VirtualHost>
Save and close the file.
Check your configuration:
- sudo apache2ctl configtest
If there are no errors, restart Apache to put your changes into effect:
- sudo systemctl restart apache2
Now, create an .htaccess
file in the web root:
- sudo nano /var/www/html/.htaccess
Add this line at the top of the new file to activate the rewrite engine.
RewriteEngine on
Save the file and exit.
You now have an operational .htaccess
file that you can use to govern your web application’s routing rules. In the next step, we will create sample website files that we’ll use to demonstrate rewrite rules.
Here, we will set up a basic URL rewrite which converts pretty URLs into actual paths to pages. Specifically, we will allow users to access http://your_server_ip/about
, and display a page called about.html
.
Begin by creating a file named about.html
in the web root:
- sudo nano /var/www/html/about.html
Copy the following HTML code into the file, then save and close it.
<html>
<head>
<title>About Us</title>
</head>
<body>
<h1>About Us</h1>
</body>
</html>
You can access this page at http://your_server_ip/about.html
, but notice that if you try to access http://your_server_ip/about
, you will see a 404 Not Found error. To access the page using /about
instead, we’ll create a rewrite rule.
All RewriteRules
follow this format:
RewriteRule pattern substitution [flags]
RewriteRule specifies the directive.
pattern is a regular expression that matches the desired string from the URL, which is what the viewer types in the browser.
substitution is the path to the actual URL, i.e. the path of the file Apache serves.
flags are optional parameters that can modify how the rule works.
Let’s create our URL rewrite rule. Open up the .htaccess
file:
- sudo nano /var/www/html/.htaccess
After the first line, add the following RewriteRule
and save the file:
RewriteEngine on
RewriteRule ^about$ about.html [NC]
In this case, ^about$
is the pattern, about.html
is the substitution, and [NC]
is a flag. Our example uses a few characters with special meaning:
^ indicates the start of the URL, after your_server_ip/.
$ indicates the end of the URL.
about matches the string “about”.
about.html is the actual file that the user accesses.
[NC] is a flag that makes the rule case insensitive.
You can now access http://your_server_ip/about in your browser. In fact, with the rule shown above, the following URLs will also point to about.html:
http://your_server_ip/about, because of the rule definition.
http://your_server_ip/About, because the rule is case insensitive.
http://your_server_ip/about.html, because the original filename will always work.
However, the following will not work:
http://your_server_ip/about/, because the rule explicitly states that there may be nothing after about, since the $ character appears after about.
http://your_server_ip/contact, because it won’t match the about string in the rule.
You now have an operational .htaccess
file with a basic rule that you can modify and extend to your needs. In the following sections, we will show two additional examples of commonly used directives.
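The matching outcomes described above can be emulated outside Apache with grep, where the -i flag plays the role of [NC] (a sketch for illustration only; Apache, not grep, performs the real matching):

```shell
# Sketch: check each hypothetical request path against ^about$,
# with grep -i standing in for the case-insensitive [NC] flag.
for path in about About about/ contact; do
  if printf '%s\n' "$path" | grep -qiE '^about$'; then
    echo "$path -> rewritten to about.html"
  else
    echo "$path -> not rewritten"
  fi
done
```

Only the exact strings about and About are rewritten; the anchors reject anything with extra characters before or after.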
Web applications often make use of query strings, which are appended to a URL using a question mark (?
) after the address. Separate parameters are delimited using an ampersand (&
). Query strings may be used for passing additional data between individual application pages.
For example, a search result page written in PHP may use a URL like http://example.com/results.php?item=shirt&season=summer
. In this example, two additional parameters are passed to the imaginary result.php
application script: item
, with the value shirt
, and season
with the value summer
. The application may use the query string information to build the right page for the visitor.
Apache rewrite rules are often employed to simplify such long and unpleasant links as the example above into friendly URLs that are easier to type and interpret visually. In this example, we would like to simplify the above link to become http://example.com/shirt/summer
. The shirt
and summer
parameter values are still in the address, but without the query string and script name.
Here’s one rule to implement this:
RewriteRule ^shirt/summer$ results.php?item=shirt&season=summer [QSA]
The shirt/summer
is explicitly matched in the requested address and Apache is told to serve results.php?item=shirt&season=summer
instead.
The [QSA]
flag is commonly used in rewrite rules. It tells Apache to append any additional query string to the served URL, so if the visitor types http://example.com/shirt/summer?page=2
the server will respond with results.php?item=shirt&season=summer&page=2
. Without it, the additional query string would get discarded.
While this method achieves the desired effect, both the item name and season are hardcoded into the rule. This means the rule will not work for any other items, like pants
, or seasons, like winter
.
To make the rule more generic, we can use regular expressions to match parts of the original address and use those parts in a substitution pattern. The modified rule will then look like this:
RewriteRule ^([A-Za-z0-9]+)/(summer|winter|fall|spring) results.php?item=$1&season=$2 [QSA]
The first regular expression group in parentheses matches a string of alphanumeric characters like shirt
or pants
and saves the matched fragment as the $1
variable. The second regular expression group in parentheses matches exactly summer
, winter
, fall
, or spring
, and similarly saves the matched fragment as $2
.
The matched fragments are then used in the resulting URL in item
and season
variables instead of the hardcoded shirt
and summer
values we used before.
The above will convert, for example, http://example.com/pants/summer
into http://example.com/results.php?item=pants&season=summer
. This example is also future proof, allowing multiple items and seasons to be correctly rewritten using a single rule.
Rewrite rules are not necessarily always evaluated one by one without any limitations. The RewriteCond
directive lets us add conditions to our rewrite rules to control when the rules will be processed. All RewriteConds
abide by the following format:
RewriteCond TestString Condition [Flags]
RewriteCond specifies the RewriteCond directive.
TestString is the string to test against.
Condition is the pattern or condition to match.
Flags are optional parameters that may modify the condition and evaluation rules.
If a RewriteCond
evaluates to true, the next RewriteRule
will be considered. If it doesn’t, the rule will be discarded. Multiple RewriteConds
may be used one after another, though all must evaluate to true for the next rule to be considered.
As an example, let’s assume you would like to redirect all requests to non-existent files or directories on your site back to the home page instead of showing the standard 404 Not Found error page. This can be achieved with the following conditions and rule:
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /
With the above:
%{REQUEST_FILENAME} is the string to check. In this case, it’s the requested filename, which is a system variable available for every request.
-f is a built-in condition which verifies if the requested name exists on disk and is a file. The ! is a negation operator. Combined, !-f evaluates to true only if a specified name does not exist or is not a file.
!-d evaluates to true only if a specified name does not exist or is not a directory.
The RewriteRule
on the final line will come into effect only for requests to non-existent files or directories. The RewriteRule
itself is very simple and redirects every request to the /
website root.
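The logic of these conditions can be mimicked with the shell’s own -f and -d file tests (a sketch with made-up file names, not part of the Apache configuration):

```shell
# Sketch: a request falls through to the rewrite only when the name is
# neither an existing file (-f) nor an existing directory (-d).
mkdir -p site && touch site/index.html
for name in site/index.html site site/missing.html; do
  if [ ! -f "$name" ] && [ ! -d "$name" ]; then
    echo "$name -> rewritten to /"
  else
    echo "$name -> served as-is"
  fi
done
```

Existing files and directories are served as-is; only the missing name would trigger the redirect to the site root.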
mod_rewrite
lets you create human-readable URLs. In this tutorial, you learned how to use the RewriteRule
directive to redirect URLs, including ones with query strings. You also learned how to conditionally redirect URLs using the RewriteCond
directive.
If you’d like to learn more about mod_rewrite
, take a look at Apache’s mod_rewrite Introduction and Apache’s official documentation for mod_rewrite.
Composer is a popular dependency management tool for PHP, created mainly to facilitate installation and updates for project dependencies. It will check which other packages a specific project depends on and install them for you, using the appropriate versions according to the project requirements.
In this tutorial, you’ll install and get started with Composer on Debian 9.
To complete this tutorial, you will need:
A Debian 9 server set up with a non-root user with sudo access and a firewall.
Before you download and install Composer, ensure your server has all dependencies installed.
First, update the package manager cache by running:
- sudo apt update
Now, let’s install the dependencies. We’ll need curl
in order to download Composer and php-cli
for installing and running it. The php-mbstring
package is necessary to provide functions for a library we’ll be using. git
is used by Composer for downloading project dependencies, and unzip
for extracting zipped packages. Everything can be installed with the following command:
- sudo apt install curl php-cli php-mbstring git unzip
With the prerequisites installed, we can install Composer itself.
Composer provides an installer, written in PHP. We’ll download it, verify that it’s not corrupted, and then use it to install Composer.
Make sure you’re in your home directory, then retrieve the installer using curl
:
- cd ~
- curl -sS https://getcomposer.org/installer -o composer-setup.php
Next, verify that the installer matches the SHA-384 hash for the latest installer found on the Composer Public Keys / Signatures page. Copy the hash from that page and store it as a shell variable:
- HASH=544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061
Make sure that you substitute the latest hash for the highlighted value.
Now execute the following PHP script to verify that the installation script is safe to run:
- php -r "if (hash_file('SHA384', 'composer-setup.php') === '$HASH') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
You’ll see the following output.
Installer verified
If you see Installer corrupt
, then you’ll need to download the installation script again and double-check that you’re using the correct hash. Then run the verification command again. Once you have a verified installer, you can continue.
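If you prefer to stay in the shell, the comparison logic can be sketched with coreutils’ sha384sum (assuming it is installed; the file and hash below are stand-ins for composer-setup.php and the published hash, not the real installer):

```shell
# Sketch only: sample.txt and its computed hash stand in for
# composer-setup.php and the hash published on the Composer site.
printf 'example installer contents' > sample.txt
HASH=$(sha384sum sample.txt | awk '{print $1}')
if [ "$(sha384sum sample.txt | awk '{print $1}')" = "$HASH" ]; then
  echo 'Installer verified'
else
  echo 'Installer corrupt'
fi
```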
To install composer
globally, use the following command which will download and install Composer as a system-wide command named composer
, under /usr/local/bin
:
- sudo php composer-setup.php --install-dir=/usr/local/bin --filename=composer
You’ll see the following output:
OutputAll settings correct for using Composer
Downloading...
Composer (version 1.7.2) successfully installed to: /usr/local/bin/composer
Use it: php /usr/local/bin/composer
To test your installation, run:
- composer
And you’ll see this output displaying Composer’s version and arguments.
Output ______
/ ____/___ ____ ___ ____ ____ ________ _____
/ / / __ \/ __ `__ \/ __ \/ __ \/ ___/ _ \/ ___/
/ /___/ /_/ / / / / / / /_/ / /_/ (__ ) __/ /
\____/\____/_/ /_/ /_/ .___/\____/____/\___/_/
/_/
Composer version 1.7.2 2018-08-16 16:57:12
Usage:
command [options] [arguments]
Options:
-h, --help Display this help message
-q, --quiet Do not output any message
-V, --version Display this application version
--ansi Force ANSI output
--no-ansi Disable ANSI output
-n, --no-interaction Do not ask any interactive question
--profile Display timing and memory usage information
--no-plugins Whether to disable plugins.
-d, --working-dir=WORKING-DIR If specified, use the given directory as working directory.
-v|vv|vvv, --verbose Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
. . .
This verifies that Composer installed successfully on your system and is available system-wide.
Note: If you prefer to have separate Composer executables for each project you host on this server, you can install it locally, on a per-project basis. Users of NPM will be familiar with this approach. This method is also useful when your system user doesn’t have permission to install software system-wide.
To do this, use the command php composer-setup.php
. This will generate a composer.phar
file in your current directory, which can be executed with ./composer.phar command
.
Now let’s look at using Composer to manage dependencies.
PHP projects often depend on external libraries, and managing those dependencies and their versions can be tricky. Composer solves that by tracking your dependencies and making it easy for others to install them.
In order to use Composer in your project, you’ll need a composer.json
file. The composer.json
file tells Composer which dependencies it needs to download for your project, and which versions of each package are allowed to be installed. This is extremely important to keep your project consistent and avoid installing unstable versions that could potentially cause backwards compatibility issues.
You don’t need to create this file manually - it’s easy to run into syntax errors when you do so. Composer auto-generates the composer.json
file when you add a dependency to your project using the require
command. You can add additional dependencies in the same way, without the need to manually edit this file.
The process of using Composer to install a package as a dependency in a project involves running composer require to include the dependency in the composer.json file and install the package.
Let’s try this out with a demo application.
The goal of this application is to transform a given sentence into a URL-friendly string - a slug. This is commonly used to convert page titles to URL paths (like the final portion of the URL for this tutorial).
Let’s start by creating a directory for our project. We’ll call it slugify:
- cd ~
- mkdir slugify
- cd slugify
Now it’s time to search Packagist.org for a package that can help us generate slugs. If you search for the term “slug” on Packagist, you’ll get a result similar to this:
You’ll see two numbers on the right side of each package in the list. The number on the top represents how many times the package was installed, and the number on the bottom shows how many times a package was starred on GitHub. You can reorder the search results based on these numbers (look for the two icons on the right side of the search bar). Generally speaking, packages with more installations and more stars tend to be more stable, since so many people are using them. It’s also important to check the package description for relevance to make sure it’s what you need.
We need a simple string-to-slug converter. From the search results, the package cocur/slugify
seems to be a good match, with a reasonable amount of installations and stars. (The package is a bit further down the page than the screenshot shows.)
Packages on Packagist have a vendor name and a package name. Each package has a unique identifier (a namespace) in the same format GitHub uses for its repositories, in the form vendor/package
. The library we want to install uses the namespace cocur/slugify
. You need the namespace in order to require the package in your project.
Now that you know exactly which package you want to install, run composer require
to include it as a dependency and also generate the composer.json
file for the project:
- composer require cocur/slugify
You’ll see this output as Composer downloads the dependency:
OutputUsing version ^3.1 for cocur/slugify
./composer.json has been created
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 1 install, 0 updates, 0 removals
- Installing cocur/slugify (v3.1): Downloading (100%)
Writing lock file
Generating autoload files
As you can see from the output, Composer automatically decided which version of the package to use. If you check your project’s directory now, it will contain two new files: composer.json
and composer.lock
, and a vendor
directory:
- ls -l
Outputtotal 12
-rw-r--r-- 1 sammy sammy 59 Sep 7 16:03 composer.json
-rw-r--r-- 1 sammy sammy 2934 Sep 7 16:03 composer.lock
drwxr-xr-x 4 sammy sammy 4096 Sep 7 16:03 vendor
The composer.lock
file is used to store information about which versions of each package are installed, and ensure the same versions are used if someone else clones your project and installs its dependencies. The vendor
directory is where the project dependencies are located. The vendor
folder doesn’t need to be committed into version control - you only need to include the composer.json and composer.lock files.
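If the project is tracked with Git (an assumption; Git was installed as a prerequisite but isn’t otherwise used in these steps), that convention might look like this:

```shell
# Sketch: ignore the vendor directory; composer.json and composer.lock
# are the files you commit to version control.
mkdir -p slugify-demo && cd slugify-demo
echo 'vendor/' >> .gitignore
cat .gitignore
```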
When installing a project that already contains a composer.json
file, run composer install
in order to download the project’s dependencies.
Let’s take a quick look at version constraints. If you check the contents of your composer.json
file, you’ll see something like this:
- cat composer.json
Output{
"require": {
"cocur/slugify": "^3.1"
}
}
You might notice the special character ^
before the version number in composer.json
. Composer supports several different constraints and formats for defining the required package version, in order to provide flexibility while also keeping your project stable. The caret (^
) operator used by the auto-generated composer.json
file is the recommended operator for maximum interoperability, following semantic versioning. In this case, it defines 3.1 as the minimum compatible version, and allows updates to any future version below 4.0.
Generally speaking, you won’t need to tamper with version constraints in your composer.json
file. However, some situations might require that you manually edit the constraints–for instance, when a major new version of your required library is released and you want to upgrade, or when the library you want to use doesn’t follow semantic versioning.
Here are some examples to give you a better understanding of how Composer version constraints work:
Constraint | Meaning | Example Versions Allowed |
---|---|---|
^1.0 | >= 1.0 < 2.0 | 1.0, 1.2.3, 1.9.9 |
^1.1.0 | >= 1.1.0 < 2.0 | 1.1.0, 1.5.6, 1.9.9 |
~1.0 | >= 1.0 < 2.0.0 | 1.0, 1.4.1, 1.9.9 |
~1.0.0 | >= 1.0.0 < 1.1 | 1.0.0, 1.0.4, 1.0.9 |
1.2.1 | 1.2.1 | 1.2.1 |
1.* | >= 1.0 < 2.0 | 1.0.0, 1.4.5, 1.9.9 |
1.2.* | >= 1.2 < 1.3 | 1.2.0, 1.2.3, 1.2.9 |
For a more in-depth view of Composer version constraints, see the official documentation.
Next, let’s look at how to load dependencies automatically with Composer.
Since PHP itself doesn’t automatically load classes, Composer provides an autoload script that you can include in your project to get autoloading for free. This makes it much easier to work with your dependencies.
The only thing you need to do is include the vendor/autoload.php
file in your PHP scripts before any class instantiation. This file is automatically generated by Composer when you add your first dependency.
Let’s try it out in our application. Create the file test.php
and open it in your text editor:
- nano test.php
Add the following code which brings in the vendor/autoload.php
file, loads the cocur/slugify
dependency, and uses it to create a slug:
<?php require __DIR__ . '/vendor/autoload.php';
use Cocur\Slugify\Slugify;
$slugify = new Slugify();
echo $slugify->slugify('Hello World, this is a long sentence and I need to make a slug from it!');
Save the file and exit your editor.
Now run the script:
- php test.php
This produces the output hello-world-this-is-a-long-sentence-and-i-need-to-make-a-slug-from-it
.
Dependencies need updates when new versions come out, so let’s look at how to handle that.
Whenever you want to update your project dependencies to more recent versions, run the update
command:
- composer update
This will check for newer versions of the libraries you required in your project. If a newer version is found and it’s compatible with the version constraint defined in the composer.json
file, Composer will replace the previous version installed. The composer.lock
file will be updated to reflect these changes.
You can also update one or more specific libraries by specifying them like this:
- composer update vendor/package vendor2/package2
Be sure to check in your composer.json
and composer.lock
files after you update your dependencies so that others can install these newer versions.
Composer is a powerful tool every PHP developer should have in their utility belt. In this tutorial you installed Composer on Debian 9 and used it in a simple project. You now know how to install and update dependencies.
Beyond providing an easy and reliable way for managing project dependencies, it also establishes a new de facto standard for sharing and discovering PHP packages created by the community.
ownCloud is an open-source file sharing server and collaboration platform that can store your personal content, like documents and pictures, in a centralized location. This allows you to take control of your content and security by not relying on third-party content hosting services like Dropbox.
In this tutorial, we will install and configure an ownCloud instance on a Debian 9 server.
In order to complete the steps in this guide, you will need the following:
A non-root user with sudo privileges and a basic firewall, set up by following the Debian 9 initial server setup guide.
The ownCloud server package does not exist within the default repositories for Debian. However, ownCloud maintains a dedicated repository for the distribution that we can add to our server.
To begin, let’s install a few components to help us add the ownCloud repositories. The apt-transport-https
package allows us to use deb https:// entries in our apt sources list to reference external repositories served over HTTPS:
- sudo apt update
- sudo apt install curl apt-transport-https
Next, download the ownCloud release key using the curl
command and import it with the apt-key
utility with the add
command:
- curl https://download.owncloud.org/download/repositories/production/Debian_9.0/Release.key | sudo apt-key add -
The ‘Release.key’ file contains a PGP (Pretty Good Privacy) public key which apt
will use to verify that the ownCloud package is authentic.
In addition to importing the key, create a file called owncloud.list
in the sources.list.d
directory for apt
. The file will contain the address to the ownCloud repository.
- echo 'deb http://download.owncloud.org/download/repositories/production/Debian_9.0/ /' | sudo tee /etc/apt/sources.list.d/owncloud.list
Now, we can use the package manager to find and install ownCloud. Along with the main package, we will also install a few additional PHP libraries that ownCloud uses to add extra functionality. Update your local package index and install everything by typing:
- sudo apt update
- sudo apt install php-bz2 php-curl php-gd php-imagick php-intl php-mbstring php-xml php-zip owncloud-files
Everything we need is now installed on the server, so next we can finish the configuration and we can begin using the service.
The ownCloud package we installed copies the web files to /var/www/owncloud
on the server. Currently, the Apache virtual host configuration is set up to serve files out of a different directory. We need to change the DocumentRoot
setting in our configuration to point to the new directory.
You can find which virtual host files reference your domain name or IP address using the apache2ctl
utility with the DUMP_VHOSTS
option. Filter the output by your server’s domain name or IP address to find which files you need to edit in the next few commands:
- sudo apache2ctl -t -D DUMP_VHOSTS | grep server_domain_or_IP
The output will probably look something like this:
Output*:443 server_domain_or_IP (/etc/apache2/sites-enabled/server_domain_or_IP-le-ssl.conf:2)
port 80 namevhost server_domain_or_IP (/etc/apache2/sites-enabled/server_domain_or_IP.conf:1)
In the parentheses, you can see each of the files that reference the domain name or IP address we’ll use to access ownCloud. These are the files you’ll need to edit.
For each match, open the file in a text editor with sudo
privileges:
- sudo nano /etc/apache2/sites-enabled/server_domain_or_IP.conf
Inside, search for the DocumentRoot
directive. Change the line so that it points to the /var/www/owncloud
directory:
<VirtualHost *:80>
. . .
DocumentRoot /var/www/owncloud
. . .
</VirtualHost>
Save and close the file when you are finished. Complete this process for each of the files that referenced your domain name (or IP address if you did not configure a domain for your server).
When you are finished, check the syntax of your Apache files to make sure there were no detectable typos in your configuration:
- sudo apache2ctl configtest
OutputSyntax OK
Depending on your configuration, you may see a warning about setting ServerName
globally. As long as the output ends with Syntax OK
, you can ignore that warning. If you see additional errors, go back and check the files you just edited for mistakes.
If your syntax check passed, reload the Apache service to activate the new changes:
- sudo systemctl reload apache2
Apache should now know how to serve your ownCloud files.
Before we move on to the web configuration, we need to set up the database. During the web-based configuration process, we will need to provide a database name, a database username, and a database password so that ownCloud can connect and manage its information within MySQL.
Begin by logging into your database with the MySQL administrative account:
- sudo mysql
If you set up password authentication for a MySQL administrative account, you may have to use this syntax instead:
- mysql -u admin -p
Create a dedicated database for ownCloud to use. We will name the database owncloud
for clarity:
- CREATE DATABASE owncloud;
Note: Every MySQL statement must end with a semi-colon (;). Be sure to check that this is present if you are experiencing an issue.
Next, create a separate MySQL user account to manage the newly created database. Creating one-function databases and accounts is a good idea from a management and security standpoint. As with the naming of the database, choose a username that you prefer. We elected to go with the name owncloud
in this guide.
- GRANT ALL ON owncloud.* to 'owncloud'@'localhost' IDENTIFIED BY 'owncloud_database_password';
Warning: Be sure to put an actual password where the command states: owncloud_database_password
With the user assigned access to the database, perform the flush privileges operation to ensure that the running instance of MySQL knows about the recent privilege assignment:
- FLUSH PRIVILEGES;
You can now exit the MySQL session by typing:
- exit
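Before moving on, you can optionally confirm that the new account can reach the database. This assumes the owncloud username created above; you will be prompted for the password you chose:

```shell
mysql -u owncloud -p -e "SHOW DATABASES;"
```

The output should include the owncloud database.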
With the ownCloud server installed and the database set up, we are ready to turn our attention to configuring the ownCloud application.
To access the ownCloud web interface, open a web browser and navigate to the following address:
https://server_domain_or_IP
Note: If you are using a self-signed SSL certificate, you will likely be presented with a warning because the certificate is not signed by one of your browser’s trusted authorities. This is expected and normal. Click the appropriate button or link to proceed to the ownCloud admin page.
You should see the ownCloud web configuration page in your browser.
Create an admin account by choosing a username and a password. For security purposes it is not recommended to use something like “admin” for the username:
Next, leave the Data folder setting as-is and scroll down to the database configuration section.
Fill out the details of the database name, database username, and database password you created in the previous section. If you used the settings from this guide, both the database name and username will be owncloud
. Leave the database host as localhost
:
Click the Finish setup button to finish configuring ownCloud using the information you’ve provided. You will be taken to a login screen where you can sign in using your new account:
On your first login, a screen will appear where you can download applications to sync your files on various devices. You can download and configure these now or do it at a later time. When you are finished, click the x in the top-right corner of the splash screen to access the main interface:
Here, you can create or upload files to your personal cloud.
ownCloud can replicate the capabilities of popular third-party cloud storage services. Content can be shared between users or externally with public URLs. The advantage of ownCloud is that the information is stored in a place that you control and manage without a third party.
Explore the interface, and for additional functionality, install plugins using ownCloud’s app store.
WordPress is the most popular CMS (content management system) on the internet. It allows you to easily set up flexible blogs and websites on top of a MariaDB backend with PHP processing. WordPress has seen incredible adoption and is a great choice for getting a website up and running quickly. After setup, almost all administration can be done through the web frontend.
In this guide, we’ll focus on getting a WordPress instance set up on a LAMP stack (Linux, Apache, MariaDB, and PHP) on a Debian 9 server.
In order to complete this tutorial, you will need access to a Debian 9 server.
You will need to perform the following tasks before you can start this guide:
- A sudo user on your server: We will be completing the steps in this guide using a non-root user with sudo privileges. You can create a user with sudo privileges by following our Debian 9 initial server setup guide.
When you are finished with the setup steps, log in to your server as your sudo user and continue below.
The first step that we will take is a preparatory one. WordPress uses MySQL to manage and store site and user information. We have MariaDB — a drop-in replacement for MySQL — installed already, but we need to make a database and a user for WordPress to use.
To get started, open up the MariaDB prompt as the root account:
- sudo mariadb
Note: If you set up another account with administrative privileges when you installed and set up MariaDB, you can also log in as that user. You’ll need to do so with the following command:
- mariadb -u username -p
After issuing this command, MariaDB will prompt you for the password you set for that account.
Begin by creating a new database that WordPress will control. You can call this whatever you would like but, to keep it simple for this guide, we will name it wordpress.
Create the database for WordPress by typing:
- CREATE DATABASE wordpress DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;
Note that every MySQL statement must end in a semi-colon (;
). Check to make sure this is present if you are running into any issues.
Next, create a separate MySQL user account that we will use exclusively to operate on our new database. Creating single-function databases and accounts is a good idea from a management and security standpoint. We will use the name wordpressuser in this guide, but feel free to change this if you’d like.
Create this account, set a password, and grant the user access to the database you just created with the following command. Remember to choose a strong password for your database user:
- GRANT ALL ON wordpress.* TO 'wordpressuser'@'localhost' IDENTIFIED BY 'password';
You now have a database and user account, each made specifically for WordPress. Run the following command to reload the grant tables so that the current instance of MariaDB knows about the changes you’ve made:
- FLUSH PRIVILEGES;
Exit out of MariaDB by typing:
- EXIT;
Now that you’ve configured the database and user that will be used by WordPress, you can move on to installing some PHP-related packages used by the CMS.
When setting up our LAMP stack, we only required a very minimal set of extensions in order to get PHP to communicate with MariaDB. WordPress and many of its plugins leverage additional PHP extensions.
Download and install some of the most popular PHP extensions for use with WordPress by typing:
- sudo apt update
- sudo apt install php-curl php-gd php-mbstring php-xml php-xmlrpc php-soap php-intl php-zip
Note: Each WordPress plugin has its own set of requirements. Some may require additional PHP packages to be installed. Check your plugin documentation to find its PHP requirements. If they are available, they can be installed with apt
as demonstrated above.
We will restart Apache to load these new extensions in the next section. If you are returning here to install additional plugins, you can restart Apache now by typing:
- sudo systemctl restart apache2
At this point, all that’s left to do before installing WordPress is to make some changes to your Apache configuration in order to allow the CMS to function smoothly.
With the additional PHP extensions installed and ready for use, the next thing to do is to make a few changes to your Apache configuration. Based on the prerequisite tutorials, you should have a configuration file for your site in the /etc/apache2/sites-available/
directory. We’ll use /etc/apache2/sites-available/wordpress.conf
as an example here, but you should substitute the path to your configuration file where appropriate.
Additionally, we will use /var/www/wordpress
as the root directory of our WordPress install. You should use the web root specified in your own configuration.
Note: It’s possible you are using the 000-default.conf
default configuration (with /var/www/html
as your web root). This is fine to use if you’re only going to host one website on this server. If not, it’s best to split the necessary configuration into logical chunks, one file per site.
Currently, the use of .htaccess
files is disabled. WordPress and many WordPress plugins use these files extensively for in-directory tweaks to the web server’s behavior.
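For reference, the permalink rules that WordPress itself writes into .htaccess look like the following (WordPress generates this block automatically once overrides are allowed; you do not need to create it by hand):

```apache
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```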
Open the Apache configuration file for your website. Note that if you have an existing Apache configuration file for your website, this file’s name will be different:
- sudo nano /etc/apache2/sites-available/wordpress.conf
To allow .htaccess
files, you’ll need to add a Directory
block pointing to your document root with an AllowOverride
directive within it. Add the following block of text inside the VirtualHost
block in your configuration file, being sure to use the correct web root directory:
<Directory /var/www/wordpress/>
AllowOverride All
</Directory>
When you are finished, save and close the file.
Next, enable the rewrite
module in order to utilize the WordPress permalink feature:
- sudo a2enmod rewrite
Before implementing the changes you’ve made, check to make sure that you haven’t made any syntax errors:
- sudo apache2ctl configtest
If your configuration file’s syntax is correct, you’ll see the following in your output:
OutputSyntax OK
If this command reports any errors, go back and check that you haven’t made any syntax errors in your configuration file. Otherwise, restart Apache to implement the changes:
- sudo systemctl restart apache2
Next, we will download and set up WordPress itself.
Now that your server software is configured, you can download and set up WordPress. For security reasons in particular, it is always recommended to get the latest version of WordPress directly from their site.
Note: We will use curl
to download WordPress, but this program may not be installed by default on your Debian server. To install it, run:
- sudo apt install curl
Change into a writable directory and then download the compressed release by typing:
- cd /tmp
- curl -O https://wordpress.org/latest.tar.gz
Extract the compressed file to create the WordPress directory structure:
- tar xzvf latest.tar.gz
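If the tar flags are unfamiliar: x extracts, z filters the archive through gzip, v lists files as they are processed, and f names the archive file. A throwaway round-trip in a temporary directory illustrates this (file names here are illustrative only):

```shell
# Build a tiny gzipped tarball, then extract it with the same flags as above.
work=$(mktemp -d)
cd "$work"
mkdir wordpress
echo '<?php' > wordpress/index.php
tar czf latest.tar.gz wordpress   # c = create the archive
rm -r wordpress
tar xzvf latest.tar.gz            # x = extract; recreates wordpress/
ls wordpress                      # shows index.php
```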
We will move these files into our document root momentarily. Before we do, though, add a dummy .htaccess
file so that this will be available for WordPress to use later.
Create the file by typing:
- touch /tmp/wordpress/.htaccess
Then copy over the sample configuration file to the filename that WordPress actually reads:
- cp /tmp/wordpress/wp-config-sample.php /tmp/wordpress/wp-config.php
Additionally, create the upgrade
directory so that WordPress won’t run into permissions issues when trying to do this on its own following an update to its software:
- mkdir /tmp/wordpress/wp-content/upgrade
Then, copy the entire contents of the directory into your document root. Notice that the following command includes a dot at the end of the source directory to indicate that everything within the directory should be copied, including hidden files (like the .htaccess
file you created):
- sudo cp -a /tmp/wordpress/. /var/www/wordpress
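The trailing dot is easy to miss, and omitting it in favor of a * glob would leave hidden files behind. A quick demonstration with throwaway directories:

```shell
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/.htaccess" "$src/index.php"

# The "/." suffix copies the directory's contents, including dotfiles:
cp -a "$src/." "$dst"
ls -A "$dst"   # shows both .htaccess and index.php
```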
With that, you’ve successfully installed WordPress onto your web server and performed some of the initial configuration steps. Next, we’ll discuss some further configuration changes that will give WordPress the privileges it needs to function as well as access to the MariaDB database and user account you created previously.
Before we can go through the web-based setup process for WordPress, we need to adjust some items in our WordPress directory.
Start by giving ownership of all the files to the www-data user and group. This is the user that the Apache web server runs as, and Apache will need to be able to read and write WordPress files in order to serve the website and perform automatic updates.
Update the ownership with chown
:
- sudo chown -R www-data:www-data /var/www/wordpress
Next we will run two find
commands to set the correct permissions on the WordPress directories and files:
- sudo find /var/www/wordpress/ -type d -exec chmod 750 {} \;
- sudo find /var/www/wordpress/ -type f -exec chmod 640 {} \;
These should be a reasonable permissions set to start with, although some plugins and procedures might require additional tweaks.
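To see what those modes mean in practice, here is the same pair of find commands applied to a scratch directory (a sketch only; the real commands above target /var/www/wordpress):

```shell
demo=$(mktemp -d)
mkdir -p "$demo/wp-content"
touch "$demo/wp-config.php" "$demo/wp-content/index.php"

# Directories: rwxr-x--- (750); files: rw-r----- (640)
find "$demo" -type d -exec chmod 750 {} \;
find "$demo" -type f -exec chmod 640 {} \;

stat -c '%a %n' "$demo/wp-config.php"   # 640 ...
stat -c '%a %n' "$demo/wp-content"      # 750 ...
```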
Following this, you will need to make some changes to the main WordPress configuration file.
When you open the file, your first objective will be to adjust some secret keys to provide some security for your installation. WordPress provides a secure generator for these values so that you do not have to try to come up with good values on your own. These are only used internally, so it won’t hurt usability to have complex, secure values here.
To grab secure values from the WordPress secret key generator, type:
- curl -s https://api.wordpress.org/secret-key/1.1/salt/
You will get back unique values that look something like this:
Warning! It is important that you request unique values each time. Do NOT copy the values shown below!
Outputdefine('AUTH_KEY', '1jl/vqfs<XhdXoAPz9 DO NOT COPY THESE VALUES c_j{iwqD^<+c9.k<J@4H');
define('SECURE_AUTH_KEY', 'E2N-h2]Dcvp+aS/p7X DO NOT COPY THESE VALUES {Ka(f;rv?Pxf})CgLi-3');
define('LOGGED_IN_KEY', 'W(50,{W^,OPB%PB<JF DO NOT COPY THESE VALUES 2;y&,2m%3]R6DUth[;88');
define('NONCE_KEY', 'll,4UC)7ua+8<!4VM+ DO NOT COPY THESE VALUES #`DXF+[$atzM7 o^-C7g');
define('AUTH_SALT', 'koMrurzOA+|L_lG}kf DO NOT COPY THESE VALUES 07VC*Lj*lD&?3w!BT#-');
define('SECURE_AUTH_SALT', 'p32*p,]z%LZ+pAu:VY DO NOT COPY THESE VALUES C-?y+K0DK_+F|0h{!_xY');
define('LOGGED_IN_SALT', 'i^/G2W7!-1H2OQ+t$3 DO NOT COPY THESE VALUES t6**bRVFSD[Hi])-qS`|');
define('NONCE_SALT', 'Q6]U:K?j4L%Z]}h^q7 DO NOT COPY THESE VALUES 1% ^qUswWgn+6&xqHN&%');
These are configuration lines that you will paste directly into your configuration file to set secure keys. Copy the output you received to your clipboard, and then open the WordPress configuration file located in your document root:
- sudo nano /var/www/wordpress/wp-config.php
Find the section that contains the dummy values for those settings. It will look something like this:
. . .
define('AUTH_KEY', 'put your unique phrase here');
define('SECURE_AUTH_KEY', 'put your unique phrase here');
define('LOGGED_IN_KEY', 'put your unique phrase here');
define('NONCE_KEY', 'put your unique phrase here');
define('AUTH_SALT', 'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT', 'put your unique phrase here');
define('NONCE_SALT', 'put your unique phrase here');
. . .
Delete these lines and paste in the values you copied from the command line:
. . .
define('AUTH_KEY', 'VALUES COPIED FROM THE COMMAND LINE');
define('SECURE_AUTH_KEY', 'VALUES COPIED FROM THE COMMAND LINE');
define('LOGGED_IN_KEY', 'VALUES COPIED FROM THE COMMAND LINE');
define('NONCE_KEY', 'VALUES COPIED FROM THE COMMAND LINE');
define('AUTH_SALT', 'VALUES COPIED FROM THE COMMAND LINE');
define('SECURE_AUTH_SALT', 'VALUES COPIED FROM THE COMMAND LINE');
define('LOGGED_IN_SALT', 'VALUES COPIED FROM THE COMMAND LINE');
define('NONCE_SALT', 'VALUES COPIED FROM THE COMMAND LINE');
. . .
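As an aside, if the WordPress key API is ever unreachable, openssl (installed by default on Debian) can generate comparably random strings locally. This is an alternative I'm suggesting, not what WordPress itself uses:

```shell
# 48 random bytes, base64-encoded: a 64-character high-entropy string.
salt=$(openssl rand -base64 48)
echo "$salt"
```

Run it once per define and paste each result between the quotes.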
Next, modify the database connection settings at the top of the file. You need to adjust the database name, the database user, and the associated password that you’ve configured within MariaDB.
The other change you must make is to set the method that WordPress should use to write to the filesystem. Since we’ve given the web server permission to write where it needs to, we can explicitly set the filesystem method to “direct”. Failure to set this with our current settings would result in WordPress prompting for FTP credentials when you perform certain actions.
This setting can be added below the database connection settings, or anywhere else in the file:
. . .
define('DB_NAME', 'wordpress');
/** MySQL database username */
define('DB_USER', 'wordpressuser');
/** MySQL database password */
define('DB_PASSWORD', 'password');
. . .
define('FS_METHOD', 'direct');
Save and close the file when you are finished. Finally, you can finish installing and configuring WordPress by accessing it through your web browser.
Now that the server configuration is complete, we can complete the installation through the web interface.
In your web browser, navigate to your server’s domain name or public IP address:
https://server_domain_or_IP
Select the language you would like to use:
Next, you will come to the main setup page. Select a name for your WordPress site and choose a username (it is recommended not to choose something like “admin” for security purposes). A strong password is generated automatically. Save this password or select an alternative strong password.
Enter your email address and select whether you want to discourage search engines from indexing your site:
When ready, click the Install WordPress button. You’ll be taken to a page that prompts you to log in:
Once you log in, you will be taken to the WordPress administration dashboard:
From the dashboard, you can begin making changes to your site’s theme and publishing content.
WordPress should be installed and ready to use! Some common next steps are to choose the permalinks setting for your posts (which can be found in Settings > Permalinks) or to select a new theme (in Appearance > Themes). If this is your first time using WordPress, explore the interface a bit to get acquainted with your new CMS.
TLS, or transport layer security, and its predecessor SSL, which stands for secure sockets layer, are web protocols used to wrap normal traffic in a protected, encrypted wrapper.
Using this technology, servers can send traffic safely between servers and clients without the possibility of messages being intercepted by outside parties. The certificate system also assists users in verifying the identity of the sites that they are connecting with.
In this guide, we will show you how to set up a self-signed SSL certificate for use with an Apache web server on Debian 9.
Note: A self-signed certificate will encrypt communication between your server and any clients. However, because it is not signed by any of the trusted certificate authorities included with web browsers, users cannot use the certificate to validate the identity of your server automatically.
A self-signed certificate may be appropriate if you do not have a domain name associated with your server and for instances where an encrypted web interface is not user-facing. If you do have a domain name, in many cases it is better to use a CA-signed certificate. You can find out how to set up a free trusted certificate with the Let’s Encrypt project here.
Before you begin, you should have a non-root user configured with sudo
privileges. You can learn how to set up such a user account by following our Initial Server Setup with Debian 9.
You will also need to have the Apache web server installed. If you would like to install an entire LAMP (Linux, Apache, MariaDB, PHP) stack on your server, you can follow our guide on setting up LAMP on Debian 9. If you just want the Apache web server, skip the steps pertaining to PHP and MariaDB.
When you have completed these prerequisites, continue below.
TLS/SSL works by using a combination of a public certificate and a private key. The SSL key is kept secret on the server. It is used to encrypt content sent to clients. The SSL certificate is publicly shared with anyone requesting the content. It can be used to decrypt the content signed by the associated SSL key.
We can create a self-signed key and certificate pair with OpenSSL in a single command:
- sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt
You will be asked a series of questions. Before we go over that, let’s take a look at what is happening in the command we are issuing:
openssl: The command line tool for creating and managing OpenSSL certificates, keys, and other files.
req: This subcommand specifies that we want to use X.509 certificate signing request (CSR) management.
-x509: This modifies the previous subcommand by telling the utility that we want to make a self-signed certificate instead of generating a certificate signing request.
-nodes: This tells OpenSSL to skip the option to secure our certificate with a passphrase, so that Apache can read the file without user intervention when the server starts up.
-days 365: This option sets the length of time that the certificate will be considered valid. We set it for one year here.
-newkey rsa:2048: This specifies that we want to generate a new certificate and a new key at the same time. The rsa:2048
portion tells it to make an RSA key that is 2048 bits long.
-keyout: This line tells OpenSSL where to place the generated private key file.
-out: This tells OpenSSL where to place the certificate.
As we stated above, these options will create both a key file and a certificate. We will be asked a few questions about our server in order to embed the information correctly in the certificate.
Fill out the prompts appropriately. The most important line is the one that requests the Common Name (e.g. server FQDN or YOUR name)
. You need to enter the domain name associated with your server or, more likely, your server’s public IP address.
The entirety of the prompts will look something like this:
OutputCountry Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.
Organizational Unit Name (eg, section) []:Ministry of Water Slides
Common Name (e.g. server FQDN or YOUR name) []:server_IP_address
Email Address []:admin@your_domain.com
Both of the files you created will be placed in the appropriate subdirectories under /etc/ssl
.
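You can confirm that a certificate and key belong together by comparing their RSA moduli. The sketch below generates a throwaway pair in a temporary directory so that it can run non-interactively; point the same two openssl commands at the files under /etc/ssl to check the real pair:

```shell
tmp=$(mktemp -d)

# Non-interactive throwaway pair (the -subj value here is arbitrary):
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -keyout "$tmp/test.key" -out "$tmp/test.crt" \
  -subj "/CN=example.test" 2>/dev/null

crt_mod=$(openssl x509 -noout -modulus -in "$tmp/test.crt" | openssl md5)
key_mod=$(openssl rsa  -noout -modulus -in "$tmp/test.key" | openssl md5)
echo "$crt_mod"
echo "$key_mod"   # the two digests match for a valid pair
```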
We have created our key and certificate files under the /etc/ssl
directory. Now we just need to modify our Apache configuration to take advantage of these.
We will make a few adjustments to our configuration:
When we are finished, we should have a secure SSL configuration.
First, we will create an Apache configuration snippet to define some SSL settings. This will set Apache up with a strong SSL cipher suite and enable some advanced features that will help keep our server secure. The parameters we will set can be used by any Virtual Hosts enabling SSL.
Create a new snippet in the /etc/apache2/conf-available
directory. We will name the file ssl-params.conf
to make its purpose clear:
- sudo nano /etc/apache2/conf-available/ssl-params.conf
To set up Apache SSL securely, we will be using the recommendations by Remy van Elst on the Cipherli.st site. This site is designed to provide easy-to-consume encryption settings for popular software.
The suggested settings on the site linked to above offer strong security. Sometimes, this comes at the cost of greater client compatibility. If you need to support older clients, there is an alternative list that can be accessed by clicking the link on the page labelled “Yes, give me a ciphersuite that works with legacy / old software.” That list can be substituted for the items copied below.
The choice of which config you use will depend largely on what you need to support. They both will provide great security.
For our purposes, we can copy the provided settings in their entirety. We will just make one small change to this and disable the Strict-Transport-Security
header (HSTS).
Preloading HSTS provides increased security, but can have far-reaching consequences if accidentally enabled or enabled incorrectly. In this guide, we will not enable the settings, but you can modify that if you are sure you understand the implications.
Before deciding, take a moment to read up on HTTP Strict Transport Security, or HSTS, and specifically about the “preload” functionality.
Paste the following configuration into the ssl-params.conf
file we opened:
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLProtocol All -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLHonorCipherOrder On
# Disable preloading HSTS for now. You can use the commented out header line that includes
# the "preload" directive if you understand the implications.
# Header always set Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
Header always set X-Frame-Options DENY
Header always set X-Content-Type-Options nosniff
# Requires Apache >= 2.4
SSLCompression off
SSLUseStapling on
SSLStaplingCache "shmcb:logs/stapling-cache(150000)"
# Requires Apache >= 2.4.11
SSLSessionTickets Off
Save and close the file when you are finished.
Next, let’s modify /etc/apache2/sites-available/default-ssl.conf
, the default Apache SSL Virtual Host file. If you are using a different server block file, substitute its name in the commands below.
Before we go any further, let’s back up the original SSL Virtual Host file:
- sudo cp /etc/apache2/sites-available/default-ssl.conf /etc/apache2/sites-available/default-ssl.conf.bak
Now, open the SSL Virtual Host file to make adjustments:
- sudo nano /etc/apache2/sites-available/default-ssl.conf
Inside, with most of the comments removed, the Virtual Host block should look something like this by default:
<IfModule mod_ssl.c>
<VirtualHost _default_:443>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
SSLEngine on
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
<Directory /usr/lib/cgi-bin>
SSLOptions +StdEnvVars
</Directory>
</VirtualHost>
</IfModule>
We will be making some minor adjustments to the file. We will set the normal things we’d want to adjust in a Virtual Host file (ServerAdmin email address, ServerName, etc.), and adjust the SSL directives to point to our certificate and key files. Again, if you’re using a different document root, be sure to update the DocumentRoot
directive.
After making these changes, your server block should look similar to this:
<IfModule mod_ssl.c>
<VirtualHost _default_:443>
ServerAdmin your_email@example.com
ServerName server_domain_or_IP
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
SSLEngine on
SSLCertificateFile /etc/ssl/certs/apache-selfsigned.crt
SSLCertificateKeyFile /etc/ssl/private/apache-selfsigned.key
<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
<Directory /usr/lib/cgi-bin>
SSLOptions +StdEnvVars
</Directory>
</VirtualHost>
</IfModule>
Save and close the file when you are finished.
As it stands now, the server will provide both unencrypted HTTP and encrypted HTTPS traffic. For better security, it is recommended in most cases to redirect HTTP to HTTPS automatically. If you do not want or need this functionality, you can safely skip this section.
To adjust the unencrypted Virtual Host file to redirect all traffic to be SSL encrypted, open the /etc/apache2/sites-available/000-default.conf
file:
- sudo nano /etc/apache2/sites-available/000-default.conf
Inside, within the VirtualHost
configuration blocks, add a Redirect
directive, pointing all traffic to the SSL version of the site:
<VirtualHost *:80>
. . .
Redirect "/" "https://your_domain_or_IP/"
. . .
</VirtualHost>
Save and close the file when you are finished.
That’s all of the configuration changes you need to make to Apache. Next, we will discuss how to update firewall rules with ufw
to allow encrypted HTTPS traffic to your server.
If you have the ufw
firewall enabled, as recommended by the prerequisite guides, you might need to adjust the settings to allow for SSL traffic. Fortunately, when installed on Debian 9, ufw
comes loaded with app profiles which you can use to tweak your firewall settings.
We can see the available profiles by typing:
- sudo ufw app list
You should see a list like this, with the following four profiles near the bottom of the output:
OutputAvailable applications:
. . .
WWW
WWW Cache
WWW Full
WWW Secure
. . .
You can see the current setting by typing:
- sudo ufw status
If you allowed only regular HTTP traffic earlier, your output might look like this:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
WWW ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
WWW (v6) ALLOW Anywhere (v6)
To additionally let in HTTPS traffic, allow the “WWW Full” profile and then delete the redundant “WWW” profile allowance:
- sudo ufw allow 'WWW Full'
- sudo ufw delete allow 'WWW'
Your status should look like this now:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
WWW Full ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
WWW Full (v6) ALLOW Anywhere (v6)
With your firewall configured to allow HTTPS traffic, you can move on to the next step where we’ll go over how to enable a few modules and configuration files to allow SSL to function properly.
Now that we’ve made our changes and adjusted our firewall, we can enable the SSL and headers modules in Apache, enable our SSL-ready Virtual Host, and then restart Apache to put these changes into effect.
Enable mod_ssl
, the Apache SSL module, and mod_headers
, which is needed by some of the settings in our SSL snippet, with the a2enmod
command:
- sudo a2enmod ssl
- sudo a2enmod headers
Next, enable your SSL Virtual Host with the a2ensite
command:
- sudo a2ensite default-ssl
You will also need to enable your ssl-params.conf
file, to read in the values you’ve set:
- sudo a2enconf ssl-params
At this point, the site and the necessary modules are enabled. We should check to make sure that there are no syntax errors in our files. Do this by typing:
- sudo apache2ctl configtest
If everything is successful, you will get a result that looks like this:
OutputSyntax OK
As long as your output has Syntax OK
in it, then your configuration file has no syntax errors and you can safely restart Apache to implement the changes:
- sudo systemctl restart apache2
With that, your self-signed SSL certificate is all set. You can now test that your server is correctly encrypting its traffic.
You’re now ready to test your SSL server.
Open your web browser and type https://
followed by your server’s domain name or IP into the address bar:
https://server_domain_or_IP
Because the certificate you created isn’t signed by one of your browser’s trusted certificate authorities, you will likely see a scary looking warning like the one below:
This is expected and normal. We are only interested in the encryption aspect of our certificate, not the third party validation of our host’s authenticity. Click ADVANCED and then the link provided to proceed to your host anyway:
You should be taken to your site. If you look in the browser address bar, you will see a lock with an “x” over it or another similar “not secure” notice. In this case, this just means that the certificate cannot be validated. It is still encrypting your connection.
If you configured Apache to redirect HTTP to HTTPS, you can also check whether the redirect functions correctly:
http://server_domain_or_IP
If this results in the same icon, this means that your redirect worked correctly. However, the redirect you created earlier is only a temporary redirect. If you’d like to make the redirection to HTTPS permanent, continue on to the final step.
If your redirect worked correctly and you are sure you want to allow only encrypted traffic, you should modify the unencrypted Apache Virtual Host again to make the redirect permanent.
Open your server block configuration file again:
- sudo nano /etc/apache2/sites-available/000-default.conf
Find the Redirect
line we added earlier. Add permanent
to that line, which changes the redirect from a 302 temporary redirect to a 301 permanent redirect:
<VirtualHost *:80>
. . .
Redirect permanent "/" "https://your_domain_or_IP/"
. . .
</VirtualHost>
Save and close the file.
Check your configuration for syntax errors:
- sudo apache2ctl configtest
If this command doesn’t report any syntax errors, restart Apache:
- sudo systemctl restart apache2
This will make the redirect permanent, and your site will only serve traffic over HTTPS.
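You can also verify the permanent redirect from the command line with curl. The -I flag fetches only the response headers; substitute your own domain or IP for the placeholder:

```shell
# The response status should now read: HTTP/1.1 301 Moved Permanently
curl -I http://server_domain_or_IP
```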
You have configured your Apache server to use strong encryption for client connections. This will allow you to serve requests securely, and will prevent outside parties from reading your traffic.
GitLab CE, or Community Edition, is an open-source application primarily used to host Git repositories, with additional development-related features like issue tracking. It is designed to be hosted using your own infrastructure, and provides flexibility in deploying as an internal repository store for your development team, a public way to interface with users, or a means for contributors to host their own projects.
The GitLab project makes it relatively straightforward to set up a GitLab instance on your own hardware with an easy installation mechanism. In this guide, we will cover how to install and configure GitLab on a Debian 9 server.
For this tutorial, you will need:
- A Debian 9 server with a non-root sudo user and basic firewall. To set this up, follow our Debian 9 initial server setup guide.
The published GitLab hardware requirements recommend using a server with:
Although you may be able to get by with substituting some swap space for RAM, it is not recommended. For this guide we will assume that you have the above resources as a minimum.
Before we can install GitLab itself, it is important to install some of the software that it leverages during installation and on an ongoing basis. Fortunately, all of the required software can be easily installed from Debian’s default package repositories.
Since this is our first time using apt
during this session, we can refresh the local package index and then install the dependencies by typing:
- sudo apt update
- sudo apt install ca-certificates curl openssh-server postfix
You may have some of this software installed already. For the postfix
installation, select Internet Site when prompted. On the next screen, enter your server’s domain name to configure how the system will send mail.
Now that the dependencies are in place, we can install GitLab itself. This is a straightforward process that leverages an installation script to configure your system with the GitLab repositories.
Move into the /tmp
directory and then download the installation script:
- cd /tmp
- curl -LO https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh
Feel free to examine the downloaded script to ensure that you are comfortable with the actions it will take:
- less /tmp/script.deb.sh
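If you want an extra safeguard when automating this step, you can pin the script to a checksum you recorded at review time and refuse to run it if the download changes. This is a general shell pattern rather than part of the GitLab instructions; the sketch below uses a stand-in file so that it is self-contained:

```shell
# Illustration: verify a reviewed script's checksum before running it.
# A stand-in file is used here; substitute /tmp/script.deb.sh in practice.
tmp=$(mktemp)
echo 'echo hello from installer' > "$tmp"

# Record the checksum at review time...
expected=$(sha256sum "$tmp" | awk '{print $1}')

# ...and verify it again at run time before executing.
actual=$(sha256sum "$tmp" | awk '{print $1}')
if [ "$expected" = "$actual" ]; then
  out=$(sh "$tmp")
fi
echo "$out"
rm -f "$tmp"
```

In practice you would compute the expected value once after reviewing the script, store it, and compare it against any fresh download before execution.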
Once you are satisfied with the safety of the script, run the installer:
- sudo bash /tmp/script.deb.sh
The script will set up your server to use the GitLab maintained repositories. This lets you manage GitLab with the same package management tools you use for your other system packages. Once this is complete, you can install the actual GitLab application with apt
:
- sudo apt install gitlab-ce
This will install the necessary components on your system.
Before you configure GitLab, you will need to ensure that your firewall rules are permissive enough to allow web traffic. If you followed the guide linked in the prerequisites, you will have a ufw
firewall enabled.
View the current status of your active firewall by typing:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
As you can see, the current rules allow SSH traffic through, but access to other services is restricted. Since GitLab is a web application, we should allow HTTP access. Because we will be taking advantage of GitLab’s ability to request and enable a free TLS/SSL certificate from Let’s Encrypt, let’s also allow HTTPS access.
We can allow access to both HTTP and HTTPS by allowing the “WWW Full” app profile through our firewall. If you didn’t already have OpenSSH traffic enabled, you should allow that traffic now too:
- sudo ufw allow "WWW Full"
- sudo ufw allow OpenSSH
Check the ufw status
again, this time appending the verbose
flag; you should see access configured to at least these two services:
- sudo ufw status verbose
OutputStatus: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN Anywhere
80,443/tcp (WWW Full) ALLOW IN Anywhere
22/tcp (OpenSSH (v6)) ALLOW IN Anywhere (v6)
80,443/tcp (WWW Full (v6)) ALLOW IN Anywhere (v6)
The above output indicates that the GitLab web interface will be accessible once we configure the application.
Before you can use the application, you need to update the configuration file and run a reconfiguration command. First, open GitLab’s configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Near the top is the external_url
configuration line. Update it to match your domain. Change http
to https
so that GitLab will automatically redirect users to the site protected by the Let’s Encrypt certificate:
##! For more details on configuring external_url see:
##! https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the-external-url-for-gitlab
external_url 'https://example.com'
Next, look for the letsencrypt['contact_emails']
setting. This setting defines a list of email addresses that the Let’s Encrypt project can use to contact you if there are problems with your domain. It’s a good idea to uncomment and fill this out so that you will know of any issues:
letsencrypt['contact_emails'] = ['sammy@example.com']
Save and close the file. Run the following command to reconfigure GitLab:
- sudo gitlab-ctl reconfigure
This will initialize GitLab using the information it can find about your server. This is a completely automated process, so you will not have to answer any prompts. The process will also configure a Let’s Encrypt certificate for your domain.
With GitLab running and access permitted, we can perform some initial configuration of the application through the web interface.
Visit the domain name of your GitLab server in your web browser:
https://example.com
On your first time visiting, you should see an initial prompt to set a password for the administrative account:
In the initial password prompt, supply and confirm a secure password for the administrative account. Click on the Change your password button when you are finished.
You will be redirected to the conventional GitLab login page:
Here, you can log in with the username root and the password you just set. Enter these credentials into the fields for existing users and click the Sign in button. You will be signed into the application and taken to a landing page that prompts you to begin adding projects:
You can now make some simple changes to get GitLab set up the way you’d like.
One of the first things you should do after a fresh installation is get your profile into better shape. GitLab selects some reasonable defaults, but these are not usually appropriate once you start using the software.
To make the necessary modifications, click on the user icon in the upper-right hand corner of the interface. In the drop down menu that appears, select Settings:
You will be taken to the Profile section of your settings:
Adjust the Name and Email address from “Administrator” and “admin@example.com” to something more accurate. The name you select will be displayed to other users, while the email will be used for default avatar detection, notifications, Git actions through the interface, etc.
Click on the Update Profile settings button at the bottom when you are done:
A confirmation email will be sent to the address you provided. Follow the instructions in the email to confirm your account so that you can begin using it with GitLab.
Next, click on the Account item in the left-hand menu bar:
Here, you can find your private API token or configure two-factor authentication. However, the functionality we are interested in at the moment is the Change username section.
By default, the first administrative account is given the name root. Since this is a known account name, it is more secure to change this to a different name. You will still have administrative privileges; the only thing that will change is the name. Replace root with your preferred username:
Click on the Update username button to make the change:
Next time you log in to GitLab, remember to use your new username.
In most cases, you will want to use SSH keys with Git to interact with your GitLab projects. To do this, you need to add your SSH public key to your GitLab account.
If you already have an SSH key pair created on your local computer, you can usually view the public key by typing:
- cat ~/.ssh/id_rsa.pub
You should see a large chunk of text, like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy this text and head back to the Settings page in GitLab’s web interface.
If, instead, you get a message that looks like this, you do not yet have an SSH key pair configured on your machine:
Outputcat: /home/sammy/.ssh/id_rsa.pub: No such file or directory
If this is the case, you can create an SSH key pair by typing:
- ssh-keygen
Accept the defaults and optionally provide a passphrase to secure the key locally:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/home/sammy/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sammy/.ssh/id_rsa.
Your public key has been saved in /home/sammy/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I8v5/M5xOicZRZq/XRcSBNxTQV2BZszjlWaIHi5chc0 sammy@gitlab.docsthat.work
The key's randomart image is:
+---[RSA 2048]----+
| ..%o==B|
| *.E =.|
| . ++= B |
| ooo.o . |
| . S .o . .|
| . + .. . o|
| + .o.o ..|
| o .++o . |
| oo=+ |
+----[SHA256]-----+
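If you ever need to script key creation, ssh-keygen can also run non-interactively. Here is a sketch that writes a throwaway key pair into a temporary directory purely for illustration; real keys belong in ~/.ssh:

```shell
# Illustration: create a throwaway RSA key pair non-interactively in a temp dir.
dir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$dir/id_rsa" -q   # -N "" means no passphrase
pub=$(cat "$dir/id_rsa.pub")
echo "$pub" | cut -d' ' -f1    # the first field is the key type
rm -rf "$dir"
```

The public key file produced this way has the same `ssh-rsa AAAA...` shape shown above and can be pasted into GitLab in exactly the same way.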
Once you have this, you can display your public key as above by typing:
- cat ~/.ssh/id_rsa.pub
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy the block of text that’s displayed and head back to your Settings in GitLab’s web interface.
Click on the SSH Keys item in the left-hand menu:
In the provided space paste the public key you copied from your local machine. Give it a descriptive title, and click the Add key button:
You should now be able to manage your GitLab projects and repositories from your local machine without having to provide your GitLab account credentials.
You may have noticed that it is possible for anyone to sign up for an account when you visit your GitLab instance’s landing page. This may be what you want if you are looking to host public projects. However, more restrictive settings are often desirable.
To begin, make your way to the administrative area by clicking on the wrench icon in the main menu bar at the top of the page:
On the page that follows, you can see an overview of your GitLab instance as a whole. To adjust the settings, click on the Settings item at the bottom of the left-hand menu:
You will be taken to the global settings for your GitLab instance. Here, you can adjust a number of settings that affect whether new users can sign up and their level of access.
If you wish to disable sign-ups completely (you can still manually create accounts for new users), scroll down to the Sign-up Restrictions section.
Deselect the Sign-up enabled check box:
Scroll down to the bottom and click on the Save changes button:
The sign-up section should now be removed from the GitLab landing page.
If you are using GitLab as part of an organization that provides email addresses associated with a domain, you can restrict sign-ups by domain instead of completely disabling them.
In the Sign-up Restrictions section, select the Send confirmation email on sign-up box, which will allow users to log in only after they’ve confirmed their email.
Next, add your domain or domains to the Whitelisted domains for sign-ups box, one domain per line. You can use the asterisk “*” to specify wildcard domains:
Scroll down to the bottom and click on the Save changes button:
Sign-ups will now be limited to the domains you whitelisted.
By default, new users can create up to 10 projects. If you wish to allow outside users to sign up for visibility and participation but want to restrict their ability to create new projects, you can do so in the Account and Limit Settings section.
Inside, you can change the Default projects limit to 0 to prevent new users from creating any projects:
New users can still be added to projects manually and will have access to internal or public projects created by other users.
Scroll down to the bottom and click on the Save changes button:
New users will now be able to create accounts, but unable to create projects.
By default, GitLab has a scheduled task set up to renew Let’s Encrypt certificates after midnight every fourth day, with the exact minute based on your external_url
. You can modify these settings in the /etc/gitlab/gitlab.rb
file. For example, if you wanted to renew every 7th day at 12:30, you could configure this as follows:
letsencrypt['auto_renew_hour'] = "12"
letsencrypt['auto_renew_minute'] = "30"
letsencrypt['auto_renew_day_of_month'] = "*/7"
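In standard cron field order (minute, hour, day of month, month, day of week), the settings above correspond to an entry like the one below. This is illustrative only; GitLab writes and manages the real schedule itself, and the renewal command shown is the one the Omnibus package provides (gitlab-ctl renew-le-certs):

```
# minute hour day-of-month month day-of-week command
30 12 */7 * * /opt/gitlab/bin/gitlab-ctl renew-le-certs
```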
You can also disable auto-renewal by adding an additional setting to /etc/gitlab/gitlab.rb
:
letsencrypt['auto_renew'] = false
With auto-renewals in place, you will not need to worry about service interruptions.
You should now have a working GitLab instance hosted on your own server. You can begin to import or create new projects and configure the appropriate level of access for your team. GitLab is regularly adding features and making updates to their platform, so be sure to check out the project’s home page to stay up-to-date on any improvements or important notices.
FTP, short for File Transfer Protocol, is a network protocol that was once widely used for moving files between a client and server. It has since been replaced by faster, more secure, and more convenient ways of delivering files. Many casual internet users expect to download directly from their web browser with https, and command-line users are more likely to use secure protocols such as scp or SFTP.
FTP is still used to support legacy applications and workflows with very specific needs. If you have a choice of what protocol to use, consider exploring the more modern options. When you do need FTP, however, vsftpd is an excellent choice. Optimized for security, performance, and stability, vsftpd offers strong protection against many security problems found in other FTP servers and is the default for many Linux distributions.
In this tutorial, you’ll configure vsftpd to allow a user to upload files to his or her home directory using FTP with login credentials secured by SSL/TLS.
To follow along with this tutorial you will need:
Let’s start by updating our package list and installing the vsftpd
daemon:
- sudo apt update
- sudo apt install vsftpd
When the installation is complete, copy the configuration file so that you have a backup of the original before making changes:
- sudo cp /etc/vsftpd.conf /etc/vsftpd.conf.orig
With a backup of the configuration in place, we’re ready to configure the firewall.
Let’s check the firewall status to see if it’s enabled. If it is, we’ll ensure that FTP traffic is permitted so firewall rules don’t block our tests. This guide assumes that you have UFW installed, following Step 4 in the initial server setup guide.
Check the firewall status:
- sudo ufw status
In this case, only SSH is allowed through:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
You may have other rules in place or no firewall rules at all. Since only SSH traffic is permitted in this case, we’ll need to add rules for FTP traffic.
Let’s open ports 20
and 21
for FTP, port 990
for when we enable TLS, and ports 40000-50000
for the range of passive ports we plan to set in the configuration file:
- sudo ufw allow 20/tcp
- sudo ufw allow 21/tcp
- sudo ufw allow 990/tcp
- sudo ufw allow 40000:50000/tcp
Check the firewall status:
- sudo ufw status
Your firewall rules should now look like this:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
990/tcp ALLOW Anywhere
20/tcp ALLOW Anywhere
21/tcp ALLOW Anywhere
40000:50000/tcp ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
20/tcp (v6) ALLOW Anywhere (v6)
21/tcp (v6) ALLOW Anywhere (v6)
990/tcp (v6) ALLOW Anywhere (v6)
40000:50000/tcp (v6) ALLOW Anywhere (v6)
With vsftpd
installed and the necessary ports open, let’s move on to creating a dedicated FTP user.
We will create a dedicated FTP user, but you may already have a user in need of FTP access. We’ll take care to preserve an existing user’s access to their data in the instructions that follow. Even so, we recommend that you start with a new user until you’ve configured and tested your setup.
First, add a test user:
- sudo adduser sammy
Assign a password when prompted. Feel free to press ENTER
through the other prompts.
FTP is generally more secure when users are restricted to a specific directory. vsftpd
accomplishes this with chroot
jails. When chroot
is enabled for local users, they are restricted to their home directory by default. However, because of the way vsftpd
secures the directory, it must not be writable by the user. This is fine for a new user who should only connect via FTP, but an existing user may need to write to their home folder if they also have shell access.
In this example, rather than removing write privileges from the home directory, let’s create an ftp
directory to serve as the chroot
and a writable files
directory to hold the actual files.
Create the ftp
folder:
- sudo mkdir /home/sammy/ftp
Set its ownership:
- sudo chown nobody:nogroup /home/sammy/ftp
Remove write permissions:
- sudo chmod a-w /home/sammy/ftp
Verify the permissions:
- sudo ls -la /home/sammy/ftp
Outputtotal 8
4 dr-xr-xr-x 2 nobody nogroup 4096 Aug 24 21:29 .
4 drwxr-xr-x 3 sammy sammy 4096 Aug 24 21:29 ..
Next, let’s create the directory for file uploads and assign ownership to the user:
- sudo mkdir /home/sammy/ftp/files
- sudo chown sammy:sammy /home/sammy/ftp/files
A permissions check on the ftp
directory should return the following:
- sudo ls -la /home/sammy/ftp
Outputtotal 12
dr-xr-xr-x 3 nobody nogroup 4096 Aug 26 14:01 .
drwxr-xr-x 3 sammy sammy 4096 Aug 26 13:59 ..
drwxr-xr-x 2 sammy sammy 4096 Aug 26 14:01 files
Finally, let’s add a test.txt
file to use when we test:
- echo "vsftpd test file" | sudo tee /home/sammy/ftp/files/test.txt
Now that we’ve secured the ftp
directory and allowed the user access to the files
directory, let’s modify our configuration.
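The reason this layout works can be rehearsed locally: vsftpd requires that the chroot root be non-writable while the upload directory stays writable. Here is a self-contained sketch that rebuilds the layout in a temporary directory for illustration only; it does not touch /home/sammy:

```shell
# Illustration: reproduce the chroot layout in a temp directory and confirm
# the modes vsftpd expects: a read-only chroot root, a writable files/ subdir.
umask 022                      # assume the default umask for predictable modes
base=$(mktemp -d)
mkdir -p "$base/ftp/files"
chmod a-w "$base/ftp"          # the chroot root itself must not be writable
ftp_mode=$(stat -c '%a' "$base/ftp")
files_mode=$(stat -c '%a' "$base/ftp/files")
echo "ftp: $ftp_mode, files: $files_mode"
chmod u+w "$base/ftp" && rm -rf "$base"   # restore write so cleanup succeeds
```

The 555 mode on the chroot root is what prevents vsftpd's "writable chroot" refusal, while 755 on files/ leaves the owner free to upload.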
We’re planning to allow a single user with a local shell account to connect with FTP. The two key settings for this are already set in vsftpd.conf
. Start by opening the config file to verify that the settings in your configuration match those below:
- sudo nano /etc/vsftpd.conf
. . .
# Allow anonymous FTP? (Disabled by default).
anonymous_enable=NO
#
# Uncomment this to allow local users to log in.
local_enable=YES
. . .
Next, let’s enable the user to upload files by uncommenting the write_enable
setting:
. . .
write_enable=YES
. . .
We’ll also uncomment the chroot
to prevent the FTP-connected user from accessing any files or commands outside the directory tree:
. . .
chroot_local_user=YES
. . .
Let’s also add a user_sub_token
to insert the username in our local_root directory
path so our configuration will work for this user and any additional future users. Add these settings anywhere in the file:
. . .
user_sub_token=$USER
local_root=/home/$USER/ftp
Let’s also limit the range of ports that can be used for passive FTP to make sure enough connections are available:
. . .
pasv_min_port=40000
pasv_max_port=50000
Note: In Step 2, we opened the ports that we set here for the passive port range. If you change the values, be sure to update your firewall settings.
To allow FTP access on a case-by-case basis, let’s set the configuration so that users only have access when they are explicitly added to a list, rather than by default:
. . .
userlist_enable=YES
userlist_file=/etc/vsftpd.userlist
userlist_deny=NO
userlist_deny
toggles the logic: When it is set to YES
, users on the list are denied FTP access. When it is set to NO
, only users on the list are allowed access.
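The toggle can be sketched as a tiny decision function. This is an illustration of the logic only, not vsftpd code:

```shell
# Illustration of the userlist_enable/userlist_deny logic.
# Usage: allowed USER DENY_SETTING LIST  -> prints "allow" or "deny".
allowed() {
  user=$1; deny=$2; list=$3
  if printf '%s\n' $list | grep -qx "$user"; then on_list=yes; else on_list=no; fi
  if [ "$deny" = "NO" ]; then
    [ "$on_list" = yes ] && echo allow || echo deny
  else
    [ "$on_list" = yes ] && echo deny || echo allow
  fi
}
allowed sammy NO "sammy"   # on the list, userlist_deny=NO  -> allow
allowed other NO "sammy"   # off the list, userlist_deny=NO -> deny
allowed sammy YES "sammy"  # on the list, userlist_deny=YES -> deny
```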
When you’re done making the changes, save the file and exit the editor.
Finally, let’s add our user to /etc/vsftpd.userlist
. Use the -a
flag to append to the file:
- echo "sammy" | sudo tee -a /etc/vsftpd.userlist
Check that it was added as you expected:
- cat /etc/vsftpd.userlist
Outputsammy
Restart the daemon to load the configuration changes:
- sudo systemctl restart vsftpd
With the configuration in place, let’s move on to testing FTP access.
We’ve configured the server to allow only the user sammy to connect via FTP. Let’s make sure that this works as expected.
Anonymous users should fail to connect: We’ve disabled anonymous access. Let’s test that by trying to connect anonymously. If our configuration is set up properly, anonymous users should be denied permission. Open another terminal and run the following command. Be sure to replace 203.0.113.0
with your server’s public IP address:
- ftp -p 203.0.113.0
OutputConnected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): anonymous
530 Permission denied.
ftp: Login failed.
ftp>
Close the connection:
- bye
Users other than sammy should fail to connect: Next, let’s try connecting as our sudo user. They should also be denied access, and it should happen before they’re allowed to enter their password:
- ftp -p 203.0.113.0
OutputConnected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): your_sudo_user
530 Permission denied.
ftp: Login failed.
ftp>
Close the connection:
- bye
The user sammy should be able to connect, read, and write files: Let’s make sure that our designated user can connect:
- ftp -p 203.0.113.0
OutputConnected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): sammy
331 Please specify the password.
Password: your_user's_password
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>
Let’s change into the files
directory and use the get
command to transfer the test file we created earlier to our local machine:
- cd files
- get test.txt
Output229 Entering Extended Passive Mode (|||47398|)
150 Opening BINARY mode data connection for test.txt (17 bytes).
100% |**********************************| 17 146.91 KiB/s 00:00 ETA
226 Transfer complete.
17 bytes received in 00:00 (0.17 KiB/s)
ftp>
Next, let’s upload the file with a new name to test write permissions:
- put test.txt upload.txt
Output229 Entering Extended Passive Mode (|||46598|)
150 Ok to send data.
100% |**********************************| 17 8.93 KiB/s 00:00 ETA
226 Transfer complete.
17 bytes sent in 00:00 (0.08 KiB/s)
Close the connection:
- bye
Now that we’ve tested our configuration, let’s take steps to further secure our server.
Since FTP does not encrypt any data in transit, including user credentials, we’ll enable TLS/SSL to provide that encryption. The first step is to create the SSL certificates for use with vsftpd
.
Let’s use openssl
to create a new certificate and use the -days
flag to make it valid for one year. In the same command, we’ll add a private 2048-bit RSA key. By setting both the -keyout
and -out
flags to the same value, the private key and the certificate will be located in the same file:
- sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/vsftpd.pem -out /etc/ssl/private/vsftpd.pem
You’ll be prompted to provide address information for your certificate. Substitute your own information for the highlighted values below:
OutputGenerating a 2048 bit RSA private key
............................................................................+++
...........+++
writing new private key to '/etc/ssl/private/vsftpd.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:NY
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:DigitalOcean
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []: your_server_ip
Email Address []:
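If you are scripting this step, the prompts can be answered up front with the -subj flag. The sketch below writes to a temporary file and uses placeholder subject values; for real use, keep the /etc/ssl/private/vsftpd.pem path from the command above and substitute your own details:

```shell
# Illustration: generate the combined key+certificate non-interactively,
# then read the subject back to confirm what was written.
pem=$(mktemp)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$pem" -out "$pem" \
  -subj "/C=US/ST=NY/L=New York City/O=Example/CN=203.0.113.0" 2>/dev/null
subject=$(openssl x509 -in "$pem" -noout -subject)
echo "$subject"
rm -f "$pem"
```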
For more detailed information about the certificate flags, see OpenSSL Essentials: Working with SSL Certificates, Private Keys and CSRs.
Once you’ve created the certificates, open the vsftpd
configuration file again:
- sudo nano /etc/vsftpd.conf
Toward the bottom of the file, you will see two lines that begin with rsa_
. Comment them out so they look like this:
. . .
# rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
# rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
. . .
Below them, add the following lines that point to the certificate and private key we just created:
. . .
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
. . .
After that, we will force the use of SSL, which will prevent clients that can’t deal with TLS from connecting. This is necessary to ensure that all traffic is encrypted, but it may force your FTP user to change clients. Change ssl_enable
to YES
:
. . .
ssl_enable=YES
. . .
After that, add the following lines to explicitly deny anonymous connections over SSL and to require SSL for both data transfer and logins:
. . .
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
. . .
After this, configure the server to use TLS, the preferred successor to SSL, by adding the following lines:
. . .
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
. . .
Finally, we will add two more options. First, we will not require SSL reuse because it can break many FTP clients. We will require “high” encryption cipher suites, which currently means key lengths equal to or greater than 128 bits:
. . .
require_ssl_reuse=NO
ssl_ciphers=HIGH
. . .
The finished file section should look like this:
# This option specifies the location of the RSA certificate to use for SSL
# encrypted connections.
#rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
rsa_cert_file=/etc/ssl/private/vsftpd.pem
rsa_private_key_file=/etc/ssl/private/vsftpd.pem
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
require_ssl_reuse=NO
ssl_ciphers=HIGH
When you’re done, save and close the file.
Restart the server for the changes to take effect:
- sudo systemctl restart vsftpd
At this point, we will no longer be able to connect with an insecure command-line client. If we tried, we’d see something like:
Outputftp -p 203.0.113.0
Connected to 203.0.113.0.
220 (vsFTPd 3.0.3)
Name (203.0.113.0:default): sammy
530 Non-anonymous sessions must use encryption.
ftp: Login failed.
ftp>
Next, let’s verify that we can connect using a client that supports TLS.
Most modern FTP clients can be configured to use TLS encryption. We will demonstrate how to connect with FileZilla because of its cross-platform support. Consult the documentation for other clients.
When you first open FileZilla, find the Site Manager icon just above the word Host, the left-most icon on the top row. Click it:
A new window will open. Click the New Site button in the bottom right corner:
Under My Sites a new icon with the words New Site will appear. You can name it now or return later and use the Rename button.
Fill out the Host field with the name or IP address. Under the Encryption drop down menu, select Require explicit FTP over TLS.
For Logon Type, select Ask for password. Fill in your FTP user in the User field:
Click Connect at the bottom of the interface. You will be asked for the user’s password:
Click OK to connect. You should now be connected with your server with TLS/SSL encryption.
Upon success, you will be presented with a server certificate that looks like this:
When you’ve accepted the certificate, double-click the files
folder and drag upload.txt
to the left to confirm that you’re able to download files:
When you’ve done that, right-click on the local copy, rename it to upload-tls.txt
and drag it back to the server to confirm that you can upload files:
You’ve now confirmed that you can securely and successfully transfer files with SSL/TLS enabled.
If you’re unable to use TLS because of client requirements, you can gain some security by disabling the FTP user’s ability to log in any other way. One relatively straightforward way to prevent it is by creating a custom shell. This will not provide any encryption, but it will limit the access of a compromised account to files accessible by FTP.
First, open a file called ftponly
in the bin
directory:
- sudo nano /bin/ftponly
Add a message telling the user why they are unable to log in:
#!/bin/sh
echo "This account is limited to FTP access only."
Save the file and exit your editor.
Change the permissions to make the file executable:
- sudo chmod a+x /bin/ftponly
Open the list of valid shells:
- sudo nano /etc/shells
At the bottom add:
. . .
/bin/ftponly
Update the user’s shell with the following command:
- sudo usermod sammy -s /bin/ftponly
Now try logging into your server as sammy:
- ssh sammy@your_server_ip
You should see something like:
OutputThis account is limited to FTP access only.
Connection to 203.0.113.0 closed.
This confirms that the user can no longer ssh
to the server and is limited to FTP access only.
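The behavior of the replacement shell can be rehearsed locally before changing a real account. Here is a self-contained sketch that uses a temporary directory instead of /bin:

```shell
# Illustration: build the ftponly script in a temp location and run it the
# way a login shell would, confirming it prints the notice and exits.
tmp=$(mktemp -d)
cat > "$tmp/ftponly" <<'EOF'
#!/bin/sh
echo "This account is limited to FTP access only."
EOF
chmod a+x "$tmp/ftponly"
msg=$("$tmp/ftponly")
echo "$msg"
rm -rf "$tmp"
```

Because the script exits immediately after printing, any interactive session handed to it ends at once, which is exactly what the SSH test above demonstrates.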
In this tutorial we covered setting up FTP for users with a local account. If you need to use an external authentication source, you might want to look into vsftpd
’s support of virtual users. This offers a rich set of options through the use of PAM, the Pluggable Authentication Modules, and is a good choice if you manage users in another system such as LDAP or Kerberos.
Java and the JVM (Java’s virtual machine) are required for many kinds of software, including Tomcat, Jetty, Glassfish, Cassandra and Jenkins.
In this guide, you will install various versions of the Java Runtime Environment (JRE) and the Java Developer Kit (JDK) using apt
. You’ll install OpenJDK as well as official packages from Oracle. You’ll then select the version you wish to use for your projects. When you’re finished, you’ll be able to use the JDK to develop software or use the Java Runtime to run software.
To follow this tutorial, you will need:
One Debian 9 server set up by following the Debian 9 initial server setup guide, including a non-root user with sudo
access and a firewall.
The easiest option for installing Java is to use the version packaged with Debian. By default, Debian 9 includes OpenJDK, which is an open-source variant of the JRE and JDK.
This package will install OpenJDK version 1.8, which is compatible with Java 8. Java 8 is the current Long Term Support version and is still widely supported, though public maintenance ends in January 2019.
To install this version, first update the package index:
- sudo apt update
Next, check if Java is already installed:
- java -version
If Java is not currently installed, you’ll see the following output:
Output-bash: java: command not found
Execute the following command to install OpenJDK:
- sudo apt install default-jre
This command will install the Java Runtime Environment (JRE). This will allow you to run almost all Java software.
Verify the installation with:
- java -version
You’ll see the following output:
Outputopenjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-1~deb9u1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
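Note that Java 8 reports itself with the legacy "1.8.0_x" scheme, while newer releases report "9", "10", "11", and so on. If a script needs the major version number, a small hypothetical helper can normalize both forms (the version strings below are examples, not live output):

```shell
# Illustration: extract the major Java version from a `java -version`-style
# string, handling both the legacy "1.8.0_x" and the newer "11.0.x" schemes.
java_major() {
  echo "$1" | sed -E 's/.*"(1\.)?([0-9]+).*/\2/'
}
java_major 'openjdk version "1.8.0_181"'   # prints 8
java_major 'openjdk version "11.0.2"'      # prints 11
```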
You may need the Java Development Kit (JDK) in addition to the JRE in order to compile and run some specific Java-based software. To install the JDK, execute the following command, which will also install the JRE:
- sudo apt install default-jdk
Verify that the JDK is installed by checking the version of javac
, the Java compiler:
- javac -version
You’ll see the following output:
Outputjavac 1.8.0_181
Next, let’s look at how to install Oracle’s official JDK and JRE.
If you want to install the Oracle JDK, which is the official version distributed by Oracle, you’ll need to add a new package repository for the version you’d like to use.
First, install the software-properties-common
package, which adds the add-apt-repository
command which you’ll use to add additional repositories to your sources list.
Install software-properties-common
with:
- sudo apt install software-properties-common
With this installed, you can install Oracle’s Java.
To install Java 8, which is the current long-term support version, first add its package repository:
- sudo add-apt-repository ppa:webupd8team/java
When you add the repository, you’ll see a message like this:
output Oracle Java (JDK) Installer (automatically downloads and installs Oracle JDK8). There are no actual Java files in this PPA.
Important -> Why Oracle Java 7 And 6 Installers No Longer Work: http://www.webupd8.org/2017/06/why-oracle-java-7-and-6-installers-no.html
Update: Oracle Java 9 has reached end of life: http://www.oracle.com/technetwork/java/javase/downloads/jdk9-downloads-3848520.html
The PPA supports Ubuntu 18.04, 17.10, 16.04, 14.04 and 12.04.
More info (and Ubuntu installation instructions):
- for Oracle Java 8: http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html
Debian installation instructions:
- Oracle Java 8: http://www.webupd8.org/2014/03/how-to-install-oracle-java-8-in-debian.html
For Oracle Java 10, see a different PPA: https://www.linuxuprising.com/2018/04/install-oracle-java-10-in-ubuntu-or.html
More info: https://launchpad.net/~webupd8team/+archive/ubuntu/java
Press [ENTER] to continue or ctrl-c to cancel adding it
Press ENTER to continue. It will attempt to import some GPG signing keys, but it won’t be able to find any valid ones:
Outputgpg: keybox '/tmp/tmpgt9wdvth/pubring.gpg' created
gpg: /tmp/tmpgt9wdvth/trustdb.gpg: trustdb created
gpg: key C2518248EEA14886: public key "Launchpad VLC" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg: imported: 1
gpg: no valid OpenPGP data found.
Execute the following command to add the GPG key for the repository source manually:
- sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C2518248EEA14886
Then update your package list:
- sudo apt update
Once the package list updates, install Java 8:
- sudo apt install oracle-java8-installer
Your system will download the JDK from Oracle and ask you to accept the license agreement. Accept the agreement and the JDK will install.
To install Oracle Java 10, first add its repository:
- sudo add-apt-repository ppa:linuxuprising/java
You’ll see this message:
Output Oracle Java 10 installer
Java binaries are not hosted in this PPA due to licensing. The packages in this PPA download and install Oracle Java 10 (JDK 10), so a working Internet connection is required.
The packages in this PPA are based on the WebUpd8 Oracle Java PPA packages: https://launchpad.net/~webupd8team/+archive/ubuntu/java
Created for users of https://www.linuxuprising.com/
Issues or suggestions? Leave a comment here: https://www.linuxuprising.com/2018/04/install-oracle-java-10-in-ubuntu-or.html
More info: https://launchpad.net/~linuxuprising/+archive/ubuntu/java
Press [ENTER] to continue or ctrl-c to cancel adding it
Press ENTER to continue the installation. Like with Java 8, you’ll see a message about invalid signing keys:
Outputgpg: keybox '/tmp/tmpvuqsh9ui/pubring.gpg' created
gpg: /tmp/tmpvuqsh9ui/trustdb.gpg: trustdb created
gpg: key EA8CACC073C3DB2A: public key "Launchpad PPA for Linux Uprising" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: no valid OpenPGP data found.
Execute this command to import the necessary key:
- sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EA8CACC073C3DB2A
Then update your package list:
- sudo apt update
Once the package list updates, install Java 10:
- sudo apt install oracle-java10-installer
Your system will download the JDK from Oracle and ask you to accept the license agreement. Accept the agreement and the JDK will install.
Now let’s look at how to select which version of Java you want to use.
You can have multiple Java installations on one server. You can configure which version is the default for use on the command line by using the update-alternatives command.
- sudo update-alternatives --config java
This is what the output would look like if you’ve installed all versions of Java in this tutorial:
OutputThere are 3 choices for the alternative java (providing /usr/bin/java).
Selection Path Priority Status
------------------------------------------------------------
0 /usr/lib/jvm/java-10-oracle/bin/java 1091 auto mode
* 1 /usr/lib/jvm/java-10-oracle/bin/java 1091 manual mode
2 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 1081 manual mode
3 /usr/lib/jvm/java-8-oracle/jre/bin/java 1081 manual mode
Press <enter> to keep the current choice[*], or type selection number:
Choose the number associated with the Java version to use it as the default, or press ENTER to leave the current settings in place.
You can do this for other Java commands, such as the compiler (javac):
- sudo update-alternatives --config javac
Other commands for which this command can be run include, but are not limited to: keytool, javadoc, and jarsigner.
Let’s set the JAVA_HOME environment variable next.
Many programs written in Java use the JAVA_HOME environment variable to determine the Java installation location.
To set this environment variable, first determine where Java is installed. Use the update-alternatives command again:
- sudo update-alternatives --config java
This command shows each installation of Java along with its installation path:
Output Selection Path Priority Status
------------------------------------------------------------
0 /usr/lib/jvm/java-10-oracle/bin/java 1091 auto mode
* 1 /usr/lib/jvm/java-10-oracle/bin/java 1091 manual mode
2 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 1081 manual mode
3 /usr/lib/jvm/java-8-oracle/jre/bin/java 1081 manual mode
In this case the installation paths are as follows:
/usr/lib/jvm/java-10-oracle/bin/java
/usr/lib/jvm/java-8-oracle/jre/bin/java
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
These paths show the path to the java executable.
Copy the path for your preferred installation, excluding the trailing bin/java component. Then open /etc/environment using nano or your favorite text editor:
- sudo nano /etc/environment
At the end of this file, add the following line, making sure to replace the highlighted path with your own copied path:
JAVA_HOME="/usr/lib/jvm/java-8-oracle/jre"
Modifying this file will set the JAVA_HOME path for all users on your system.
Save the file and exit the editor.
Now reload this file to apply the changes to your current session:
- source /etc/environment
Verify that the environment variable is set:
- echo $JAVA_HOME
You’ll see the path you just set:
Output/usr/lib/jvm/java-8-oracle/jre
Other users will need to execute the command source /etc/environment or log out and log back in to apply this setting.
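If you prefer not to copy the path by hand, the trailing bin/java component can be stripped with shell parameter expansion. This is a minimal sketch; the path below is one of the example installations shown earlier, so substitute the one from your own update-alternatives output:

```shell
# Example path as reported by update-alternatives (substitute your own):
JAVA_PATH="/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java"

# ${var%pattern} removes the shortest matching suffix, here /bin/java:
JAVA_HOME_VALUE="${JAVA_PATH%/bin/java}"
echo "$JAVA_HOME_VALUE"   # /usr/lib/jvm/java-8-openjdk-amd64/jre
```

The resulting value is what you would place in the JAVA_HOME="..." line in /etc/environment.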
In this tutorial you installed multiple versions of Java and learned how to manage them. You can now install software which runs on Java, such as Tomcat, Jetty, Glassfish, Cassandra or Jenkins.
Docker is a great tool for automating the deployment of Linux applications inside software containers, but to take full advantage of its potential each component of an application should run in its own individual container. For complex applications with a lot of components, orchestrating all the containers to start up, communicate, and shut down together can quickly become unwieldy.
The Docker community came up with a popular solution called Fig, which allowed you to use a single YAML file to orchestrate all of your Docker containers and configurations. This became so popular that the Docker team decided to make Docker Compose based on the Fig source; Fig itself is now deprecated. Docker Compose makes it easier for users to orchestrate the processes of Docker containers, including starting up, shutting down, and setting up intra-container linking and volumes.
In this tutorial, we’ll show you how to install the latest version of Docker Compose to help you manage multi-container applications on a Debian 9 server.
To follow this article, you will need:
Note: Even though the Prerequisites give instructions for installing Docker on Debian 9, the docker commands in this article should work on other operating systems as long as Docker is installed.
Although we can install Docker Compose from the official Debian repositories, it is several minor versions behind the latest release, so we’ll install it from Docker’s GitHub repository. The command below is slightly different than the one you’ll find on the Releases page. By using the -o flag to specify the output file first rather than redirecting the output, this syntax avoids the permission-denied error that occurs when using sudo.
We’ll check the current release and, if necessary, update it in the command below:
- sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
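The backtick substitutions in the URL fill in your kernel name and machine architecture so that curl requests the right release asset. You can preview what they expand to (a quick sketch; the exact values depend on your system):

```shell
# Print the release asset name the download URL resolves to.
# On a 64-bit Debian server this is typically docker-compose-Linux-x86_64.
echo "docker-compose-$(uname -s)-$(uname -m)"
```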
Next we’ll set the permissions:
- sudo chmod +x /usr/local/bin/docker-compose
Then we’ll verify that the installation was successful by checking the version:
- docker-compose --version
This will print out the version we installed:
Outputdocker-compose version 1.22.0, build f46880fe
Now that we have Docker Compose installed, we’re ready to run a “Hello World” example.
The public Docker registry, Docker Hub, includes a Hello World image for demonstration and testing. It illustrates the minimal configuration required to run a container using Docker Compose: a YAML file that calls a single image. We’ll create this minimal configuration to run our hello-world container.
First, we’ll create a directory for the YAML file and move into it:
- mkdir hello-world
- cd hello-world
Then, we’ll create the YAML file:
- nano docker-compose.yml
Put the following contents into the file, save the file, and exit the text editor:
my-test:
image: hello-world
The first line in the YAML file is used as part of the container name. The second line specifies which image to use to create the container. When we run the docker-compose up command, it will look for a local image by the name we specified, hello-world.
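If you’d rather script these steps than use an editor, the directory and the two-line compose file above can be created with a heredoc. This is just a sketch of the same steps; nothing new is assumed:

```shell
# Create the project directory and write the minimal compose file.
mkdir -p hello-world && cd hello-world
cat > docker-compose.yml <<'EOF'
my-test:
  image: hello-world
EOF

# Show the result:
cat docker-compose.yml
```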
We can look manually at images on our system with the docker images command:
- docker images
When there are no local images at all, only the column headings display:
OutputREPOSITORY TAG IMAGE ID CREATED SIZE
Now, while still in the ~/hello-world directory, we’ll execute the following command:
- docker-compose up
The first time we run the command, if there’s no local image named hello-world, Docker Compose will pull it from the Docker Hub public repository:
OutputPulling my-test (hello-world:)...
latest: Pulling from library/hello-world
9db2ca6ccae0: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest
. . .
After pulling the image, docker-compose creates a container, attaches, and runs the hello program, which in turn confirms that the installation appears to be working:
Output. . .
Creating helloworld_my-test_1...
Attaching to helloworld_my-test_1
my-test_1 |
my-test_1 | Hello from Docker.
my-test_1 | This message shows that your installation appears to be working correctly.
my-test_1 |
. . .
Then it prints an explanation of what it did:
Output To generate this message, Docker took the following steps:
my-test_1 | 1. The Docker client contacted the Docker daemon.
my-test_1 | 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
my-test_1 | (amd64)
my-test_1 | 3. The Docker daemon created a new container from that image which runs the
my-test_1 | executable that produces the output you are currently reading.
my-test_1 | 4. The Docker daemon streamed that output to the Docker client, which sent it
my-test_1 | to your terminal.
Docker containers only run as long as the command is active, so once hello finished running, the container stopped. Consequently, when we look at active processes, the column headers will appear, but the hello-world container won’t be listed because it’s not running:
- docker ps
OutputCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
We can see the container information, which we’ll need in the next step, by using the -a flag. This shows all containers, not just active ones:
- docker ps -a
OutputCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
06069fd5ca23 hello-world "/hello" 35 minutes ago Exited (0) 35 minutes ago hello-world_my-test_1
This displays the information we’ll need to remove the container when we’re done with it.
To avoid using unnecessary disk space, we’ll remove the local image. To do so, we’ll need to delete all the containers that reference the image using the docker rm command, followed by either the CONTAINER ID or the NAME. Below, we’re using the CONTAINER ID from the docker ps -a command we just ran. Be sure to substitute the ID of your container:
- docker rm 06069fd5ca23
Once all containers that reference the image have been removed, we can remove the image:
- docker rmi hello-world
We’ve now installed Docker Compose, tested our installation by running a Hello World example, and removed the test image and container.
While the Hello World example confirmed our installation, the simple configuration does not show one of the main benefits of Docker Compose — being able to bring a group of Docker containers up and down all at the same time. To see the power of Docker Compose in action, you might like to check out this practical example, How To Configure a Continuous Integration Testing Environment with Docker and Docker Compose on Ubuntu 16.04.
UFW, or Uncomplicated Firewall, is an interface to iptables that is geared towards simplifying the process of configuring a firewall. While iptables is a solid and flexible tool, it can be difficult for beginners to learn how to use it to properly configure a firewall. If you’re looking to get started securing your network, and you’re not sure which tool to use, UFW may be the right choice for you.
This tutorial will show you how to set up a firewall with UFW on Debian 9.
To follow this tutorial, you will need one Debian 9 server with a sudo non-root user, which you can set up by following Steps 1–3 in the Initial Server Setup with Debian 9 tutorial.
Debian does not install UFW by default. If you followed through the entire Initial Server Setup tutorial, you will have already installed and enabled UFW. If not, install it now using apt:
- sudo apt install ufw
We will set up UFW and enable it in the following steps.
This tutorial is written with IPv4 in mind, but will work for IPv6 as well as long as you enable it. If your Debian server has IPv6 enabled, ensure that UFW is configured to support IPv6 so that it will manage firewall rules for IPv6 in addition to IPv4. To do this, open the UFW configuration with nano or your favorite editor.
- sudo nano /etc/default/ufw
Then make sure the value of IPV6 is yes. It should look like this:
IPV6=yes
Save and close the file. Now, when UFW is enabled, it will be configured to write both IPv4 and IPv6 firewall rules. However, before enabling UFW, we will want to ensure that your firewall is configured to allow you to connect via SSH. Let’s start with setting the default policies.
If you’re just getting started with your firewall, the first rules to define are your default policies. These rules control how to handle traffic that does not explicitly match any other rules. By default, UFW is set to deny all incoming connections and allow all outgoing connections. This means anyone trying to reach your server would not be able to connect, while any application within the server would be able to reach the outside world.
Let’s set your UFW rules back to the defaults so we can be sure that you’ll be able to follow along with this tutorial. To set the defaults used by UFW, use these commands:
- sudo ufw default deny incoming
- sudo ufw default allow outgoing
These commands set the defaults to deny incoming and allow outgoing connections. These firewall defaults alone might suffice for a personal computer, but servers typically need to respond to incoming requests from outside users. We’ll look into that next.
If we enabled our UFW firewall now, it would deny all incoming connections. This means that we will need to create rules that explicitly allow legitimate incoming connections — SSH or HTTP connections, for example — if we want our server to respond to those types of requests. If you’re using a cloud server, you will probably want to allow incoming SSH connections so you can connect to and manage your server.
To configure your server to allow incoming SSH connections, you can use this command:
- sudo ufw allow ssh
This will create firewall rules that will allow all connections on port 22, which is the port that the SSH daemon listens on by default. UFW knows what port allow ssh means because it’s listed as a service in the /etc/services file.
However, we can actually write the equivalent rule by specifying the port instead of the service name. For example, this command works the same as the one above:
- sudo ufw allow 22
If you configured your SSH daemon to use a different port, you will have to specify the appropriate port. For example, if your SSH server is listening on port 2222, you can use this command to allow connections on that port:
- sudo ufw allow 2222
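If you’re not sure which port your daemon is configured to use, you can check sshd_config first. A quick sketch; the absence of a Port line means the default, 22:

```shell
# Print any configured Port lines; fall back to the default if none is set
# (-s silences the error message if the file itself is missing).
grep -is '^Port' /etc/ssh/sshd_config || echo "Port 22 (default)"
```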
Now that your firewall is configured to allow incoming SSH connections, we can enable it.
To enable UFW, use this command:
- sudo ufw enable
You will receive a warning that says the command may disrupt existing SSH connections. We already set up a firewall rule that allows SSH connections, so it should be fine to continue. Respond to the prompt with y and hit ENTER.
The firewall is now active. Run the sudo ufw status verbose command to see the rules that are set. The rest of this tutorial covers how to use UFW in more detail, like allowing or denying different kinds of connections.
At this point, you should allow all of the other connections that your server needs to respond to. The connections that you should allow depend on your specific needs. Luckily, you already know how to write rules that allow connections based on a service name or port; we already did this for SSH on port 22. You can also do this for:
HTTP on port 80: sudo ufw allow http or sudo ufw allow 80
HTTPS on port 443: sudo ufw allow https or sudo ufw allow 443
There are several other ways to allow other connections, aside from specifying a port or known service.
You can specify port ranges with UFW. Some applications use multiple ports, instead of a single port.
For example, to allow X11 connections, which use ports 6000-6007, use these commands:
- sudo ufw allow 6000:6007/tcp
- sudo ufw allow 6000:6007/udp
When specifying port ranges with UFW, you must specify the protocol (tcp or udp) that the rules should apply to. We haven’t mentioned this before because not specifying the protocol automatically allows both protocols, which is OK in most cases.
When working with UFW, you can also specify IP addresses. For example, if you want to allow connections from a specific IP address, such as a work or home IP address of 203.0.113.4, you need to specify from, then the IP address:
- sudo ufw allow from 203.0.113.4
You can also specify a specific port that the IP address is allowed to connect to by adding to any port followed by the port number. For example, if you want to allow 203.0.113.4 to connect to port 22 (SSH), use this command:
- sudo ufw allow from 203.0.113.4 to any port 22
If you want to allow a subnet of IP addresses, you can do so using CIDR notation to specify a netmask. For example, if you want to allow all of the IP addresses ranging from 203.0.113.1 to 203.0.113.254, you could use this command:
- sudo ufw allow from 203.0.113.0/24
Likewise, you may also specify the destination port that the subnet 203.0.113.0/24 is allowed to connect to. Again, we’ll use port 22 (SSH) as an example:
- sudo ufw allow from 203.0.113.0/24 to any port 22
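As a sanity check on the /24 notation used above: a prefix length of 24 leaves 32 minus 24 = 8 host bits, so the rule covers 256 addresses (203.0.113.0 through 203.0.113.255). A quick arithmetic sketch:

```shell
# Number of addresses covered by a given CIDR prefix length:
PREFIX=24
echo $(( 1 << (32 - PREFIX) ))   # 256
```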
If you want to create a firewall rule that only applies to a specific network interface, you can do so by specifying “allow in on” followed by the name of the network interface.
You may want to look up your network interfaces before continuing. To do so, use this command:
- ip addr
Output Excerpt2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
. . .
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
. . .
This output shows two network interface names, eth0 and eth1. Interfaces are typically named something like eth0 or enp3s2.
So, if your server has a public network interface called eth0, you could allow HTTP traffic (port 80) to it with this command:
- sudo ufw allow in on eth0 to any port 80
Doing so would allow your server to receive HTTP requests from the public internet.
Or, if you want your MySQL database server (port 3306) to listen for connections on the private network interface eth1, for example, you could use this command:
- sudo ufw allow in on eth1 to any port 3306
This would allow other servers on your private network to connect to your MySQL database.
If you haven’t changed the default policy for incoming connections, UFW is configured to deny all incoming connections. Generally, this simplifies the process of creating a secure firewall policy by requiring you to create rules that explicitly allow specific ports and IP addresses through.
However, sometimes you will want to deny specific connections based on the source IP address or subnet, perhaps because you know that your server is being attacked from there. Also, if you want to change your default incoming policy to allow (which is not recommended), you would need to create deny rules for any services or IP addresses that you don’t want to allow connections for.
To write deny rules, you can use the commands described above, replacing allow with deny.
For example, to deny HTTP connections, you could use this command:
- sudo ufw deny http
Or if you want to deny all connections from 203.0.113.4, you could use this command:
- sudo ufw deny from 203.0.113.4
Now let’s take a look at how to delete rules.
Knowing how to delete firewall rules is just as important as knowing how to create them. There are two different ways to specify which rules to delete: by rule number or by the actual rule (similar to how the rules were specified when they were created). We’ll start with the delete by rule number method because it is easier.
If you’re using the rule number to delete firewall rules, the first thing you’ll want to do is get a list of your firewall rules. The UFW status command has an option to display numbers next to each rule, as demonstrated here:
- sudo ufw status numbered
Numbered Output:Status: active
To Action From
-- ------ ----
[ 1] 22 ALLOW IN 15.15.15.0/24
[ 2] 80 ALLOW IN Anywhere
If we decide that we want to delete rule 2, the one that allows port 80 (HTTP) connections, we can specify it in a UFW delete command like this:
- sudo ufw delete 2
This would show a confirmation prompt then delete rule 2, which allows HTTP connections. Note that if you have IPv6 enabled, you would want to delete the corresponding IPv6 rule as well.
The alternative to rule numbers is to specify the actual rule to delete. For example, if you want to remove the allow http rule, you could write it like this:
- sudo ufw delete allow http
You could also specify the rule by allow 80, instead of by service name:
- sudo ufw delete allow 80
This method will delete both IPv4 and IPv6 rules, if they exist.
At any time, you can check the status of UFW with this command:
- sudo ufw status verbose
If UFW is disabled, which it is by default, you’ll see something like this:
OutputStatus: inactive
If UFW is active, which it should be if you followed Step 3, the output will say that it’s active and it will list any rules that are set. For example, if the firewall is set to allow SSH (port 22) connections from anywhere, the output might look something like this:
OutputStatus: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp ALLOW IN Anywhere
Use the status command if you want to check how UFW has configured the firewall.
If you decide you don’t want to use UFW, you can disable it with this command:
- sudo ufw disable
Any rules that you created with UFW will no longer be active. You can always run sudo ufw enable if you need to activate it later.
If you already have UFW rules configured but you decide that you want to start over, you can use the reset command:
- sudo ufw reset
This will disable UFW and delete any rules that were previously defined. Keep in mind that the default policies won’t change to their original settings, if you modified them at any point. This should give you a fresh start with UFW.
Your firewall is now configured to allow (at least) SSH connections. Be sure to allow any other incoming connections that your server needs, while limiting any unnecessary connections, so your server will be functional and secure.
To learn about more common UFW configurations, check out the UFW Essentials: Common Firewall Rules and Commands tutorial.
Django is a powerful web framework that can help you get your Python application or website off the ground. Django includes a simplified development server for testing your code locally, but for anything even slightly production related, a more secure and powerful web server is required.
In this guide, we will demonstrate how to install and configure some components on Debian 9 to support and serve Django applications. We will be setting up a PostgreSQL database instead of using the default SQLite database. We will configure the Gunicorn application server to interface with our applications. We will then set up Nginx to reverse proxy to Gunicorn, giving us access to its security and performance features to serve our apps.
In order to complete this guide, you should have a fresh Debian 9 server instance with a basic firewall and a non-root user with sudo privileges configured. You can learn how to set this up by running through our initial server setup guide.
We will be installing Django within a virtual environment. Installing Django into an environment specific to your project will allow your projects and their requirements to be handled separately.
Once we have our database and application up and running, we will install and configure the Gunicorn application server. This will serve as an interface to our application, translating client requests from HTTP to Python calls that our application can process. We will then set up Nginx in front of Gunicorn to take advantage of its high performance connection handling mechanisms and its easy-to-implement security features.
Let’s get started.
To begin the process, we’ll download and install all of the items we need from the Debian repositories. We will use the Python package manager pip to install additional components a bit later.
We need to update the local apt package index and then download and install the packages. The packages we install depend on which version of Python your project will use.
If you are using Django with Python 3, type:
- sudo apt update
- sudo apt install python3-pip python3-dev libpq-dev postgresql postgresql-contrib nginx curl
Django 1.11 is the last release of Django that will support Python 2. If you are starting new projects, it is strongly recommended that you choose Python 3. If you still need to use Python 2, type:
- sudo apt update
- sudo apt install python-pip python-dev libpq-dev postgresql postgresql-contrib nginx curl
This will install pip, the Python development files needed to build Gunicorn later, the Postgres database system and the libraries needed to interact with it, and the Nginx web server.
We’re going to jump right in and create a database and database user for our Django application.
By default, Postgres uses an authentication scheme called “peer authentication” for local connections. Basically, this means that if the user’s operating system username matches a valid Postgres username, that user can log in with no further authentication.
During the Postgres installation, an operating system user named postgres was created to correspond to the postgres PostgreSQL administrative user. We need to use this user to perform administrative tasks. We can use sudo and pass in the username with the -u option.
Log into an interactive Postgres session by typing:
- sudo -u postgres psql
You will be given a PostgreSQL prompt where we can set up our requirements.
First, create a database for your project:
- CREATE DATABASE myproject;
Note: Every Postgres statement must end with a semi-colon, so make sure that your command ends with one if you are experiencing issues.
Next, create a database user for our project. Make sure to select a secure password:
- CREATE USER myprojectuser WITH PASSWORD 'password';
Afterwards, we’ll modify a few of the connection parameters for the user we just created. This will speed up database operations so that the correct values do not have to be queried and set each time a connection is established.
We are setting the default encoding to UTF-8, which Django expects. We are also setting the default transaction isolation scheme to “read committed”, which blocks reads from uncommitted transactions. Lastly, we are setting the timezone. By default, our Django projects will be set to use UTC. These are all recommendations from the Django project itself:
- ALTER ROLE myprojectuser SET client_encoding TO 'utf8';
- ALTER ROLE myprojectuser SET default_transaction_isolation TO 'read committed';
- ALTER ROLE myprojectuser SET timezone TO 'UTC';
Now, we can give our new user access to administer our new database:
- GRANT ALL PRIVILEGES ON DATABASE myproject TO myprojectuser;
When you are finished, exit out of the PostgreSQL prompt by typing:
- \q
Postgres is now set up so that Django can connect to and manage its database information.
Now that we have our database, we can begin getting the rest of our project requirements ready. We will be installing our Python requirements within a virtual environment for easier management.
To do this, we first need access to the virtualenv command. We can install this with pip.
If you are using Python 3, upgrade pip and install the package by typing:
- sudo -H pip3 install --upgrade pip
- sudo -H pip3 install virtualenv
If you are using Python 2, upgrade pip and install the package by typing:
- sudo -H pip install --upgrade pip
- sudo -H pip install virtualenv
With virtualenv installed, we can start forming our project. Create and move into a directory where we can keep our project files:
- mkdir ~/myprojectdir
- cd ~/myprojectdir
Within the project directory, create a Python virtual environment by typing:
- virtualenv myprojectenv
This will create a directory called myprojectenv within your myprojectdir directory. Inside, it will install a local version of Python and a local version of pip. We can use this to install and configure an isolated Python environment for our project.
Before we install our project’s Python requirements, we need to activate the virtual environment. You can do that by typing:
- source myprojectenv/bin/activate
Your prompt should change to indicate that you are now operating within a Python virtual environment. It will look something like this: (myprojectenv)user@host:~/myprojectdir$.
With your virtual environment active, install Django, Gunicorn, and the psycopg2 PostgreSQL adaptor with the local instance of pip:
Note: When the virtual environment is activated (when your prompt has (myprojectenv) preceding it), use pip instead of pip3, even if you are using Python 3. The virtual environment’s copy of the tool is always named pip, regardless of the Python version.
- pip install django gunicorn psycopg2-binary
You should now have all of the software needed to start a Django project.
With our Python components installed, we can create the actual Django project files.
Since we already have a project directory, we will tell Django to install the files here. It will create a second level directory with the actual code, which is normal, and place a management script in this directory. The key to this is that we are defining the directory explicitly instead of allowing Django to make decisions relative to our current directory:
- django-admin.py startproject myproject ~/myprojectdir
At this point, your project directory (~/myprojectdir
in our case) should have the following content:
~/myprojectdir/manage.py: A Django project management script.
~/myprojectdir/myproject/: The Django project package. This should contain the __init__.py, settings.py, urls.py, and wsgi.py files.
~/myprojectdir/myprojectenv/: The virtual environment directory we created earlier.
The first thing we should do with our newly created project files is adjust the settings. Open the settings file in your text editor:
- nano ~/myprojectdir/myproject/settings.py
Start by locating the ALLOWED_HOSTS
directive. This defines a list of the server’s addresses or domain names that may be used to connect to the Django instance. Any incoming request with a Host header that is not in this list will raise an exception. Django requires that you set this to prevent a certain class of security vulnerability.
In the square brackets, list the IP addresses or domain names that are associated with your Django server. Each item should be listed in quotations, with entries separated by a comma. If you wish to allow requests for an entire domain and any subdomains, prepend a period to the beginning of the entry. In the snippet below, there are a few commented-out examples used to demonstrate:
Note: Be sure to include localhost
as one of the options since we will be proxying connections through a local Nginx instance.
. . .
# The simplest case: just add the domain name(s) and IP addresses of your Django server
# ALLOWED_HOSTS = [ 'example.com', '203.0.113.5']
# To respond to 'example.com' and any subdomains, start the domain with a dot
# ALLOWED_HOSTS = ['.example.com', '203.0.113.5']
ALLOWED_HOSTS = ['your_server_domain_or_IP', 'second_domain_or_IP', . . ., 'localhost']
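The leading-dot rule can be illustrated with a short, self-contained sketch. This mirrors the matching behavior Django applies to the Host header, but it is a simplification for illustration, not Django's actual validation code:

```python
# Simplified sketch of how a Host header is checked against ALLOWED_HOSTS.
# This mirrors the leading-dot rule but is not Django's actual code.
def host_is_allowed(host, allowed_hosts):
    for pattern in allowed_hosts:
        if pattern.startswith('.'):
            # '.example.com' matches the bare domain and any subdomain
            if host == pattern[1:] or host.endswith(pattern):
                return True
        elif host == pattern:
            return True
    return False

print(host_is_allowed('www.example.com', ['.example.com']))  # True
print(host_is_allowed('example.com', ['.example.com']))      # True
print(host_is_allowed('evil.com', ['.example.com']))         # False
```

A request whose Host header fails this check is what triggers the exception mentioned above.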
Next, find the section that configures database access. It will start with DATABASES
. The configuration in the file is for a SQLite database. We already created a PostgreSQL database for our project, so we need to adjust the settings.
Change the settings with your PostgreSQL database information. We tell Django to use the psycopg2
adaptor we installed with pip
. We need to give the database name, the database username, the database user’s password, and then specify that the database is located on the local computer. You can leave the PORT
setting as an empty string:
. . .
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'myproject',
'USER': 'myprojectuser',
'PASSWORD': 'password',
'HOST': 'localhost',
'PORT': '',
}
}
. . .
Next, move down to the bottom of the file and add a setting indicating where the static files should be placed. This is necessary so that Nginx can handle requests for these items. The following line tells Django to place them in a directory called static
in the base project directory:
. . .
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static/')
Save and close the file when you are finished.
Now, we can migrate the initial database schema to our PostgreSQL database using the management script:
- ~/myprojectdir/manage.py makemigrations
- ~/myprojectdir/manage.py migrate
Create an administrative user for the project by typing:
- ~/myprojectdir/manage.py createsuperuser
You will have to select a username, provide an email address, and choose and confirm a password.
We can collect all of the static content into the directory location we configured by typing:
- ~/myprojectdir/manage.py collectstatic
You will have to confirm the operation. The static files will then be placed in a directory called static
within your project directory.
If you followed the initial server setup guide, you should have a UFW firewall protecting your server. In order to test the development server, we’ll have to allow access to the port we’ll be using.
Create an exception for port 8000 by typing:
- sudo ufw allow 8000
Finally, you can test out your project by starting up the Django development server with this command:
- ~/myprojectdir/manage.py runserver 0.0.0.0:8000
In your web browser, visit your server’s domain name or IP address followed by :8000
:
http://server_domain_or_IP:8000
You should see the default Django index page:
If you append /admin
to the end of the URL in the address bar, you will be prompted for the administrative username and password you created with the createsuperuser
command:
After authenticating, you can access the default Django admin interface:
When you are finished exploring, hit CTRL-C in the terminal window to shut down the development server.
The last thing we want to do before leaving our virtual environment is test Gunicorn to make sure that it can serve the application. We can do this by entering our project directory and using gunicorn
to load the project’s WSGI module:
- cd ~/myprojectdir
- gunicorn --bind 0.0.0.0:8000 myproject.wsgi
This will start Gunicorn on the same interface that the Django development server was running on. You can go back and test the app again.
Note: The admin interface will not have any of the styling applied since Gunicorn does not know how to find the static CSS content responsible for this.
We passed Gunicorn a module by specifying the relative directory path to Django’s wsgi.py
file, which is the entry point to our application, using Python’s module syntax. Inside of this file, a function called application
is defined, which is used to communicate with the application. To learn more, consult the WSGI specification.
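The interface Gunicorn relies on can be shown with a minimal, self-contained WSGI callable. This is a sketch of the protocol itself; Django's wsgi.py provides an equivalent callable via its own machinery rather than defining one by hand:

```python
# Minimal sketch of the WSGI interface that Gunicorn expects.
def application(environ, start_response):
    body = b"Hello from WSGI\n"
    status = "200 OK"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    # The server-supplied start_response callable receives the status
    # line and response headers before any body is returned.
    start_response(status, headers)
    # The return value is an iterable of bytes objects.
    return [body]
```

Gunicorn imports the `myproject.wsgi` module and invokes this module-level `application` callable once per request.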
When you are finished testing, hit CTRL-C in the terminal window to stop Gunicorn.
We’re now finished configuring our Django application. We can back out of our virtual environment by typing:
- deactivate
The virtual environment indicator in your prompt will be removed.
We have tested that Gunicorn can interact with our Django application, but we should implement a more robust way of starting and stopping the application server. To accomplish this, we’ll make systemd service and socket files.
The Gunicorn socket will be created at boot and will listen for connections. When a connection occurs, systemd will automatically start the Gunicorn process to handle the connection.
Start by creating and opening a systemd socket file for Gunicorn with sudo
privileges:
- sudo nano /etc/systemd/system/gunicorn.socket
Inside, we will create a [Unit]
section to describe the socket, a [Socket]
section to define the socket location, and an [Install]
section to make sure the socket is created at the right time:
[Unit]
Description=gunicorn socket
[Socket]
ListenStream=/run/gunicorn.sock
[Install]
WantedBy=sockets.target
Save and close the file when you are finished.
Next, create and open a systemd service file for Gunicorn with sudo
privileges in your text editor. The service filename should match the socket filename with the exception of the extension:
- sudo nano /etc/systemd/system/gunicorn.service
Start with the [Unit]
section, which is used to specify metadata and dependencies. We’ll put a description of our service here and tell the init system to only start this after the networking target has been reached. Because our service relies on the socket from the socket file, we need to include a Requires
directive to indicate that relationship:
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
Next, we’ll open up the [Service]
section. We’ll specify the user and group that we want the process to run under. We will give our regular user account ownership of the process since it owns all of the relevant files. We’ll give group ownership to the www-data
group so that Nginx can communicate easily with Gunicorn.
We’ll then map out the working directory and specify the command to use to start the service. In this case, we’ll have to specify the full path to the Gunicorn executable, which is installed within our virtual environment. We will bind the process to the Unix socket we created within the /run
directory so that the process can communicate with Nginx. We log all data to standard output so that the journald
process can collect the Gunicorn logs. We can also specify any optional Gunicorn tweaks here. For example, we specified 3 worker processes in this case:
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myprojectdir
ExecStart=/home/sammy/myprojectdir/myprojectenv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
myproject.wsgi:application
Finally, we’ll add an [Install]
section. This will tell systemd what to link this service to if we enable it to start at boot. We want this service to start when the regular multi-user system is up and running:
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=sammy
Group=www-data
WorkingDirectory=/home/sammy/myprojectdir
ExecStart=/home/sammy/myprojectdir/myprojectenv/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
myproject.wsgi:application
[Install]
WantedBy=multi-user.target
With that, our systemd service file is complete. Save and close it now.
We can now start and enable the Gunicorn socket. This will create the socket file at /run/gunicorn.sock
now and at boot. When a connection is made to that socket, systemd will automatically start the gunicorn.service
to handle it:
- sudo systemctl start gunicorn.socket
- sudo systemctl enable gunicorn.socket
We can confirm that the operation was successful by checking for the socket file.
Check the status of the process to find out whether it was able to start:
- sudo systemctl status gunicorn.socket
Next, check for the existence of the gunicorn.sock
file within the /run
directory:
- file /run/gunicorn.sock
Output/run/gunicorn.sock: socket
If the systemctl status
command indicated that an error occurred or if you do not find the gunicorn.sock
file in the directory, it’s an indication that the Gunicorn socket was not able to be created correctly. Check the Gunicorn socket’s logs by typing:
- sudo journalctl -u gunicorn.socket
Take another look at your /etc/systemd/system/gunicorn.socket
file to fix any problems before continuing.
Currently, if you’ve only started the gunicorn.socket
unit, the gunicorn.service
will not be active yet since the socket has not yet received any connections. You can check this by typing:
- sudo systemctl status gunicorn
Output● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled)
Active: inactive (dead)
To test the socket activation mechanism, we can send a connection to the socket through curl
by typing:
- curl --unix-socket /run/gunicorn.sock localhost
You should see the HTML output from your application in the terminal. This indicates that Gunicorn was started and was able to serve your Django application. You can verify that the Gunicorn service is running by typing:
- sudo systemctl status gunicorn
Output● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled)
Active: active (running) since Mon 2018-07-09 20:00:40 UTC; 4s ago
Main PID: 1157 (gunicorn)
Tasks: 4 (limit: 1153)
CGroup: /system.slice/gunicorn.service
├─1157 /home/sammy/myprojectdir/myprojectenv/bin/python3 /home/sammy/myprojectdir/myprojectenv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/run/gunicorn.sock myproject.wsgi:application
├─1178 /home/sammy/myprojectdir/myprojectenv/bin/python3 /home/sammy/myprojectdir/myprojectenv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/run/gunicorn.sock myproject.wsgi:application
├─1180 /home/sammy/myprojectdir/myprojectenv/bin/python3 /home/sammy/myprojectdir/myprojectenv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/run/gunicorn.sock myproject.wsgi:application
└─1181 /home/sammy/myprojectdir/myprojectenv/bin/python3 /home/sammy/myprojectdir/myprojectenv/bin/gunicorn --access-logfile - --workers 3 --bind unix:/run/gunicorn.sock myproject.wsgi:application
Jul 09 20:00:40 django1 systemd[1]: Started gunicorn daemon.
Jul 09 20:00:40 django1 gunicorn[1157]: [2018-07-09 20:00:40 +0000] [1157] [INFO] Starting gunicorn 19.9.0
Jul 09 20:00:40 django1 gunicorn[1157]: [2018-07-09 20:00:40 +0000] [1157] [INFO] Listening at: unix:/run/gunicorn.sock (1157)
Jul 09 20:00:40 django1 gunicorn[1157]: [2018-07-09 20:00:40 +0000] [1157] [INFO] Using worker: sync
Jul 09 20:00:40 django1 gunicorn[1157]: [2018-07-09 20:00:40 +0000] [1178] [INFO] Booting worker with pid: 1178
Jul 09 20:00:40 django1 gunicorn[1157]: [2018-07-09 20:00:40 +0000] [1180] [INFO] Booting worker with pid: 1180
Jul 09 20:00:40 django1 gunicorn[1157]: [2018-07-09 20:00:40 +0000] [1181] [INFO] Booting worker with pid: 1181
Jul 09 20:00:41 django1 gunicorn[1157]: - - [09/Jul/2018:20:00:41 +0000] "GET / HTTP/1.1" 200 16348 "-" "curl/7.58.0"
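As a sketch of what curl is doing here, Python's standard http.client can be made to speak HTTP over a Unix domain socket by overriding how the connection is opened. The `UnixHTTPConnection` class and `fetch_status` helper below are illustrative helpers, not part of any library:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP connection over a Unix domain socket, similar in spirit
    to `curl --unix-socket`."""
    def __init__(self, socket_path):
        # The host name only populates the Host header; routing
        # happens entirely through the socket path.
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

def fetch_status(socket_path, path="/"):
    """Return the HTTP status code served over the given Unix socket."""
    conn = UnixHTTPConnection(socket_path)
    try:
        conn.request("GET", path)
        return conn.getresponse().status
    finally:
        conn.close()
```

On a healthy setup, `fetch_status("/run/gunicorn.sock")` should return 200 once socket activation has started the service, which makes this handy for scripted health checks.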
If the output from curl
or the output of systemctl status
indicates that a problem occurred, check the logs for additional details:
- sudo journalctl -u gunicorn
Check your /etc/systemd/system/gunicorn.service
file for problems. If you make changes to the /etc/systemd/system/gunicorn.service
file, reload the daemon to reread the service definition and restart the Gunicorn process by typing:
- sudo systemctl daemon-reload
- sudo systemctl restart gunicorn
Make sure you troubleshoot the above issues before continuing.
Now that Gunicorn is set up, we need to configure Nginx to pass traffic to the process.
Start by creating and opening a new server block in Nginx’s sites-available
directory:
- sudo nano /etc/nginx/sites-available/myproject
Inside, open up a new server block. We will start by specifying that this block should listen on the normal port 80 and that it should respond to our server’s domain name or IP address:
server {
listen 80;
server_name server_domain_or_IP;
}
Next, we will tell Nginx to ignore any problems with finding a favicon. We will also tell it where to find the static assets that we collected in our ~/myprojectdir/static
directory. All of these files have a standard URI prefix of “/static”, so we can create a location block to match those requests:
server {
listen 80;
server_name server_domain_or_IP;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/sammy/myprojectdir;
}
}
Finally, we’ll create a location / {}
block to match all other requests. Inside of this location, we’ll include the standard proxy_params
file included with the Nginx installation and then we will pass the traffic directly to the Gunicorn socket:
server {
listen 80;
server_name server_domain_or_IP;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /home/sammy/myprojectdir;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
}
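It is worth noting how the root directive inside the /static/ location resolves to a file path: Nginx appends the full request URI to the root value. A tiny sketch of that mapping (an illustration of the rule, not Nginx code) shows why STATIC_ROOT had to point at a static directory inside the project directory:

```python
# Sketch of Nginx's `root` directive: the request URI is appended to
# the root value to produce the filesystem path that gets served.
def nginx_root_mapping(root, uri):
    return root.rstrip("/") + uri

print(nginx_root_mapping("/home/sammy/myprojectdir",
                         "/static/admin/css/base.css"))
# /home/sammy/myprojectdir/static/admin/css/base.css
```

Because /static/ is part of the resulting path, the collected files must live under /home/sammy/myprojectdir/static, which matches the STATIC_ROOT setting from earlier.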
Save and close the file when you are finished. Now, we can enable the file by linking it to the sites-enabled
directory:
- sudo ln -s /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled
Test your Nginx configuration for syntax errors by typing:
- sudo nginx -t
If no errors are reported, go ahead and restart Nginx by typing:
- sudo systemctl restart nginx
Finally, we need to open up our firewall to normal traffic on port 80. Since we no longer need access to the development server, we can remove the rule to open port 8000 as well:
- sudo ufw delete allow 8000
- sudo ufw allow 'Nginx Full'
You should now be able to go to your server’s domain or IP address to view your application.
Note: After configuring Nginx, the next step should be securing traffic to the server using SSL/TLS. This is important because without it, all information, including passwords, is sent over the network in plain text.
If you have a domain name, the easiest way to get an SSL certificate to secure your traffic is by using Let’s Encrypt. Follow this guide to set up Let’s Encrypt with Nginx on Debian 9, applying the procedure to the Nginx server block we created in this guide.
If you do not have a domain name, you can still secure your site for testing and learning with a self-signed SSL certificate. Again, follow the process using the Nginx server block we created in this tutorial.
If this last step does not show your application, you will need to troubleshoot your installation.
If Nginx displays the default page instead of proxying to your application, it usually means that you need to adjust the server_name
within the /etc/nginx/sites-available/myproject
file to point to your server’s IP address or domain name.
Nginx uses the server_name
to determine which server block to use to respond to requests. If you are seeing the default Nginx page, it is a sign that Nginx wasn’t able to match the request to a server block explicitly, so it’s falling back on the default block defined in /etc/nginx/sites-available/default
.
The server_name
in your project’s server block must be more specific than the one in the default server block to be selected.
A 502 error indicates that Nginx is unable to successfully proxy the request. A wide range of configuration problems express themselves with a 502 error, so more information is required to troubleshoot properly.
The primary place to look for more information is in Nginx’s error logs. Generally, this will tell you what conditions caused problems during the proxying event. Follow the Nginx error logs by typing:
- sudo tail -F /var/log/nginx/error.log
Now, make another request in your browser to generate a fresh error (try refreshing the page). You should see a fresh error message written to the log. If you look at the message, it should help you narrow down the problem.
You might see one of the following messages:
connect() to unix:/run/gunicorn.sock failed (2: No such file or directory)
This indicates that Nginx was unable to find the gunicorn.sock
file at the given location. You should compare the proxy_pass
location defined within /etc/nginx/sites-available/myproject
file to the actual location of the gunicorn.sock
file generated by the gunicorn.socket
systemd unit.
If you cannot find a gunicorn.sock
file within the /run
directory, it generally means that the systemd socket file was unable to create it. Go back to the section on checking for the Gunicorn socket file to step through the troubleshooting steps for Gunicorn.
connect() to unix:/run/gunicorn.sock failed (13: Permission denied)
This indicates that Nginx was unable to connect to the Gunicorn socket because of permissions problems. This can happen when the procedure is followed using the root user instead of a sudo
user. While systemd is able to create the Gunicorn socket file, Nginx is unable to access it.
This can happen if there are limited permissions at any point between the root directory (/
) and the gunicorn.sock
file. We can see the permissions and ownership values of the socket file and each of its parent directories by passing the absolute path to our socket file to the namei
command:
- namei -l /run/gunicorn.sock
Outputf: /run/gunicorn.sock
drwxr-xr-x root root /
drwxr-xr-x root root run
srw-rw-rw- root root gunicorn.sock
The output displays the permissions of each of the directory components. By looking at the permissions (first column), owner (second column) and group owner (third column), we can figure out what type of access is allowed to the socket file.
In the above example, the socket file and each of the directories leading up to the socket file have world read and execute permissions (the permissions column for the directories end with r-x
instead of ---
). The Nginx process should be able to access the socket successfully.
If any of the directories leading up to the socket do not have world read and execute permission, Nginx will not be able to access the socket without allowing world read and execute permissions or making sure group ownership is given to a group that Nginx is a part of.
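The same check that namei lets you do by eye can be scripted. The following hypothetical helper walks every ancestor directory of a path and reports any that lack the world execute (search) bit, which is the permission Nginx needs to traverse into the directory:

```python
import os
import stat

def world_traversal_problems(path):
    """Return the ancestor directories of `path` that lack the world
    execute (search) bit, i.e. the directories that would block a
    process like Nginx from reaching the file."""
    problems = []
    parts = os.path.normpath(os.path.abspath(path)).split(os.sep)
    current = os.sep
    for part in parts[1:-1]:
        current = os.path.join(current, part)
        if not os.stat(current).st_mode & stat.S_IXOTH:
            problems.append(current)
    return problems

# On a healthy setup, world_traversal_problems("/run/gunicorn.sock")
# should return an empty list; any directory it lists is a suspect.
```

This is only a sketch: it checks the world bits, so it will flag directories that are in fact reachable through group membership.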
One message that you may see from Django when attempting to access parts of the application in the web browser is:
OperationalError at /admin/login/
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
This indicates that Django is unable to connect to the Postgres database. Make sure that the Postgres instance is running by typing:
- sudo systemctl status postgresql
If it is not, you can start it and enable it to start automatically at boot (if it is not already configured to do so) by typing:
- sudo systemctl start postgresql
- sudo systemctl enable postgresql
If you are still having issues, make sure the database settings defined in the ~/myprojectdir/myproject/settings.py
file are correct.
For additional troubleshooting, the logs can help narrow down root causes. Check each of them in turn and look for messages indicating problem areas.
The following logs may be helpful:
sudo journalctl -u nginx
sudo less /var/log/nginx/access.log
sudo less /var/log/nginx/error.log
sudo journalctl -u gunicorn
sudo journalctl -u gunicorn.socket
As you update your configuration or application, you will likely need to restart the processes to adjust to your changes.
If you update your Django application, you can restart the Gunicorn process to pick up the changes by typing:
- sudo systemctl restart gunicorn
If you change Gunicorn socket or service files, reload the daemon and restart the process by typing:
- sudo systemctl daemon-reload
- sudo systemctl restart gunicorn.socket gunicorn.service
If you change the Nginx server block configuration, test the configuration and then restart Nginx by typing:
- sudo nginx -t && sudo systemctl restart nginx
These commands are helpful for picking up changes as you adjust your configuration.
In this guide, we’ve set up a Django project in its own virtual environment. We’ve configured Gunicorn to translate client requests so that Django can handle them. Afterwards, we set up Nginx to act as a reverse proxy to handle client connections and serve the correct project depending on the client request.
Django makes creating projects and applications simple by providing many of the common pieces, allowing you to focus on the unique elements. By leveraging the general tool chain described in this article, you can easily serve the applications you create from a single server.
An important part of managing server configuration and infrastructure includes maintaining an easy way to look up network interfaces and IP addresses by name, by setting up a proper Domain Name System (DNS). Using fully qualified domain names (FQDNs), instead of IP addresses, to specify network addresses eases the configuration of services and applications, and increases the maintainability of configuration files. Setting up your own DNS for your private network is a great way to improve the management of your servers.
In this tutorial, we will go over how to set up an internal DNS server, using the BIND name server software (BIND9) on Debian 9, that can be used by your servers to resolve private hostnames and private IP addresses. This provides a central way to manage your internal hostnames and private IP addresses, which is indispensable when your environment expands to more than a few hosts.
To complete this tutorial, you will need the following infrastructure. Create each server in the same datacenter with private networking enabled:
On each of these servers, configure administrative access via a sudo
user and a firewall by following our Debian 9 initial server setup guide.
If you are unfamiliar with DNS concepts, it is recommended that you read at least the first three parts of our Introduction to Managing DNS.
For the purposes of this article, we will assume the following:
Your servers’ private IP addresses are in the 10.128.0.0/16 subnet (you will likely have to adjust this for your servers).
With these assumptions, we decide that it makes sense to use a naming scheme that uses “nyc3.example.com” to refer to our private subnet or zone. Therefore, host1’s private Fully-Qualified Domain Name (FQDN) will be host1.nyc3.example.com. Refer to the following table for the relevant details:
Host | Role | Private FQDN | Private IP Address |
---|---|---|---|
ns1 | Primary DNS Server | ns1.nyc3.example.com | 10.128.10.11 |
ns2 | Secondary DNS Server | ns2.nyc3.example.com | 10.128.20.12 |
host1 | Generic Host 1 | host1.nyc3.example.com | 10.128.100.101 |
host2 | Generic Host 2 | host2.nyc3.example.com | 10.128.200.102 |
Note
Your existing setup will be different, but the example names and IP addresses will be used to demonstrate how to configure a DNS server to provide a functioning internal DNS. You should be able to easily adapt this setup to your own environment by replacing the host names and private IP addresses with your own. It is not necessary to use the region name of the datacenter in your naming scheme, but we use it here to denote that these hosts belong to a particular datacenter’s private network. If you utilize multiple datacenters, you can set up an internal DNS within each respective datacenter.
By the end of this tutorial, we will have a primary DNS server, ns1, and optionally a secondary DNS server, ns2, which will serve as a backup.
Let’s get started by installing our Primary DNS server, ns1.
Note
Text that is highlighted in red is important! It will often be used to denote something that needs to be replaced with your own settings or that it should be modified or added to a configuration file. For example, if you see something like host1.nyc3.example.com
, replace it with the FQDN of your own server. Likewise, if you see host1_private_IP
, replace it with the private IP address of your own server.
On both DNS servers, ns1 and ns2, update the apt
package cache by typing:
- sudo apt update
Now install BIND:
- sudo apt install bind9 bind9utils bind9-doc
Before continuing, let’s set BIND to IPv4 mode since our private networking uses IPv4 exclusively. On both servers, edit the bind9
default settings file by typing:
- sudo nano /etc/default/bind9
Add “-4” to the end of the OPTIONS
parameter. It should look like the following:
. . .
OPTIONS="-u bind -4"
Save and close the file when you are finished.
Restart BIND to implement the changes:
- sudo systemctl restart bind9
Now that BIND is installed, let’s configure the primary DNS server.
BIND’s configuration consists of multiple files, which are included from the main configuration file, named.conf
. These filenames begin with named
because that is the name of the process that BIND runs (short for “name daemon”). We will start by configuring the options file.
On ns1, open the named.conf.options
file for editing:
- sudo nano /etc/bind/named.conf.options
Above the existing options
block, create a new ACL (access control list) block called “trusted”. This is where we will define a list of clients that we will allow recursive DNS queries from (i.e. your servers that are in the same datacenter as ns1). Using our example private IP addresses, we will add ns1, ns2, host1, and host2 to our list of trusted clients:
acl "trusted" {
10.128.10.11; # ns1 - can be set to localhost
10.128.20.12; # ns2
10.128.100.101; # host1
10.128.200.102; # host2
};
options {
. . .
Now that we have our list of trusted DNS clients, we will want to edit the options
block. Currently, the start of the block looks like the following:
. . .
};
options {
directory "/var/cache/bind";
. . .
}
Below the directory
directive, add the highlighted configuration lines (and substitute in the proper ns1 IP address) so it looks something like this:
. . .
};
options {
directory "/var/cache/bind";
recursion yes; # enables recursive queries
allow-recursion { trusted; }; # allows recursive queries from "trusted" clients
listen-on { 10.128.10.11; }; # ns1 private IP address - listen on private network only
allow-transfer { none; }; # disable zone transfers by default
forwarders {
8.8.8.8;
8.8.4.4;
};
. . .
};
When you are finished, save and close the named.conf.options
file. The above configuration specifies that only your own servers (the “trusted” ones) will be able to query your DNS server for outside domains.
Next, we will configure the local file, to specify our DNS zones.
On ns1, open the named.conf.local
file for editing:
- sudo nano /etc/bind/named.conf.local
Aside from a few comments, the file should be empty. Here, we will specify our forward and reverse zones. DNS zones designate a specific scope for managing and defining DNS records. Since our domains will all be within the “nyc3.example.com” subdomain, we will use that as our forward zone. Because our servers’ private IP addresses are each in the 10.128.0.0/16
IP space, we will set up a reverse zone so that we can define reverse lookups within that range.
Add the forward zone with the following lines, substituting the zone name with your own and the secondary DNS server’s private IP address in the allow-transfer
directive:
zone "nyc3.example.com" {
type master;
file "/etc/bind/zones/db.nyc3.example.com"; # zone file path
allow-transfer { 10.128.20.12; }; # ns2 private IP address - secondary
};
Assuming that our private subnet is 10.128.0.0/16
, add the reverse zone with the following lines (note that our reverse zone name starts with “128.10”, which is the octet reversal of “10.128”):
. . .
};
zone "128.10.in-addr.arpa" {
type master;
file "/etc/bind/zones/db.10.128"; # 10.128.0.0/16 subnet
allow-transfer { 10.128.20.12; }; # ns2 private IP address - secondary
};
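The octet reversal convention can be captured in a couple of lines. The helpers below are purely illustrative (BIND does this mapping itself); they show how a /16 network prefix becomes a reverse zone name and how an individual IP becomes a PTR record owner name:

```python
# Sketch of the octet reversal used by reverse DNS zones: the network
# octets are reversed and suffixed with .in-addr.arpa, and each PTR
# owner name is the remaining host octets, also reversed.
def reverse_zone(network_octets):
    """'10.128' -> '128.10.in-addr.arpa'"""
    return ".".join(reversed(network_octets.split("."))) + ".in-addr.arpa"

def ptr_name(ip, network_octets):
    """Full PTR owner name for an IP inside the zone."""
    host_part = ip.split(".")[len(network_octets.split(".")):]
    return ".".join(reversed(host_part)) + "." + reverse_zone(network_octets)

print(reverse_zone("10.128"))                # 128.10.in-addr.arpa
print(ptr_name("10.128.100.101", "10.128"))  # 101.100.128.10.in-addr.arpa
```

This is why looking up 10.128.100.101 ultimately resolves through the name 101.100.128.10.in-addr.arpa inside the zone we just declared.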
If your servers span multiple private subnets but are in the same datacenter, be sure to specify an additional zone and zone file for each distinct subnet. When you are finished adding all of your desired zones, save and exit the named.conf.local
file.
Now that our zones are specified in BIND, we need to create the corresponding forward and reverse zone files.
The forward zone file is where we define DNS records for forward DNS lookups. That is, when the DNS receives a name query, “host1.nyc3.example.com” for example, it will look in the forward zone file to resolve host1’s corresponding private IP address.
Let’s create the directory where our zone files will reside. According to our named.conf.local configuration, that location should be /etc/bind/zones
:
- sudo mkdir /etc/bind/zones
We will base our forward zone file on the sample db.local
zone file. Copy it to the proper location with the following command:
- sudo cp /etc/bind/db.local /etc/bind/zones/db.nyc3.example.com
Now let’s edit our forward zone file:
- sudo nano /etc/bind/zones/db.nyc3.example.com
Initially, it will look something like the following:
$TTL 604800
@ IN SOA localhost. root.localhost. (
2 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS localhost. ; delete this line
@ IN A 127.0.0.1 ; delete this line
@ IN AAAA ::1 ; delete this line
First, you will want to edit the SOA record. Replace the first “localhost” with ns1’s FQDN, then replace “root.localhost” with “admin.nyc3.example.com”. Every time you edit a zone file, you need to increment the serial value before you restart the named
process. We will increment it to “3”. It should now look something like this:
@ IN SOA ns1.nyc3.example.com. admin.nyc3.example.com. (
3 ; Serial
. . .
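Since the serial must increase on every edit for secondaries to notice the change, some administrators script the bump. The following hypothetical helper shows the idea on a plain-counter serial (date-based serials such as 2018070901 are also common):

```python
import re

# Hypothetical helper: bump the serial number in a BIND zone file body
# so that secondary servers detect the change on the next refresh.
def bump_serial(zone_text):
    def repl(match):
        return str(int(match.group(1)) + 1) + match.group(2)
    # Replace only the first number annotated with a "; Serial" comment.
    return re.sub(r"(\d+)(\s*;\s*Serial)", repl, zone_text, count=1)

print(bump_serial("            2 ; Serial"))
```

Forgetting this step is a classic reason zone changes silently fail to propagate to secondaries.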
Next, delete the three records at the end of the file (after the SOA record). If you’re not sure which lines to delete, they are marked with a “delete this line” comment above.
At the end of the file, add your name server records with the following lines (replace the names with your own). Note that the second column specifies that these are “NS” records:
. . .
; name servers - NS records
IN NS ns1.nyc3.example.com.
IN NS ns2.nyc3.example.com.
Now, add the A records for your hosts that belong in this zone. This includes any server whose name we want to end with “.nyc3.example.com” (substitute the names and private IP addresses). Using our example names and private IP addresses, we will add A records for ns1, ns2, host1, and host2 like so:
. . .
; name servers - A records
ns1.nyc3.example.com. IN A 10.128.10.11
ns2.nyc3.example.com. IN A 10.128.20.12
; 10.128.0.0/16 - A records
host1.nyc3.example.com. IN A 10.128.100.101
host2.nyc3.example.com. IN A 10.128.200.102
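If you have many hosts, you can generate these A-record lines rather than typing them by hand. A minimal sketch using printf, with this tutorial's example names and addresses (substitute your own):

```shell
# Emit one "name IN A address" line per name/address pair; printf reuses the
# format string for each pair. Names and IPs are this tutorial's examples.
printf '%s IN A %s\n' \
  host1.nyc3.example.com. 10.128.100.101 \
  host2.nyc3.example.com. 10.128.200.102
```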
Save and close the db.nyc3.example.com
file.
Our final example forward zone file looks like the following:
$TTL 604800
@ IN SOA ns1.nyc3.example.com. admin.nyc3.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
; name servers - NS records
IN NS ns1.nyc3.example.com.
IN NS ns2.nyc3.example.com.
; name servers - A records
ns1.nyc3.example.com. IN A 10.128.10.11
ns2.nyc3.example.com. IN A 10.128.20.12
; 10.128.0.0/16 - A records
host1.nyc3.example.com. IN A 10.128.100.101
host2.nyc3.example.com. IN A 10.128.200.102
Now let’s move onto the reverse zone file(s).
Reverse zone files are where we define DNS PTR records for reverse DNS lookups. That is, when the DNS receives a query by IP address, “10.128.100.101” for example, it will look in the reverse zone file(s) to resolve the corresponding FQDN, “host1.nyc3.example.com” in this case.
On ns1, for each reverse zone specified in the named.conf.local
file, create a reverse zone file. We will base our reverse zone file(s) on the sample db.127
zone file. Copy it to the proper location with the following commands (substituting the destination filename so it matches your reverse zone definition):
- sudo cp /etc/bind/db.127 /etc/bind/zones/db.10.128
Edit the reverse zone file that corresponds to the reverse zone(s) defined in named.conf.local
:
- sudo nano /etc/bind/zones/db.10.128
Initially, it will look something like the following:
$TTL 604800
@ IN SOA localhost. root.localhost. (
1 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS localhost. ; delete this line
1.0.0 IN PTR localhost. ; delete this line
In the same manner as the forward zone file, you will want to edit the SOA record and increment the serial value. It should look something like this:
@ IN SOA ns1.nyc3.example.com. admin.nyc3.example.com. (
3 ; Serial
. . .
Now delete the two records at the end of the file (after the SOA record). If you’re not sure which lines to delete, they are marked with a “delete this line” comment above.
At the end of the file, add your name server records with the following lines (replace the names with your own). Note that the second column specifies that these are “NS” records:
. . .
; name servers - NS records
IN NS ns1.nyc3.example.com.
IN NS ns2.nyc3.example.com.
Then add PTR
records for all of your servers whose IP addresses are on the subnet of the zone file that you are editing. In our example, this includes all of our hosts because they are all on the 10.128.0.0/16
subnet. Note that the first column consists of the last two octets of your servers’ private IP addresses in reversed order. Be sure to substitute names and private IP addresses to match your servers:
. . .
; PTR Records
11.10 IN PTR ns1.nyc3.example.com. ; 10.128.10.11
12.20 IN PTR ns2.nyc3.example.com. ; 10.128.20.12
101.100 IN PTR host1.nyc3.example.com. ; 10.128.100.101
102.200 IN PTR host2.nyc3.example.com. ; 10.128.200.102
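The reversed-octet bookkeeping is easy to get wrong by hand. As a sanity check, the record name inside a /16 reverse zone is just the host's last two octets swapped; a small shell sketch (the helper function is ours, for illustration, not a BIND tool):

```shell
# ptr_name: print the record name used inside a /16 reverse zone for an IP,
# i.e. the last two octets in reversed order (10.128.100.101 -> 101.100).
# A hand-rolled helper for illustration only; not part of BIND.
ptr_name() {
  echo "$1" | awk -F. '{ print $4 "." $3 }'
}
ptr_name 10.128.100.101   # prints 101.100
```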
Save and close the reverse zone file (repeat this section if you need to add more reverse zone files).
Our final example reverse zone file looks like the following:
$TTL 604800
@ IN SOA ns1.nyc3.example.com. admin.nyc3.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers
IN NS ns1.nyc3.example.com.
IN NS ns2.nyc3.example.com.
; PTR Records
11.10 IN PTR ns1.nyc3.example.com. ; 10.128.10.11
12.20 IN PTR ns2.nyc3.example.com. ; 10.128.20.12
101.100 IN PTR host1.nyc3.example.com. ; 10.128.100.101
102.200 IN PTR host2.nyc3.example.com. ; 10.128.200.102
We’re done editing our files, so next we can check them for errors.
Run the following command to check the syntax of the named.conf*
files:
- sudo named-checkconf
If your named configuration files have no syntax errors, you will return to your shell prompt and see no error messages. If there are problems with your configuration files, review the error message and the “Configure Primary DNS Server” section, then try named-checkconf
again.
The named-checkzone
command can be used to check the correctness of your zone files. Its first argument specifies a zone name, and the second argument specifies the corresponding zone file, which are both defined in named.conf.local
.
For example, to check the “nyc3.example.com” forward zone configuration, run the following command (change the names to match your forward zone and file):
- sudo named-checkzone nyc3.example.com /etc/bind/zones/db.nyc3.example.com
And to check the “128.10.in-addr.arpa” reverse zone configuration, run the following command (change the numbers to match your reverse zone and file):
- sudo named-checkzone 128.10.in-addr.arpa /etc/bind/zones/db.10.128
When all of your configuration and zone files have no errors in them, you should be ready to restart the BIND service.
Restart BIND:
- sudo systemctl restart bind9
If you have the UFW firewall configured, open up access to BIND by typing:
- sudo ufw allow Bind9
Your primary DNS server is now set up and ready to respond to DNS queries. Let’s move on to creating the secondary DNS server.
In most environments, it is a good idea to set up a secondary DNS server that will respond to requests if the primary becomes unavailable. Luckily, the secondary DNS server is much easier to configure.
On ns2, edit the named.conf.options
file:
- sudo nano /etc/bind/named.conf.options
At the top of the file, add the ACL with the private IP addresses of all of your trusted servers:
acl "trusted" {
10.128.10.11; # ns1
10.128.20.12; # ns2 - can be set to localhost
10.128.100.101; # host1
10.128.200.102; # host2
};
options {
. . .
Below the directory
directive, add the following lines:
recursion yes;
allow-recursion { trusted; };
listen-on { 10.128.20.12; }; # ns2 private IP address
allow-transfer { none; }; # disable zone transfers by default
forwarders {
8.8.8.8;
8.8.4.4;
};
Save and close the named.conf.options
file. This file should look exactly like ns1’s named.conf.options
file except it should be configured to listen on ns2’s private IP address.
Now edit the named.conf.local
file:
- sudo nano /etc/bind/named.conf.local
Define slave zones that correspond to the master zones on the primary DNS server. Note that the type is “slave”, the file does not contain a path, and there is a masters
directive which should be set to the primary DNS server’s private IP address. If you defined multiple reverse zones in the primary DNS server, make sure to add them all here:
zone "nyc3.example.com" {
type slave;
file "db.nyc3.example.com";
masters { 10.128.10.11; }; # ns1 private IP
};
zone "128.10.in-addr.arpa" {
type slave;
file "db.10.128";
masters { 10.128.10.11; }; # ns1 private IP
};
Now save and close the named.conf.local
file.
Run the following command to check the validity of your configuration files:
- sudo named-checkconf
Once that checks out, restart BIND:
- sudo systemctl restart bind9
Allow DNS connections to the server by altering the UFW firewall rules:
- sudo ufw allow Bind9
You now have primary and secondary DNS servers for private network name and IP address resolution. Next, you must configure your client servers to use them.
Before all of your servers in the “trusted” ACL can query your DNS servers, you must configure each of them to use ns1 and ns2 as name servers. This process varies depending on OS, but for most Linux distributions it involves adding your name servers to the /etc/resolv.conf
file.
On Ubuntu 18.04, networking is configured with Netplan, an abstraction that allows you to write standardized network configuration and apply it to incompatible backend networking software. To configure DNS, we need to write a Netplan configuration file.
First, find the device associated with your private network by querying the private subnet with the ip address
command:
- ip address show to 10.128.0.0/16
Output3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
inet 10.128.100.101/16 brd 10.128.255.255 scope global eth1
valid_lft forever preferred_lft forever
In this example, the private interface is eth1
.
Next, create a new file in /etc/netplan
called 00-private-nameservers.yaml
:
- sudo nano /etc/netplan/00-private-nameservers.yaml
Inside, paste the following contents. You will need to modify the interface of the private network, the addresses of your ns1 and ns2 DNS servers, and the DNS zone:
Note: Netplan uses the YAML data serialization format for its configuration files. Because YAML uses indentation and whitespace to define its data structure, make sure that your definition uses consistent indentation to avoid errors.
network:
version: 2
ethernets:
eth1: # Private network interface
nameservers:
addresses:
- 10.128.10.11 # Private IP for ns1
- 10.128.20.12 # Private IP for ns2
search: [ nyc3.example.com ] # DNS zone
Save and close the file when you are finished.
Next, tell Netplan to attempt to use the new configuration file by using netplan try
. If there are problems that cause a loss of networking, Netplan will automatically roll back the changes after a timeout:
- sudo netplan try
OutputWarning: Stopping systemd-networkd.service, but it can still be activated by:
systemd-networkd.socket
Do you want to keep these settings?
Press ENTER before the timeout to accept the new configuration
Changes will revert in 120 seconds
If the countdown is updating correctly at the bottom, the new configuration is at least functional enough to not break your SSH connection. Press ENTER to accept the new configuration.
Now, check the system’s DNS resolver to determine whether your DNS configuration has been applied:
- sudo systemd-resolve --status
Scroll down until you see the section for your private network interface. You should see the private IP addresses for your DNS servers listed first, followed by some fallback values. Your domain should appear next to “DNS Domain”:
Output. . .
Link 3 (eth1)
Current Scopes: DNS
LLMNR setting: yes
MulticastDNS setting: no
DNSSEC setting: no
DNSSEC supported: no
DNS Servers: 10.128.10.11
10.128.20.12
67.207.67.2
67.207.67.3
DNS Domain: nyc3.example.com
. . .
Your client should now be configured to use your internal DNS servers.
On Ubuntu 16.04 and Debian Linux servers, you can edit the /etc/network/interfaces
file:
- sudo nano /etc/network/interfaces
Inside, find the dns-nameservers
line. If it is attached to the lo
interface, move it to your networking interface (eth0
or eth1
for example). Next, prepend your own name servers in front of the list that is currently there. Below that line, add a dns-search
option pointed to the base domain of your infrastructure. In our case, this would be “nyc3.example.com”:
. . .
dns-nameservers 10.128.10.11 10.128.20.12 8.8.8.8
dns-search nyc3.example.com
. . .
Save and close the file when you are finished.
Make sure that the resolvconf
package is installed on your system:
- sudo apt update
- sudo apt install resolvconf
Now, restart your networking services, applying the new changes with the following commands. Make sure you replace eth0
with the name of your networking interface:
- sudo ifdown --force eth0 && sudo ip addr flush dev eth0 && sudo ifup --force eth0
This should restart your network without dropping your current connection. If it worked correctly, you should see something like this:
OutputRTNETLINK answers: No such process
Waiting for DAD... Done
Double check that your settings were applied by typing:
- cat /etc/resolv.conf
You should see your name servers in the /etc/resolv.conf
file, as well as your search domain:
Output# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.128.10.11
nameserver 10.128.20.12
nameserver 8.8.8.8
search nyc3.example.com
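If you want to check the resolver list from a script rather than by eye, the nameserver lines are easy to pull out with awk. A small sketch, demonstrated against a throwaway copy of the output above (on a real host you would point it at /etc/resolv.conf):

```shell
# Extract the resolver addresses from a resolv.conf-style file with awk.
# Demonstrated on a throwaway copy using this tutorial's example values.
cat > /tmp/resolv-demo.conf <<'EOF'
nameserver 10.128.10.11
nameserver 10.128.20.12
nameserver 8.8.8.8
search nyc3.example.com
EOF
awk '/^nameserver/ { print $2 }' /tmp/resolv-demo.conf
```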
Your client is now configured to use your DNS servers.
On CentOS, RedHat, and Fedora Linux, edit the /etc/sysconfig/network-scripts/ifcfg-eth0
file. You may have to substitute eth0
with the name of your primary network interface:
- sudo nano /etc/sysconfig/network-scripts/ifcfg-eth0
Search for the DNS1
and DNS2
options and set them to the private IP addresses of your primary and secondary name servers. Add a DOMAIN
parameter set to your infrastructure’s base domain. In this guide, that would be “nyc3.example.com”:
. . .
DNS1=10.128.10.11
DNS2=10.128.20.12
DOMAIN='nyc3.example.com'
. . .
Save and close the file when you are finished.
Now, restart the networking service by typing:
- sudo systemctl restart network
The command may hang for a few seconds, but should return you to the prompt shortly.
Check that your changes were applied by typing:
- cat /etc/resolv.conf
You should see your name servers and search domain in the list:
nameserver 10.128.10.11
nameserver 10.128.20.12
search nyc3.example.com
Your client should now be able to connect to and use your DNS servers.
Use nslookup
to test if your clients can query your name servers. You should be able to do this on all of the clients that you have configured and are in the “trusted” ACL.
For CentOS clients, you may need to install the utility with:
- sudo yum install bind-utils
For Debian clients, you can install with:
- sudo apt install dnsutils
We can start by performing a forward lookup.
For example, we can perform a forward lookup to retrieve the IP address of host1.nyc3.example.com by running the following command:
- nslookup host1
Querying “host1” expands to “host1.nyc3.example.com” because the search
option is set to your private subdomain, and DNS queries will attempt that subdomain before looking for the host elsewhere. The output of the command above would look like the following:
OutputServer: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: host1.nyc3.example.com
Address: 10.128.100.101
Next, we can check reverse lookups.
To test the reverse lookup, query the DNS server with host1’s private IP address:
- nslookup 10.128.100.101
You should see output that looks like the following:
Output11.10.128.10.in-addr.arpa name = host1.nyc3.example.com.
Authoritative answers can be found from:
If all of the names and IP addresses resolve to the correct values, that means that your zone files are configured properly. If you receive unexpected values, be sure to review the zone files on your primary DNS server (e.g. db.nyc3.example.com
and db.10.128
).
Congratulations! Your internal DNS servers are now set up properly! Now we will cover maintaining your zone records.
Now that you have a working internal DNS, you need to maintain your DNS records so they accurately reflect your server environment.
Whenever you add a host to your environment (in the same datacenter), you will want to add it to DNS. Here is a list of steps that you need to take:
Add an “A” record for the new host to the forward zone file and a “PTR” record to the reverse zone file, incrementing the serial in each. If the new host should be able to query your DNS, also add its private IP address to the “trusted” ACL in named.conf.options.
Test your configuration files:
- sudo named-checkconf
- sudo named-checkzone nyc3.example.com db.nyc3.example.com
- sudo named-checkzone 128.10.in-addr.arpa /etc/bind/zones/db.10.128
Then reload BIND:
- sudo systemctl reload bind9
Your primary server should be configured for the new host now.
On ns2, add the new host’s private IP address to the “trusted” ACL in named.conf.options as well.
Check the configuration syntax:
- sudo named-checkconf
Then reload BIND:
- sudo systemctl reload bind9
Your secondary server will now accept connections from the new host.
Finally, configure the new host itself to use your DNS servers via its /etc/resolv.conf file, then verify that it can resolve your hosts with nslookup.
If you remove a host from your environment or want to take it out of DNS, remove everything that was added when you added the server to DNS (i.e. reverse the steps above).
Now you may refer to your servers’ private network interfaces by name, rather than by IP address. This makes configuration of services and applications easier because you no longer have to remember the private IP addresses, and the files will be easier to read and understand. Also, you can now change your configurations to point to new servers in a single place, your primary DNS server, instead of having to edit a variety of distributed configuration files, which eases maintenance.
Once you have your internal DNS set up, and your configuration files are using private FQDNs to specify network connections, it is critical that your DNS servers are properly maintained. If they both become unavailable, your services and applications that rely on them will cease to function properly. This is why it is recommended to set up your DNS with at least one secondary server, and to maintain working backups of all of them.
Anaconda is an open-source package manager, environment manager, and distribution of the Python and R programming languages. Designed for data science and machine learning workflows, it is commonly used for large-scale data processing, scientific computing, and predictive analytics.
Available in both free and paid enterprise versions, Anaconda offers a collection of over 1,000 data science packages. The Anaconda distribution ships with the conda
command-line utility. You can learn more about Anaconda and conda
by reading the official Anaconda Documentation.
This tutorial will guide you through installing the Python 3 version of Anaconda on a Debian 9 server.
Before you begin with this guide, you should have a non-root user with sudo privileges set up on your server.
You can achieve this prerequisite by completing our Debian 9 initial server setup guide.
The best way to install Anaconda is to download the latest Anaconda installer bash script, verify it, and then run it.
Find the latest version of Anaconda for Python 3 at the Downloads page accessible via the Anaconda home page. At the time of writing, the latest version is 5.2, but you should use a later stable version if it is available.
Next, change to the /tmp
directory on your server. This is a good directory to download ephemeral items, like the Anaconda bash script, which we won’t need after running it.
- cd /tmp
We’ll use the curl
command-line tool to download the script. Install curl
:
- sudo apt install curl
Now, use curl
to download the link that you copied from the Anaconda website:
- curl -O https://repo.anaconda.com/archive/Anaconda3-5.2.0-Linux-x86_64.sh
We can now verify the data integrity of the installer with cryptographic hash verification through the SHA-256 checksum. We’ll use the sha256sum
command along with the filename of the script:
- sha256sum Anaconda3-5.2.0-Linux-x86_64.sh
You’ll receive output that looks similar to this:
Output09f53738b0cd3bb96f5b1bac488e5528df9906be2480fe61df40e0e0d19e3d48 Anaconda3-5.2.0-Linux-x86_64.sh
You should check the output against the hashes available at the Anaconda with Python 3 on 64-bit Linux page for your appropriate Anaconda version. As long as your output matches the hash displayed in the sha256
row, you’re good to go.
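Rather than comparing long hex strings by eye, you can also let sha256sum -c do the comparison. The sketch below demonstrates the mechanism on a throwaway file whose hash is known; for the installer you would substitute its filename and the hash published by Anaconda:

```shell
# sha256sum -c reads "HASH  FILENAME" lines on stdin and verifies each file,
# printing "FILENAME: OK" on a match. Demonstrated on a throwaway file; for
# the installer, substitute its filename and the published hash.
printf 'hello\n' > /tmp/checksum-demo.txt
expected="$(sha256sum /tmp/checksum-demo.txt | awk '{ print $1 }')"
echo "$expected  /tmp/checksum-demo.txt" | sha256sum -c -
```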
Now we can run the script:
- bash Anaconda3-5.2.0-Linux-x86_64.sh
You’ll receive the following output:
Output
Welcome to Anaconda3 5.2.0
In order to continue the installation process, please review the license
agreement.
Please, press ENTER to continue
>>>
Press ENTER
to continue and then press ENTER
to read through the license. Once you’re done reading the license, you’ll be prompted to approve the license terms:
OutputDo you approve the license terms? [yes|no]
As long as you agree, type yes
.
At this point, you’ll be prompted to choose the location of the installation. You can press ENTER
to accept the default location, or specify a different location to modify it.
OutputAnaconda3 will now be installed into this location:
/home/sammy/anaconda3
- Press ENTER to confirm the location
- Press CTRL-C to abort the installation
- Or specify a different location below
[/home/sammy/anaconda3] >>>
The installation process will continue. Note that it may take some time.
Once installation is complete, you’ll receive the following output:
Output...
installation finished.
Do you wish the installer to prepend the Anaconda3 install location
to PATH in your /home/sammy/.bashrc ? [yes|no]
[no] >>>
Type yes
so that you can use the conda
command. You’ll receive the following output next:
OutputAppending source /home/sammy/anaconda3/bin/activate to /home/sammy/.bashrc
A backup will be made to: /home/sammy/.bashrc-anaconda3.bak
...
Finally, you’ll receive the following prompt regarding whether or not you would like to download Visual Studio Code (or VSCode), a free and open-source editor for code developed by Microsoft that can run on Linux. You can learn more about the editor on the official Visual Studio Code website.
At this point, you can decide whether or not to download the editor now by typing yes
or no
.
Anaconda is partnered with Microsoft! Microsoft VSCode is a streamlined
code editor with support for development operations like debugging, task
running and version control.
To install Visual Studio Code, you will need:
- Administrator Privileges
- Internet connectivity
Visual Studio Code License: https://code.visualstudio.com/license
Do you wish to proceed with the installation of Microsoft VSCode? [yes|no]
>>>
In order to activate the installation, you should source the ~/.bashrc
file:
- source ~/.bashrc
Once you have done that, you can verify your install by making use of the conda
command, for example with list
:
- conda list
You’ll receive output of all the packages you have available through the Anaconda installation:
Output# packages in environment at /home/sammy/anaconda3:
#
# Name Version Build Channel
_ipyw_jlab_nb_ext_conf 0.1.0 py36he11e457_0
alabaster 0.7.10 py36h306e16b_0
anaconda 5.2.0 py36_3
...
Now that Anaconda is installed, we can go on to setting up Anaconda environments.
Anaconda virtual environments allow you to keep projects organized by Python versions and packages needed. For each Anaconda environment you set up, you can specify which version of Python to use and can keep all of your related programming files together within that directory.
First, we can check to see which versions of Python are available for us to use:
- conda search "^python$"
You’ll receive output with the different versions of Python that you can target, including both Python 3 and Python 2 versions. Since we are using the Anaconda with Python 3 in this tutorial, you will have access only to the Python 3 versions of packages.
Let’s create an environment using the most recent version of Python 3. We can achieve this by assigning version 3 to the python
argument. We’ll call the environment my_env, but you’ll likely want to use a more descriptive name for your environment especially if you are using environments to access more than one version of Python.
- conda create --name my_env python=3
We’ll receive output with information about what is downloaded and which packages will be installed, and then be prompted to proceed with y
or n
. As long as you agree, type y
.
The conda
utility will now fetch the packages for the environment and let you know when it’s complete.
You can activate your new environment by typing the following:
- source activate my_env
With your environment activated, your command prompt prefix will change to include the environment’s name.
Within the environment, you can verify that you’re using the version of Python that you had intended to use:
- python --version
OutputPython 3.7.0 :: Anaconda, Inc.
When you’re ready to deactivate your Anaconda environment, you can do so by typing:
- source deactivate
Note that you can replace the word source
with .
to achieve the same results.
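This works because “.” is the POSIX spelling of the same shell builtin: both read a file and execute it in the current shell, so any variables it sets persist. A throwaway demonstration (the script below is ours, not an Anaconda file):

```shell
# "." and "source" both execute a file in the current shell, so variables it
# sets persist afterwards. Demonstrated with a throwaway script.
echo 'DEMO_VAR=42' > /tmp/demo-env.sh
. /tmp/demo-env.sh
echo "$DEMO_VAR"   # prints 42
```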
To target a more specific version of Python, you can pass a specific version to the python
argument, like 3.5
, for example:
- conda create -n my_env35 python=3.5
You can update your version of Python along the same branch (as in updating Python 3.5.1 to Python 3.5.2) within a respective environment with the following command:
- conda update python
If you would like to target a more specific version of Python, you can pass that to the python
argument, as in python=3.3.2
.
You can inspect all of the environments you have set up with this command:
- conda info --envs
Output# conda environments:
#
base * /home/sammy/anaconda3
my_env /home/sammy/anaconda3/envs/my_env
my_env35 /home/sammy/anaconda3/envs/my_env35
The asterisk indicates the current active environment.
Each environment you create with conda create
will come with several default packages:
openssl
pip
python
readline
setuptools
sqlite
tk
wheel
xz
zlib
You can add additional packages, such as numpy
for example, with the following command:
- conda install --name my_env35 numpy
If you know you would like a numpy
environment upon creation, you can target it in your conda create
command:
- conda create --name my_env python=3 numpy
If you are no longer working on a specific project and have no further need for the associated environment, you can remove it. To do so, type the following:
- conda remove --name my_env35 --all
Now, when you type the conda info --envs
command, the environment that you removed will no longer be listed.
You should regularly ensure that Anaconda is up-to-date so that you are working with all the latest package releases.
To do this, you should first update the conda
utility:
- conda update conda
When prompted to do so, type y
to proceed with the update.
Once the update of conda
is complete, you can update the Anaconda distribution:
- conda update anaconda
Again when prompted to do so, type y
to proceed.
This will ensure that you are using the latest releases of conda
and Anaconda.
If you are no longer using Anaconda and find that you need to uninstall it, you should start with the anaconda-clean
module, which will remove your Anaconda configuration files before you uninstall.
- conda install anaconda-clean
Type y
when prompted to do so.
Once it is installed, you can run the following command. You will be prompted to answer y
before deleting each one. If you would prefer not to be prompted, add --yes
to the end of your command:
- anaconda-clean
This will also create a backup folder called .anaconda_backup
in your home directory:
OutputBackup directory: /home/sammy/.anaconda_backup/2018-09-06T183049
You can now remove your entire Anaconda directory by entering the following command:
- rm -rf ~/anaconda3
Finally, you can remove the PATH line from your .bashrc
file that Anaconda added. To do so, first open a text editor such as nano:
- nano ~/.bashrc
Then scroll down to the end of the file (if this is a recent install) or type CTRL + W
to search for Anaconda. Delete or comment out the export PATH
line:
...
# added by Anaconda3 installer
export PATH="/home/sammy/anaconda3/bin:$PATH"
When you’re done editing the file, type CTRL + X
to exit and y
to save changes.
Anaconda is now removed from your server.
This tutorial walked you through the installation of Anaconda, working with the conda
command-line utility, setting up environments, updating Anaconda, and deleting Anaconda if you no longer need it.
You can use Anaconda to help you manage workloads for data science, scientific computing, analytics, and large-scale data processing. From here, you can check out our tutorials on data analysis and machine learning to learn more about various tools available to use and projects that you can do.
Node.js is an open-source JavaScript runtime environment for building server-side and networking applications. The platform runs on Linux, macOS, FreeBSD, and Windows. Though you can run Node.js applications at the command line, this tutorial will focus on running them as a service. This means that the applications will restart on reboot or failure and are safe for use in a production environment.
In this tutorial, you will set up a production-ready Node.js environment on a single Debian 9 server. This server will run a Node.js application managed by PM2, and provide users with secure access to the application through an Nginx reverse proxy. The Nginx server will offer HTTPS, using a free certificate provided by Let’s Encrypt.
This guide assumes that you have the following:
When you’ve completed the prerequisites, you will have a server serving your domain’s default placeholder page at https://example.com/
.
Let’s begin by installing the latest LTS release of Node.js, using the NodeSource package archives.
To install the NodeSource PPA and access its contents, you will first need to update your package index and install curl
:
- sudo apt update
- sudo apt install curl
Make sure you’re in your home directory, and then use curl
to retrieve the installation script for the Node.js 8.x archives:
- cd ~
- curl -sL https://deb.nodesource.com/setup_8.x -o nodesource_setup.sh
You can inspect the contents of this script with nano
or your preferred text editor:
- nano nodesource_setup.sh
When you’re done inspecting the script, run it under sudo
:
- sudo bash nodesource_setup.sh
The PPA will be added to your configuration and your local package cache will be updated automatically. After running the setup script from Nodesource, you can install the Node.js package:
- sudo apt install nodejs
To check which version of Node.js you have installed after these initial steps, type:
- nodejs -v
Outputv8.11.4
Note: When installing from the NodeSource PPA, the Node.js executable is called nodejs
, rather than node
.
The nodejs
package contains the nodejs
binary as well as npm
, a package manager for Node modules, so you don’t need to install npm
separately.
npm
uses a configuration file in your home directory to keep track of updates. It will be created the first time you run npm
. Execute this command to verify that npm
is installed and to create the configuration file:
- npm -v
Output5.6.0
In order for some npm
packages to work (those that require compiling code from source, for example), you will need to install the build-essential
package:
- sudo apt install build-essential
You now have the necessary tools to work with npm
packages that require compiling code from source.
With the Node.js runtime installed, let’s move on to writing a Node.js application.
Let’s write a Hello World application that returns “Hello World” to any HTTP requests. This sample application will help you get Node.js set up. You can replace it with your own application — just make sure that you modify your application to listen on the appropriate IP addresses and ports.
First, let’s create a sample application called hello.js
:
- cd ~
- nano hello.js
Insert the following code into the file:
const http = require('http');
const hostname = 'localhost';
const port = 3000;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello World!\n');
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
Save the file and exit the editor.
This Node.js application listens on the specified address (localhost
) and port (3000
), and returns “Hello World!” with a 200
HTTP success code. Since we’re listening on localhost
, remote clients won’t be able to connect to our application.
To test your application, type:
- node hello.js
You will see the following output:
OutputServer running at http://localhost:3000/
Note: Running a Node.js application in this manner will block additional commands until the application is killed by pressing CTRL+C
.
To test the application, open another terminal session on your server, and connect to localhost
with curl
:
- curl http://localhost:3000
If you see the following output, the application is working properly and listening on the correct address and port:
OutputHello World!
If you do not see the expected output, make sure that your Node.js application is running and configured to listen on the proper address and port.
Once you’re sure it’s working, kill the application (if you haven’t already) by pressing CTRL+C
.
Next let’s install PM2, a process manager for Node.js applications. PM2 makes it possible to daemonize applications so that they will run in the background as a service.
Use npm to install the latest version of PM2 on your server:
- sudo npm install pm2@latest -g
The -g option tells npm to install the module globally, so it’s available system-wide.
Let’s first use the pm2 start command to run your application, hello.js, in the background:
- pm2 start hello.js
This also adds your application to PM2’s process list, which is printed every time you start an application:
Output[PM2] Spawning PM2 daemon with pm2_home=/home/sammy/.pm2
[PM2] PM2 Successfully daemonized
[PM2] Starting /home/sammy/hello.js in fork_mode (1 instance)
[PM2] Done.
┌──────────┬────┬──────┬──────┬────────┬─────────┬────────┬─────┬───────────┬───────┬──────────┐
│ App name │ id │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────┼────┼──────┼──────┼────────┼─────────┼────────┼─────┼───────────┼───────┼──────────┤
│ hello │ 0 │ fork │ 1338 │ online │ 0 │ 0s │ 0% │ 23.0 MB │ sammy │ disabled │
└──────────┴────┴──────┴──────┴────────┴─────────┴────────┴─────┴───────────┴───────┴──────────┘
Use `pm2 show <id|name>` to get more details about an app
As you can see, PM2 automatically assigns an App name (based on the filename, without the .js extension) and a PM2 id. PM2 also maintains other information, such as the PID of the process, its current status, and memory usage.
Applications that are running under PM2 will be restarted automatically if the application crashes or is killed, but we can take an additional step to get the application to launch on system startup using the startup subcommand. This subcommand generates and configures a startup script to launch PM2 and its managed processes when the server boots:
- pm2 startup systemd
The last line of the resulting output will include a command to run with superuser privileges to set PM2 to start on boot:
Output[PM2] Init System found: systemd
[PM2] To setup the Startup Script, copy/paste the following command:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
Run the command from the output, with your username in place of sammy:
- sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u sammy --hp /home/sammy
As an additional step, we can save the PM2 process list and corresponding environments:
- pm2 save
You have now created a systemd unit that runs pm2 for your user on boot. This pm2 instance, in turn, runs hello.js.
Start the service with systemctl:
- sudo systemctl start pm2-sammy
Check the status of the systemd unit:
- systemctl status pm2-sammy
For a detailed overview of systemd, see Systemd Essentials: Working with Services, Units, and the Journal.
In addition to those we have covered, PM2 provides many subcommands that allow you to manage or look up information about your applications.
Stop an application with this command (specify the PM2 App name or id):
- pm2 stop app_name_or_id
Restart an application:
- pm2 restart app_name_or_id
List the applications currently managed by PM2:
- pm2 list
Get information about a specific application using its App name:
- pm2 info app_name
The PM2 process monitor can be pulled up with the monit subcommand. This displays the application status, CPU, and memory usage:
- pm2 monit
Note that running pm2 without any arguments will also display a help page with example usage.
Now that your Node.js application is running and managed by PM2, let’s set up the reverse proxy.
Your application is running and listening on localhost, but you need to set up a way for your users to access it. We will set up the Nginx web server as a reverse proxy for this purpose.
In the prerequisite tutorial, you set up your Nginx configuration in the /etc/nginx/sites-available/example.com file. Open this file for editing:
- sudo nano /etc/nginx/sites-available/example.com
Within the server block, you should have an existing location / block. Replace the contents of that block with the following configuration. If your application is set to listen on a different port, update the port in the proxy_pass directive to the correct number:
server {
...
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
...
}
This configures the server to respond to requests at its root. Assuming our server is available at example.com, accessing https://example.com/ via a web browser would send the request to hello.js, listening on port 3000 at localhost.
You can add additional location blocks to the same server block to provide access to other applications on the same server. For example, if you were also running another Node.js application on port 3001, you could add this location block to allow access to it via https://example.com/app2:
server {
...
location /app2 {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
...
}
Once you are done adding the location blocks for your applications, save the file and exit your editor.
Make sure you didn’t introduce any syntax errors by typing:
- sudo nginx -t
Restart Nginx:
- sudo systemctl restart nginx
Assuming that your Node.js application is running, and your application and Nginx configurations are correct, you should now be able to access your application via the Nginx reverse proxy. Try it out by accessing your server’s URL (its public IP address or domain name).
Congratulations! You now have your Node.js application running behind an Nginx reverse proxy on a Debian 9 server. This reverse proxy setup is flexible enough to provide your users access to other applications or static web content that you want to share.
Apache Tomcat is a web server and servlet container that is used to serve Java applications. Tomcat is an open source implementation of the Java Servlet and JavaServer Pages technologies, released by the Apache Software Foundation. This tutorial covers the basic installation and some configuration of the latest release of Tomcat 9 on your Debian 9 server.
Before you begin with this guide, you should have a non-root user with sudo privileges set up on your server. You can learn how to do this by completing our Debian 9 initial server setup guide.
Tomcat requires Java to be installed on the server so that any Java web application code can be executed. We can satisfy that requirement by installing OpenJDK with apt.
First, update your apt package index:
- sudo apt update
Then install the Java Development Kit package with apt:
- sudo apt install default-jdk
Now that Java is installed, we can create a tomcat user, which will be used to run the Tomcat service.
For security purposes, Tomcat should be run as an unprivileged user (i.e. not root). We will create a new user and group that will run the Tomcat service.
Note: In some environments, a package called unscd may be installed by default in order to speed up requests to name servers like LDAP. The most recent version currently available in Debian contains a bug that causes certain commands (like the adduser command below) to produce additional output that looks like this:
sent invalidate(passwd) request, exiting
sent invalidate(group) request, exiting
These messages are harmless, but if you wish to avoid them, it is safe to remove the unscd package if you do not plan on using systems like LDAP for user information:
- sudo apt remove unscd
First, create a new tomcat group:
- sudo groupadd tomcat
Next, create a new tomcat user. We’ll make this user a member of the tomcat group, with a home directory of /opt/tomcat (where we will install Tomcat), and with a shell of /bin/false (so nobody can log into the account):
- sudo useradd -s /bin/false -g tomcat -d /opt/tomcat tomcat
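The useradd flags map directly onto fields of the resulting /etc/passwd entry. A small sketch that picks the home directory (-d) and shell (-s) out of a sample entry; the entry, and especially its UID and GID, are illustrative, since the values on your server will differ:

```shell
# A passwd entry has seven colon-separated fields:
# name:password:UID:GID:comment:home:shell
# Sample entry (illustrative values):
entry='tomcat:x:998:998::/opt/tomcat:/bin/false'
printf '%s\n' "$entry" | awk -F: '{print "home:  " $6; print "shell: " $7}'
```

Because the shell field is /bin/false, any login attempt for this account exits immediately.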
Now that our tomcat user is set up, let’s download and install Tomcat.
The best way to install Tomcat 9 is to download the latest binary release then configure it manually.
Find the latest version of Tomcat 9 at the Tomcat 9 Downloads page. At the time of writing, the latest version is 9.0.11, but you should use a later stable version if it is available. Under the Binary Distributions section, find the Core list and copy the link to the “tar.gz” archive.
Next, change to the /tmp directory on your server. This is a good directory to download ephemeral items, like the Tomcat tarball, which we won’t need after extracting the Tomcat contents:
- cd /tmp
We’ll use the curl command-line tool to download the tarball. Install curl:
- sudo apt install curl
Now, use curl to download the link that you copied from the Tomcat website:
- curl -O http://www-eu.apache.org/dist/tomcat/tomcat-9/v9.0.11/bin/apache-tomcat-9.0.11.tar.gz
We will install Tomcat to the /opt/tomcat directory. Create the directory, then extract the archive to it with these commands:
- sudo mkdir /opt/tomcat
- sudo tar xzvf apache-tomcat-9*tar.gz -C /opt/tomcat --strip-components=1
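The --strip-components=1 option drops the leading apache-tomcat-9.0.11/ directory from every path in the archive, so the contents land directly in /opt/tomcat rather than in /opt/tomcat/apache-tomcat-9.0.11. A quick way to see the effect with a throwaway archive (the paths here are scratch locations, not part of the install):

```shell
# Build a scratch archive that mimics the tarball's layout:
# a single apache-tomcat-9.0.11/ directory at the top.
mkdir -p /tmp/strip-demo/apache-tomcat-9.0.11/bin
echo 'echo start' > /tmp/strip-demo/apache-tomcat-9.0.11/bin/startup.sh
tar czf /tmp/strip-demo/tomcat.tar.gz -C /tmp/strip-demo apache-tomcat-9.0.11
# Extract with --strip-components=1: the leading directory is removed,
# so bin/ lands directly inside the target directory.
mkdir -p /tmp/strip-demo/opt
tar xzf /tmp/strip-demo/tomcat.tar.gz -C /tmp/strip-demo/opt --strip-components=1
ls /tmp/strip-demo/opt   # lists: bin
```

Without the option, you would end up with /opt/tomcat/apache-tomcat-9.0.11/bin and the rest of this tutorial’s paths would not line up.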
Next, we can set up the proper user permissions for our installation.
The tomcat user that we set up needs to have access to the Tomcat installation. We’ll set that up now.
Change to the directory where we unpacked the Tomcat installation:
- cd /opt/tomcat
Give the tomcat group ownership over the entire installation directory:
- sudo chgrp -R tomcat /opt/tomcat
Next, give the tomcat group read access to the conf directory and all of its contents, and execute access to the directory itself:
- sudo chmod -R g+r conf
- sudo chmod g+x conf
Make the tomcat user the owner of the webapps, work, temp, and logs directories:
- sudo chown -R tomcat webapps/ work/ temp/ logs/
Now that the proper permissions are set up, we can create a systemd service file to manage the Tomcat process.
We want to be able to run Tomcat as a service, so we will set up a systemd service file.
Tomcat needs to know where Java is installed. This path is commonly referred to as “JAVA_HOME”. The easiest way to look up that location is by running this command:
- sudo update-java-alternatives -l
Outputjava-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64
Your JAVA_HOME is the path in the last column of the output. Given the example above, the correct JAVA_HOME for this server would be:
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
Your JAVA_HOME may be different.
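Since JAVA_HOME is simply the last whitespace-separated field of that output line, you can also capture it in a shell variable. A small sketch using a sample line in the same format as the example above (your line may differ):

```shell
# Sample line in the format printed by `update-java-alternatives -l`:
# name  priority  path — the path is the last field.
line='java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64'
JAVA_HOME=$(printf '%s\n' "$line" | awk '{print $NF}')
echo "$JAVA_HOME"   # /usr/lib/jvm/java-1.8.0-openjdk-amd64
```

This is only a convenience for reading the output; the service file below still needs the value written in literally.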
With this piece of information, we can create the systemd service file. Open a file called tomcat.service in the /etc/systemd/system directory by typing:
- sudo nano /etc/systemd/system/tomcat.service
Paste the following contents into your service file. Modify the value of JAVA_HOME if necessary to match the value you found on your system. You may also want to modify the memory allocation settings that are specified in CATALINA_OPTS:
[Unit]
Description=Apache Tomcat Web Application Container
After=network.target
[Service]
Type=forking
Environment=JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
Environment=CATALINA_PID=/opt/tomcat/temp/tomcat.pid
Environment=CATALINA_HOME=/opt/tomcat
Environment=CATALINA_BASE=/opt/tomcat
Environment='CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC'
Environment='JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom'
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh
User=tomcat
Group=tomcat
UMask=0007
RestartSec=10
Restart=always
[Install]
WantedBy=multi-user.target
When you are finished, save and close the file.
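The UMask=0007 setting in the unit above removes the “other” permission bits from anything the Tomcat process creates, so new files default to mode 0660 (read/write for owner and group, nothing for other users). A quick illustration of the effect in a POSIX shell:

```shell
# Apply the same mask the unit file sets and create a file.
umask 0007
d=$(mktemp -d)
touch "$d/example"
# The default creation mode 0666 masked by 0007 gives 0660.
ls -l "$d/example" | cut -c1-10   # -rw-rw----
rm -r "$d"
```

This keeps Tomcat’s work files private to the tomcat user and group without blocking either of them.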
Next, reload the systemd daemon so that it knows about our service file:
- sudo systemctl daemon-reload
Start the Tomcat service by typing:
- sudo systemctl start tomcat
Double check that it started without errors by typing:
- sudo systemctl status tomcat
You should see output similar to the following:
Output● tomcat.service - Apache Tomcat Web Application Container
Loaded: loaded (/etc/systemd/system/tomcat.service; disabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 20:47:44 UTC; 3s ago
Process: 9037 ExecStart=/opt/tomcat/bin/startup.sh (code=exited, status=0/SUCCESS)
Main PID: 9046 (java)
Tasks: 46 (limit: 4915)
CGroup: /system.slice/tomcat.service
└─9046 /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin/java -Djava.util.logging.config.file=/opt/tomcat/conf/logging.properties -Dja
Sep 05 20:47:44 tomcat systemd[1]: Starting Apache Tomcat Web Application Container...
Sep 05 20:47:44 tomcat systemd[1]: Started Apache Tomcat Web Application Container.
This confirms that Tomcat is up and running on your server.
Now that the Tomcat service is started, we can test to make sure the default page is available.
Before we do that, we need to adjust the firewall to allow our requests to get to the service. If you followed the prerequisites, you will have a ufw firewall enabled currently.
Tomcat uses port 8080 to accept conventional requests. Allow traffic to that port by typing:
- sudo ufw allow 8080
With the firewall modified, you can access the default splash page by going to your domain or IP address followed by :8080 in a web browser:
Open in a web browser: http://server_domain_or_IP:8080
You will see the default Tomcat splash page, in addition to other information. However, if you click the links for the Manager App, for instance, you will be denied access. We can configure that access next.
If you were able to access Tomcat successfully, now is a good time to enable the service file so that Tomcat automatically starts at boot:
- sudo systemctl enable tomcat
In order to use the manager web app that comes with Tomcat, we must add a login to our Tomcat server. We will do this by editing the tomcat-users.xml file:
- sudo nano /opt/tomcat/conf/tomcat-users.xml
You will want to add a user who can access the manager-gui and admin-gui (web apps that come with Tomcat). You can do so by defining a user, similar to the example below, between the tomcat-users tags. Be sure to change the username and password to something secure:
<tomcat-users . . .>
<user username="admin" password="password" roles="manager-gui,admin-gui"/>
</tomcat-users>
Save and close the file when you are finished.
By default, newer versions of Tomcat restrict access to the Manager and Host Manager apps to connections coming from the server itself. Since we are installing on a remote machine, you will probably want to remove or alter this restriction. To change the IP address restrictions on these, open the appropriate context.xml files.
For the Manager app, type:
- sudo nano /opt/tomcat/webapps/manager/META-INF/context.xml
For the Host Manager app, type:
- sudo nano /opt/tomcat/webapps/host-manager/META-INF/context.xml
Inside, comment out the IP address restriction to allow connections from anywhere. Alternatively, if you would like to allow access only to connections coming from your own IP address, you can add your public IP address to the list:
<Context antiResourceLocking="false" privileged="true" >
<!--<Valve className="org.apache.catalina.valves.RemoteAddrValve"
allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />-->
</Context>
Save and close the files when you are finished.
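The allow attribute on the Valve above is a regular expression matched against the client address. If you keep the Valve and want to admit your own public IP as well, you would append it to the pattern, for example |203\.0\.113\.5 (a placeholder address; substitute your own). A quick way to sanity-check such a pattern with grep -E, rewriting Java’s \d shorthand as [0-9]:

```shell
# Simplified version of the Valve's allow pattern, plus one extra
# (placeholder) public IP. Addresses not matching it would be rejected.
pattern='^(127\.[0-9]+\.[0-9]+\.[0-9]+|::1|203\.0\.113\.5)$'
echo '127.0.0.1'    | grep -Eq "$pattern" && echo '127.0.0.1 allowed'
echo '203.0.113.5'  | grep -Eq "$pattern" && echo '203.0.113.5 allowed'
echo '198.51.100.7' | grep -Eq "$pattern" || echo '198.51.100.7 blocked'
```

Remember that the dots must stay escaped, since an unescaped . would match any character.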
To put our changes into effect, restart the Tomcat service:
- sudo systemctl restart tomcat
Now that we have created a user, we can access the web management interface again in a web browser. Once again, you can get to the correct interface by entering your server’s domain name or IP address followed by :8080 in your browser:
Open in a web browser: http://server_domain_or_IP:8080
The page you see should be the same one you were given when you tested earlier:
Let’s take a look at the Manager App, accessible via the link or http://server_domain_or_IP:8080/manager/html. You will need to enter the account credentials that you added to the tomcat-users.xml file. Afterwards, you should see a page that looks like this:
The Web Application Manager is used to manage your Java applications. You can Start, Stop, Reload, Deploy, and Undeploy here. You can also run some diagnostics on your apps (i.e. find memory leaks). Lastly, information about your server is available at the very bottom of this page.
Now let’s take a look at the Host Manager, accessible via the link or http://server_domain_or_IP:8080/host-manager/html/:
From the Virtual Host Manager page, you can add virtual hosts to serve your applications from.
Your installation of Tomcat is complete! You are now free to deploy your own Java web applications.
Currently, your Tomcat installation is functional, but entirely unencrypted. This means that all data, including sensitive items like passwords, are sent in plain text that can be intercepted and read by other parties on the internet. In order to prevent this from happening, it is strongly recommended that you encrypt your connections with SSL. You can find out how to encrypt your connections to Tomcat by following this guide (note: this guide covers Tomcat 8 encryption on Ubuntu 16.04).
While many users need the functionality of a database management system like MariaDB, they may not feel comfortable interacting with the system solely from the MariaDB prompt.
phpMyAdmin was created so that users can interact with MariaDB through a web interface. In this guide, we’ll discuss how to install and secure phpMyAdmin so that you can safely use it to manage your databases on a Debian 9 system.
Before you get started with this guide, you need to have some basic steps completed.
First, we’ll assume that your server has a non-root user with sudo privileges, as well as a firewall configured with ufw, as described in the initial server setup guide for Debian 9.
We’re also going to assume that you’ve completed a LAMP (Linux, Apache, MariaDB, and PHP) installation on your Debian 9 server. If you’ve not yet done this, follow our guide on installing a LAMP stack on Debian 9 to set this up.
Finally, there are important security considerations when using software like phpMyAdmin, since it communicates directly with your MariaDB installation, handles authentication using MariaDB credentials, and executes and returns results for arbitrary SQL queries.
For these reasons, and because it is a widely-deployed PHP application which is frequently targeted for attack, you should never run phpMyAdmin on remote systems over a plain HTTP connection. If you do not have an existing domain configured with an SSL/TLS certificate, you can follow this guide on securing Apache with Let’s Encrypt on Debian 9. This will require you to register a domain name, create DNS records for your server, and set up an Apache Virtual Host.
Once you are finished with these steps, you’re ready to get started with this guide.
To get started, we will install phpMyAdmin from the default Debian repositories.
This is done by updating your server’s package index and then using the apt packaging system to pull down the files and install them on your system:
- sudo apt update
- sudo apt install phpmyadmin php-mbstring php-gettext
This will ask you a few questions in order to configure your installation correctly.
Warning: When the prompt appears, “apache2” is highlighted, but not selected. If you do not hit SPACE to select Apache, the installer will not move the necessary files during installation. Hit SPACE, TAB, and then ENTER to select Apache.
- For the server selection, choose apache2.
- Select Yes when asked whether to use dbconfig-common to set up the database.
Note: MariaDB is a community-developed fork of MySQL, and although the two programs are closely related, they are not completely interchangeable. While phpMyAdmin was designed specifically for managing MySQL databases and makes reference to MySQL in various dialogue boxes, rest assured that your installation of MariaDB will work correctly with phpMyAdmin.
The installation process adds the phpMyAdmin Apache configuration file into the /etc/apache2/conf-enabled/ directory, where it is read automatically. The only thing you need to do is explicitly enable the mbstring PHP extension, which is used to manage non-ASCII strings and convert strings to different encodings. Do this by typing:
- sudo phpenmod mbstring
Afterwards, restart Apache for your changes to be recognized:
- sudo systemctl restart apache2
phpMyAdmin is now installed and configured. However, before you can log in and begin managing your MariaDB databases, you will need to ensure that your MariaDB users have the privileges required for interacting with the program.
When you installed phpMyAdmin onto your server, it automatically created a database user called phpmyadmin which performs certain underlying processes for the program. Rather than logging in as this user with the administrative password you set during installation, it’s recommended that you log in using a different account.
In new installs on Debian systems, the root MariaDB user is set to authenticate using the unix_socket plugin by default rather than with a password. This allows for some greater security and usability in many cases, but it can also complicate things when you need to allow an external program (e.g., phpMyAdmin) administrative rights through this user. Because the server uses the root account for tasks like log rotation and starting and stopping the server, it is best not to change the root account’s authentication details. Since phpMyAdmin requires users to authenticate with a password, you will need to create a new MariaDB account in order to access the interface.
If you followed the prerequisite tutorial on installing a LAMP stack and created a MariaDB user account as described in Step 2, you can just log in to phpMyAdmin under that account using the password you created when you set it up by visiting this link:
https://your_domain_or_IP/phpmyadmin
If you haven’t created a MariaDB user, or if you have but you’d like to create another user just for the purpose of managing databases through phpMyAdmin, continue on with this section to learn how to set one up.
Begin by opening up the MariaDB shell:
- sudo mariadb
Note: If you have password authentication enabled, as you would if you’ve already created a new user account for your MariaDB server, you will need to use a different command to access the MariaDB shell. The following will run your MariaDB client with regular user privileges, and you will only gain administrator privileges within the database by authenticating:
- mariadb -u user -p
From there, create a new user and give it a strong password:
- CREATE USER 'sammy'@'localhost' IDENTIFIED BY 'password';
Then, grant your new user appropriate privileges. For example, you could grant the user privileges to all tables in all databases, as well as the power to add, change, and remove user privileges, with this command:
- GRANT ALL PRIVILEGES ON *.* TO 'sammy'@'localhost' WITH GRANT OPTION;
Following that, exit the MariaDB shell:
- exit
You can now access the web interface by visiting your server’s domain name or public IP address, followed by /phpmyadmin:
https://your_domain_or_IP/phpmyadmin
Log in to the interface with the username and password you configured.
When you log in, you’ll see the user interface, which will look something like this:
Now that you’re able to connect and interact with phpMyAdmin, all that’s left to do is harden your system’s security to protect it from attackers.
Because of its ubiquity, phpMyAdmin is a popular target for attackers, and you should take extra care to prevent unauthorized access. One of the easiest ways of doing this is to place a gateway in front of the entire application by using Apache’s built-in .htaccess authentication and authorization functionalities.
To do this, you must first enable the use of .htaccess file overrides by editing your Apache configuration file.
Edit the linked file that has been placed in your Apache configuration directory:
- sudo nano /etc/apache2/conf-available/phpmyadmin.conf
Add an AllowOverride All directive within the <Directory /usr/share/phpmyadmin> section of the configuration file, like this:
<Directory /usr/share/phpmyadmin>
Options FollowSymLinks
DirectoryIndex index.php
AllowOverride All
. . .
When you have added this line, save and close the file.
To implement the changes you made, restart Apache:
- sudo systemctl restart apache2
Now that you have enabled .htaccess use for your application, you need to create one to actually implement some security.
In order for this to be successful, the file must be created within the application directory. You can create the necessary file and open it in your text editor with root privileges by typing:
- sudo nano /usr/share/phpmyadmin/.htaccess
Within this file, enter the following information:
AuthType Basic
AuthName "Restricted Files"
AuthUserFile /etc/phpmyadmin/.htpasswd
Require valid-user
Here is what each of these lines means:
- AuthType Basic: This line specifies the authentication type that you are implementing. This type will implement password authentication using a password file.
- AuthName: This sets the message for the authentication dialog box. You should keep this generic so that unauthorized users won’t gain any information about what is being protected.
- AuthUserFile: This sets the location of the password file that will be used for authentication. This should be outside of the directories that are being served. We will create this file shortly.
- Require valid-user: This specifies that only authenticated users should be given access to this resource. This is what actually stops unauthorized users from entering.
When you are finished, save and close the file.
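With AuthType Basic, the browser resends the credentials on every request, base64-encoded (not encrypted) in an Authorization header, which is one more reason to serve phpMyAdmin only over HTTPS. The encoding is easy to inspect; the credentials here are illustrative:

```shell
# HTTP Basic auth sends: Authorization: Basic <base64(username:password)>
# Base64 is trivially reversible, so this is obfuscation, not encryption.
printf 'sammy:secret' | base64   # c2FtbXk6c2VjcmV0
```

Anyone who can read the header can decode it with `base64 -d`, so the TLS layer is what actually protects the password in transit.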
The location that you selected for your password file was /etc/phpmyadmin/.htpasswd. You can now create this file and pass it an initial user with the htpasswd utility:
- sudo htpasswd -c /etc/phpmyadmin/.htpasswd username
You will be prompted to select and confirm a password for the user you are creating. Afterwards, the file is created with the hashed password that you entered.
If you want to add an additional user, you need to do so without the -c flag, like this:
- sudo htpasswd /etc/phpmyadmin/.htpasswd additionaluser
Now, when you access your phpMyAdmin subdirectory, you will be prompted for the additional account name and password that you just configured:
https://domain_name_or_IP/phpmyadmin
After entering the Apache authentication, you’ll be taken to the regular phpMyAdmin authentication page to enter your MariaDB credentials. This setup adds an additional layer of security, which is desirable since phpMyAdmin has suffered from vulnerabilities in the past.
You should now have phpMyAdmin configured and ready to use on your Debian 9 server. Using this interface, you can easily create databases, users, tables, etc., and perform the usual operations like deleting and modifying structures and data.
Let’s Encrypt is a Certificate Authority (CA) that provides an easy way to obtain and install free TLS/SSL certificates, thereby enabling encrypted HTTPS on web servers. It simplifies the process by providing a software client, Certbot, that attempts to automate most (if not all) of the required steps. Currently, the entire process of obtaining and installing a certificate is fully automated on both Apache and Nginx.
In this tutorial, you will use Certbot to obtain a free SSL certificate for Apache on Debian 9 and set up your certificate to renew automatically.
This tutorial will use a separate Apache virtual host file instead of the default configuration file. We recommend creating new Apache virtual host files for each domain because it helps to avoid common mistakes and maintains the default files as a fallback configuration.
To follow this tutorial, you will need:
One Debian 9 server set up by following this initial server setup for Debian 9 tutorial, including a non-root user with sudo privileges and a firewall.
A fully registered domain name. This tutorial will use example.com throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.
Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.
- example.com pointing to your server’s public IP address.
- www.example.com pointing to your server’s public IP address.
Apache installed by following How To Install Apache on Debian 9. Be sure that you have a virtual host file for your domain. This tutorial will use /etc/apache2/sites-available/example.com.conf as an example.
The first step to using Let’s Encrypt to obtain an SSL certificate is to install the Certbot software on your server.
As of this writing, Certbot is not available from the Debian software repositories by default. In order to download the software using apt, you will need to add the backports repository to your sources.list file, where apt looks for package sources. Backports are packages from Debian’s testing and unstable distributions that are recompiled so they will run without new libraries on stable Debian distributions.
To add the backports repository, open (or create) the sources.list file in your /etc/apt/ directory:
- sudo nano /etc/apt/sources.list
At the bottom of the file, add the following line:
. . .
deb http://ftp.debian.org/debian stretch-backports main contrib non-free
This includes the main packages, which are Debian Free Software Guidelines (DFSG)-compliant, as well as the non-free and contrib components, which are either not DFSG-compliant themselves or include dependencies in this category.
Save and close the file by pressing CTRL+X, Y, then ENTER, then update your package lists:
- sudo apt update
Then install Certbot with the following command. Note that the -t option tells apt to search for the package by looking in the backports repository you just added:
- sudo apt install python-certbot-apache -t stretch-backports
Certbot is now ready to use, but in order for it to configure SSL for Apache, we need to verify that Apache has been configured correctly.
Certbot needs to be able to find the correct virtual host in your Apache configuration for it to automatically configure SSL. Specifically, it does this by looking for a ServerName directive that matches the domain you request a certificate for.
If you followed the virtual host setup step in the Apache installation tutorial, you should have a VirtualHost block for your domain at /etc/apache2/sites-available/example.com.conf with the ServerName directive already set appropriately.
To check, open the virtual host file for your domain using nano or your favorite text editor:
- sudo nano /etc/apache2/sites-available/example.com.conf
Find the existing ServerName line. It should look like this, with your own domain name instead of example.com:
...
ServerName example.com
...
If it doesn’t already, update the ServerName directive to point to your domain name. Then save the file, quit your editor, and verify the syntax of your configuration edits:
- sudo apache2ctl configtest
If there aren’t any syntax errors, you will see this output:
OutputSyntax OK
If you get an error, reopen the virtual host file and check for any typos or missing characters. Once your configuration file’s syntax is correct, reload Apache to load the new configuration:
- sudo systemctl reload apache2
Certbot can now find the correct VirtualHost block and update it.
Next, let’s update the firewall to allow HTTPS traffic.
If you have the ufw firewall enabled, as recommended by the prerequisite guides, you’ll need to adjust the settings to allow for HTTPS traffic. Luckily, when installed on Debian, ufw comes packaged with a few profiles that help to simplify the process of changing firewall rules for HTTP and HTTPS traffic.
You can see the current setting by typing:
- sudo ufw status
If you followed Step 2 of our guide on How to Install Apache on Debian 9, the output of this command will look like this, showing that only HTTP traffic is allowed to the web server:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
WWW ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
WWW (v6) ALLOW Anywhere (v6)
To additionally let in HTTPS traffic, allow the “WWW Full” profile and delete the redundant “WWW” profile allowance:
- sudo ufw allow 'WWW Full'
- sudo ufw delete allow 'WWW'
Your status should now look like this:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
WWW Full ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
WWW Full (v6) ALLOW Anywhere (v6)
Next, let’s run Certbot and fetch our certificates.
Certbot provides a variety of ways to obtain SSL certificates through plugins. The Apache plugin will take care of reconfiguring Apache and reloading the config whenever necessary. To use this plugin, type the following:
- sudo certbot --apache -d example.com -d www.example.com
This runs certbot
with the --apache
plugin, using -d
to specify the names you’d like the certificate to be valid for.
If this is your first time running certbot
, you will be prompted to enter an email address and agree to the terms of service. After doing so, certbot
will communicate with the Let’s Encrypt server, then run a challenge to verify that you control the domain you’re requesting a certificate for.
If that’s successful, certbot
will ask how you’d like to configure your HTTPS settings:
OutputPlease choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel):
Select your choice then hit ENTER
. The configuration will be updated, and Apache will reload to pick up the new settings. certbot
will wrap up with a message telling you the process was successful and where your certificates are stored:
OutputIMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/example.com/privkey.pem
Your cert will expire on 2018-12-04. To obtain a new or tweaked
version of this certificate in the future, simply run certbot again
with the "certonly" option. To non-interactively renew *all* of
your certificates, run "certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
Your certificates are downloaded, installed, and loaded. Try reloading your website using https://
and notice your browser’s security indicator. It should indicate that the site is properly secured, usually with a green lock icon. If you test your server using the SSL Labs Server Test, it will get an A grade.
Let’s finish by testing the renewal process.
Let’s Encrypt’s certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The certbot
package we installed takes care of this for us by adding a renew script to /etc/cron.d
. This script runs twice a day and will automatically renew any certificate that’s within thirty days of expiration.
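You can sanity-check the thirty-day renewal window by hand. The following sketch uses GNU date and the expiry date from the sample certbot output above; substitute your own certificate's expiry date:

```shell
# Expiry date taken from the sample certbot output above
expiry="2018-12-04"
# Days between now and the expiry date (negative once it has passed)
days_left=$(( ( $(date -d "$expiry" +%s) - $(date +%s) ) / 86400 ))
if [ "$days_left" -le 30 ]; then
  echo "within the thirty-day renewal window"
else
  echo "renewal not yet due ($days_left days left)"
fi
```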
To test the renewal process, you can do a dry run with certbot
:
- sudo certbot renew --dry-run
If you see no errors, you’re all set. When necessary, Certbot will renew your certificates and reload Apache to pick up the changes. If the automated renewal process ever fails, Let’s Encrypt will send a message to the email you specified, warning you when your certificate is about to expire.
In this tutorial, you installed the Let’s Encrypt client certbot
, downloaded SSL certificates for your domain, configured Apache to use these certificates, and set up automatic certificate renewal. If you have further questions about using Certbot, their documentation is a good place to start.
Redis is an in-memory key-value store known for its flexibility, performance, and wide language support. This tutorial demonstrates how to install, configure, and secure Redis on a Debian 9 server.
To complete this guide, you will need access to a Debian 9 server that has a non-root user with sudo
privileges and a basic firewall configured. You can set this up by following our Initial Server Setup guide.
When you are ready to begin, log in to your server as your sudo-enabled user and continue below.
In order to get the latest version of Redis, we will use apt
to install it from the official Debian repositories.
Update your local apt
package cache and install Redis by typing:
- sudo apt update
- sudo apt install redis-server
This will download and install Redis and its dependencies. Following this, there is one important configuration change to make in the Redis configuration file, which was generated automatically during the installation.
Open this file with your preferred text editor:
- sudo nano /etc/redis/redis.conf
Inside the file, find the supervised
directive. This directive allows you to declare an init system to manage Redis as a service, providing you with more control over its operation. The supervised
directive is set to no
by default. Since you are running Debian, which uses the systemd init system, change this to systemd
:
. . .
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised systemd
. . .
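As an alternative to editing interactively, the same change can be made non-interactively with sed. This is a sketch run against a temporary sample file; on your server the target would be /etc/redis/redis.conf:

```shell
# Stand-in for /etc/redis/redis.conf with the default directive
printf 'supervised no\n' > /tmp/redis.conf.sample
# Flip the supervised directive from "no" to "systemd" in place
sed -i 's/^supervised no$/supervised systemd/' /tmp/redis.conf.sample
grep '^supervised' /tmp/redis.conf.sample   # → supervised systemd
```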
That’s the only change you need to make to the Redis configuration file at this point, so save and close it when you are finished. Then, reload the Redis service file to reflect the changes you made to the configuration file:
- sudo systemctl restart redis
With that, you’ve installed and configured Redis and it’s running on your machine. Before you begin using it, though, it’s prudent to first check whether Redis is functioning correctly.
As with any newly-installed software, it’s a good idea to ensure that Redis is functioning as expected before making any further changes to its configuration. We will go over a handful of ways to check that Redis is working correctly in this step.
Start by checking that the Redis service is running:
- sudo systemctl status redis
If it is running without any errors, this command will produce output similar to the following:
Output● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 20:19:44 UTC; 41s ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Process: 10829 ExecStopPost=/bin/run-parts --verbose /etc/redis/redis-server.post-down.d (code=exited, status=0/SUCCESS)
Process: 10825 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=0/SUCCESS)
Process: 10823 ExecStop=/bin/run-parts --verbose /etc/redis/redis-server.pre-down.d (code=exited, status=0/SUCCESS)
Process: 10842 ExecStartPost=/bin/run-parts --verbose /etc/redis/redis-server.post-up.d (code=exited, status=0/SUCCESS)
Process: 10838 ExecStart=/usr/bin/redis-server /etc/redis/redis.conf (code=exited, status=0/SUCCESS)
Process: 10834 ExecStartPre=/bin/run-parts --verbose /etc/redis/redis-server.pre-up.d (code=exited, status=0/SUCCESS)
Main PID: 10841 (redis-server)
Tasks: 3 (limit: 4915)
CGroup: /system.slice/redis-server.service
└─10841 /usr/bin/redis-server 127.0.0.1:6379
. . .
Here, you can see that Redis is running and is already enabled, meaning that it is set to start up every time the server boots.
Note: This setting is desirable for many common use cases of Redis. If, however, you prefer to start up Redis manually every time your server boots, you can configure this with the following command:
- sudo systemctl disable redis
To test that Redis is functioning correctly, connect to the server using the command-line client:
- redis-cli
In the prompt that follows, test connectivity with the ping
command:
- ping
OutputPONG
This output confirms that the server connection is still alive. Next, check that you’re able to set keys by running:
- set test "It's working!"
OutputOK
Retrieve the value by typing:
- get test
Assuming everything is working, you will be able to retrieve the value you stored:
Output"It's working!"
After confirming that you can fetch the value, exit the Redis prompt to get back to the shell:
- exit
As a final test, we will check whether Redis is able to persist data even after it’s been stopped or restarted. To do this, first restart the Redis instance:
- sudo systemctl restart redis
Then connect with the command-line client once again and confirm that your test value is still available:
- redis-cli
- get test
The value of your key should still be accessible:
Output"It's working!"
Exit out into the shell again when you are finished:
- exit
With that, your Redis installation is fully operational and ready for you to use. However, some of its default configuration settings are insecure and provide malicious actors with opportunities to attack and gain access to your server and its data. The remaining steps in this tutorial cover methods for mitigating these vulnerabilities, as prescribed by the official Redis website. Although these steps are optional and Redis will still function if you choose not to follow them, it is strongly recommended that you complete them in order to harden your system’s security.
By default, Redis is only accessible from localhost. However, if you installed and configured Redis by following a different tutorial than this one, you might have updated the configuration file to allow connections from anywhere. This is not as secure as binding to localhost.
To correct this, open the Redis configuration file for editing:
- sudo nano /etc/redis/redis.conf
Locate this line and make sure it is uncommented (remove the #
if it exists):
bind 127.0.0.1
Save and close the file when finished (press CTRL + X
, Y
, then ENTER
).
Then, restart the service to ensure that systemd reads your changes:
- sudo systemctl restart redis
To check that this change has gone into effect, run the following netstat
command:
- sudo netstat -lnp | grep redis
Outputtcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 10959/redis-server
This output shows that the redis-server
program is bound to localhost (127.0.0.1
), reflecting the change you just made to the configuration file. If you see another IP address in that column (0.0.0.0
, for example), then you should double check that you uncommented the correct line and restart the Redis service again.
Now that your Redis installation is listening only on localhost, it will be more difficult for malicious actors to make requests or gain access to your server. However, Redis isn’t currently set to require users to authenticate themselves before making changes to its configuration or the data it holds. To remedy this, Redis allows you to require users to authenticate with a password before making changes via the Redis client (redis-cli
).
Configuring a Redis password enables one of its two built-in security features — the auth
command, which requires clients to authenticate to access the database. The password is configured directly in Redis’s configuration file, /etc/redis/redis.conf
, so open that file again with your preferred editor:
- sudo nano /etc/redis/redis.conf
Scroll to the SECURITY
section and look for a commented directive that reads:
# requirepass foobared
Uncomment it by removing the #
, and change foobared
to a secure password.
Note: Above the requirepass
directive in the redis.conf
file, there is a commented warning:
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
Thus, it’s important that you specify a very strong and very long value as your password. Rather than make up a password yourself, you can use the openssl
command to generate a random one, as in the following example. Piping the output of the first command into the second openssl
command, as shown here, removes any line breaks produced by the first command:
- openssl rand 60 | openssl base64 -A
Your output should look something like:
OutputRBOJ9cCNoGCKhlEBwQLHri1g+atWgn4Xn4HwNUbtzoVxAYxkiYBi7aufl4MILv1nxBqR4L6NNzI0X6cE
After copying and pasting the output of that command as the new value for requirepass
, it should read:
/etc/redis/redis.confrequirepass RBOJ9cCNoGCKhlEBwQLHri1g+atWgn4Xn4HwNUbtzoVxAYxkiYBi7aufl4MILv1nxBqR4L6NNzI0X6cE
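If openssl is unavailable, coreutils alone can produce a comparable single-line secret. This sketch uses GNU base64's -w0 flag (the equivalent of openssl's -A) and verifies that the value decodes back to 60 random bytes:

```shell
# Generate 60 random bytes and base64-encode them on a single line
pass=$(head -c 60 /dev/urandom | base64 -w0)
# Verify the encoded value decodes back to exactly 60 bytes
printf '%s' "$pass" | base64 -d | wc -c   # → 60
```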
After setting the password, save and close the file, then restart Redis:
- sudo systemctl restart redis.service
To test that the password works, access the Redis command line:
- redis-cli
The following shows a sequence of commands used to test whether the Redis password works. The first command tries to set a key to a value before authentication:
- set key1 10
That won’t work because you didn’t authenticate, so Redis returns an error:
Output(error) NOAUTH Authentication required.
The next command authenticates with the password specified in the Redis configuration file:
- auth your_redis_password
Redis acknowledges:
OutputOK
After that, running the previous command again will succeed:
- set key1 10
OutputOK
get key1
queries Redis for the value of the new key.
- get key1
Output"10"
After confirming that you’re able to run commands in the Redis client after authenticating, you can exit the redis-cli
:
- quit
Next, we’ll look at renaming Redis commands which, if entered by mistake or by a malicious actor, could cause serious damage to your machine.
The other security feature built into Redis involves renaming or completely disabling certain commands that are considered dangerous.
When run by unauthorized users, such commands can be used to reconfigure, destroy, or otherwise wipe your data. Like the authentication password, renaming or disabling commands is configured in the same SECURITY
section of the /etc/redis/redis.conf
file.
Some of the commands that are considered dangerous include: FLUSHDB, FLUSHALL, KEYS, PEXPIRE, DEL, CONFIG, SHUTDOWN, BGREWRITEAOF, BGSAVE, SAVE, SPOP, SREM, RENAME, and DEBUG. This is not a comprehensive list, but renaming or disabling all of the commands in that list is a good starting point for enhancing your Redis server’s security.
Whether you should disable or rename a command depends on your specific needs or those of your site. If you know you will never use a command that could be abused, then you may disable it. Otherwise, it might be in your best interest to rename it.
To rename or disable Redis commands, open the configuration file once more:
- sudo nano /etc/redis/redis.conf
Warning: The following steps showing how to disable and rename commands are examples. You should only choose to disable or rename the commands that make sense for you. You can review the full list of commands for yourself and determine how they might be misused at redis.io/commands.
To disable a command, simply rename it to an empty string (signified by a pair of quotation marks with no characters between them), as shown below:
. . .
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command DEBUG ""
. . .
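When disabling several commands at once, a short loop can generate the rename-command lines for you. This sketch writes to a temporary file; on a real server you would append the lines to the SECURITY section of /etc/redis/redis.conf instead:

```shell
# Generate a disable line for each command in the list; an empty string
# as the new name kills the command entirely
for cmd in FLUSHDB FLUSHALL DEBUG; do
  printf 'rename-command %s ""\n' "$cmd"
done > /tmp/redis-disable.conf
cat /tmp/redis-disable.conf
```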
To rename a command, give it another name as shown in the examples below. Renamed commands should be difficult for others to guess, but easy for you to remember:
. . .
# rename-command CONFIG ""
rename-command SHUTDOWN SHUTDOWN_MENOT
rename-command CONFIG ASC12_CONFIG
. . .
Save your changes and close the file.
After renaming a command, apply the change by restarting Redis:
- sudo systemctl restart redis
To test the new command, enter the Redis command line:
- redis-cli
Then, authenticate:
- auth your_redis_password
OutputOK
Let’s assume that you renamed the CONFIG
command to ASC12_CONFIG
, as in the preceding example. First, try using the original CONFIG
command. It should fail, because you’ve renamed it:
- config get requirepass
Output(error) ERR unknown command 'config'
Calling the renamed command, however, will be successful. It is not case-sensitive:
- asc12_config get requirepass
Output1) "requirepass"
2) "your_redis_password"
Finally, you can exit from redis-cli
:
- exit
Note that if you’re already using the Redis command line and then restart Redis, you’ll need to re-authenticate. Otherwise, you’ll get this error if you type a command:
OutputNOAUTH Authentication required.
Regarding the practice of renaming commands, there’s a cautionary statement at the end of the SECURITY
section in /etc/redis/redis.conf
which reads:
Please note that changing the name of commands that are logged into the AOF file or transmitted to slaves may cause problems.
Note: The Redis project chooses to use the terms “master” and “slave” while DigitalOcean generally prefers alternative descriptors. In order to avoid confusion we’ve chosen to use the terms used in the Redis documentation here.
That means if the renamed command is not in the AOF file, or if it is but the AOF file has not been transmitted to slaves, then there should be no problem.
So, keep that in mind when you’re trying to rename commands. The best time to rename a command is when you’re not using AOF persistence, or right after installation, that is, before your Redis-using application has been deployed.
When you’re using AOF and dealing with a master-slave installation, consider this answer from the project’s GitHub issue page. The following is a reply to the author’s question:
The commands are logged to the AOF and replicated to the slave the same way they are sent, so if you try to replay the AOF on an instance that doesn’t have the same renaming, you may face inconsistencies as the command cannot be executed (same for slaves).
Thus, the best way to handle renaming in cases like that is to make sure that renamed commands are applied to all instances in master-slave installations.
In this tutorial, you installed and configured Redis, validated that your Redis installation is functioning correctly, and used its built-in security features to make it less vulnerable to attacks from malicious actors.
Keep in mind that once someone is logged in to your server, it’s very easy to circumvent the Redis-specific security features we’ve put in place. Therefore, the most important security feature on your Redis server is your firewall (which you configured if you followed the prerequisite Initial Server Setup tutorial), as this makes it extremely difficult for malicious actors to jump that fence.
The mdadm
utility can be used to create and manage storage arrays using Linux’s software RAID capabilities. Administrators have great flexibility in coordinating their individual storage devices and creating logical storage devices that have greater performance or redundancy characteristics.
In this guide, we will go over a number of different RAID configurations that can be set up using a Debian 9 server.
In order to complete the steps in this guide, you should have:
A non-root user with sudo
privileges on a Debian 9 server: The steps in this guide will be completed with a sudo
user. To learn how to set up an account with these privileges, follow our Debian 9 initial server setup guide.
Info: Due to the inefficiency of RAID setups on virtual private servers, we don’t recommend deploying a RAID setup on DigitalOcean Droplets. The efficiency of datacenter disk replication makes the benefits of RAID negligible relative to a setup on bare-metal hardware. This tutorial aims to be a reference for a conventional RAID setup.
Before we begin, we need to install mdadm
, the tool that allows us to set up and manage software RAID arrays in Linux. This is available in Debian’s default repositories.
Update the local package cache to retrieve an up-to-date list of available packages and then download and install the package:
- sudo apt update
- sudo apt install mdadm
This will install mdadm
and all of its dependencies. Verify that the utility is installed by typing:
- sudo mdadm -V
Outputmdadm - v3.4 - 28th January 2016
The application version should be displayed, indicating that mdadm
is installed and ready to use.
Throughout this guide, we will be introducing the steps to create a number of different RAID levels. If you wish to follow along, you will likely want to reuse your storage devices after each section. This section can be referenced to learn how to quickly reset your component storage devices prior to testing a new RAID level. Skip this section for now if you have not yet set up any arrays.
Warning: This process will completely destroy the array and any data written to it. Make sure that you are operating on the correct array and that you have copied off any data you need to retain prior to destroying the array.
Find the active arrays in the /proc/mdstat
file by typing:
- cat /proc/mdstat
OutputPersonalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 sdc[1] sdd[0]
209584128 blocks super 1.2 512k chunks
unused devices: <none>
Unmount the array from the filesystem:
- sudo umount /dev/md0
Then, stop and remove the array by typing:
- sudo mdadm --stop /dev/md0
Find the devices that were used to build the array with the following command:
Warning: Keep in mind that the /dev/sd*
names can change any time you reboot! Check them every time to make sure you are operating on the correct devices.
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G linux_raid_member disk
sdd 100G linux_raid_member disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
After discovering the devices used to create an array, zero their superblock to remove the RAID metadata and reset them to normal:
- sudo mdadm --zero-superblock /dev/sdc
- sudo mdadm --zero-superblock /dev/sdd
You should remove any of the persistent references to the array. Edit the /etc/fstab
file and comment out or remove the reference to your array:
- sudo nano /etc/fstab
. . .
# /dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
Also, comment out or remove the array definition from the /etc/mdadm/mdadm.conf
file:
- sudo nano /etc/mdadm/mdadm.conf
. . .
# ARRAY /dev/md0 metadata=1.2 name=mdadmwrite:0 UUID=7261fb9c:976d0d97:30bc63ce:85e76e91
Finally, update the initramfs
again so that the early boot process does not try to bring an unavailable array online.
- sudo update-initramfs -u
At this point, you should be ready to reuse the storage devices individually, or as components of a different array.
The RAID 0 array works by breaking up data into chunks and striping it across the available disks. This means that each disk contains a portion of the data and that multiple disks will be referenced when retrieving information.
To get started, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have two disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sda
and /dev/sdb
identifiers for this session. These will be the raw components we will use to build the array.
To create a RAID 0 array with these components, pass them in to the mdadm --create
command. You will have to specify the device name you wish to create (/dev/md0
in our case), the RAID level, and the number of devices:
- sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
You can ensure that the RAID was successfully created by checking the /proc/mdstat
file:
- cat /proc/mdstat
OutputPersonalities : [raid0]
md0 : active raid0 sdb[1] sda[0]
209584128 blocks super 1.2 512k chunks
unused devices: <none>
As you can see in the highlighted line, the /dev/md0
device has been created in the RAID 0 configuration using the /dev/sda
and /dev/sdb
devices.
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
- df -h -x devtmpfs -x tmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1003M 23G 5% /
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf
file. You can automatically scan the active array and append the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab
file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 0 array should now automatically be assembled and mounted each boot.
The RAID 1 array type is implemented by mirroring data across all available disks. Each disk in a RAID 1 array gets a full copy of the data, providing redundancy in the event of a device failure.
To get started, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have two disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sda
and /dev/sdb
identifiers for this session. These will be the raw components we will use to build the array.
To create a RAID 1 array with these components, pass them in to the mdadm --create
command. You will have to specify the device name you wish to create (/dev/md0
in our case), the RAID level, and the number of devices:
- sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
If the component devices you are using are not partitions with the boot
flag enabled, you will likely see the following warning. It is safe to type y to continue:
Outputmdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 104792064K
Continue creating array? y
The mdadm
tool will start to mirror the drives. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the mirroring by checking the /proc/mdstat
file:
- cat /proc/mdstat
OutputPersonalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb[1] sda[0]
104792064 blocks super 1.2 [2/2] [UU]
[>....................] resync = 1.5% (1629632/104792064) finish=8.4min speed=203704K/sec
unused devices: <none>
As you can see in the first highlighted line, the /dev/md0
device has been created in the RAID 1 configuration using the /dev/sda
and /dev/sdb
devices. The second highlighted line shows the progress on the mirroring. You can continue the guide while this process completes.
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
- df -h -x devtmpfs -x tmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1003M 23G 5% /
/dev/md0 98G 61M 93G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf
file. You can automatically scan the active array and append the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab
file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 1 array should now automatically be assembled and mounted each boot.
The RAID 5 array type is implemented by striping data across the available devices. One component of each stripe is a calculated parity block. If a device fails, the parity block and the remaining blocks can be used to calculate the missing data. The device that receives the parity block is rotated so that each device has a balanced amount of parity information.
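The parity calculation is a bitwise XOR across the data blocks in a stripe, and because XOR is its own inverse, combining the parity with the surviving blocks recovers a lost one. A minimal sketch using two one-byte "blocks":

```shell
d1=202; d2=119                 # two data blocks (single bytes here)
parity=$(( d1 ^ d2 ))          # parity block stored on the third disk
recovered=$(( parity ^ d2 ))   # suppose the disk holding d1 fails
echo "$recovered"              # → 202, the lost block is reconstructed
```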
To get started, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have three disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sda
, /dev/sdb
, and /dev/sdc
identifiers for this session. These will be the raw components we will use to build the array.
To create a RAID 5 array with these components, pass them in to the mdadm --create
command. You will have to specify the device name you wish to create (/dev/md0
in our case), the RAID level, and the number of devices:
- sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
The mdadm
tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons). This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat
file:
- cat /proc/mdstat
OutputPersonalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 0.9% (1031612/104792064) finish=10.0min speed=171935K/sec
unused devices: <none>
As you can see in the first highlighted line, the /dev/md0
device has been created in the RAID 5 configuration using the /dev/sda
, /dev/sdb
and /dev/sdc
devices. The second highlighted line shows the progress on the build.
Warning: Due to the way that mdadm
builds RAID 5 arrays, while the array is still building, the number of spares in the array will be inaccurately reported. This means that you must wait for the array to finish assembling before updating the /etc/mdadm/mdadm.conf
file. If you update the configuration file while the array is still building, the system will have incorrect information about the array state and will be unable to assemble it automatically at boot with the correct name.
You can continue the guide while this process completes.
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
- df -h -x devtmpfs -x tmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1003M 23G 5% /
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf
file.
As mentioned above, before you adjust the configuration, check again to make sure the array has finished assembling. Completing this step before the array is built will prevent the system from assembling the array correctly on reboot:
- cat /proc/mdstat
OutputPersonalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
The output above shows that the rebuild is complete. Now we can automatically scan the active array and append the result to the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab
file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 5 array should now automatically be assembled and mounted each boot.
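The fstab entry above has six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. As an illustrative sketch (a hypothetical helper, not a system utility), parsing an entry into those fields looks like this:

```python
def parse_fstab_line(line):
    """Split an /etc/fstab entry into its six named fields."""
    fields = line.split()
    if len(fields) != 6:
        raise ValueError("fstab entries have exactly six fields")
    keys = ("device", "mountpoint", "fstype", "options", "dump", "fsck_pass")
    return dict(zip(keys, fields))

entry = parse_fstab_line("/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0")
print(entry["mountpoint"], entry["options"])  # /mnt/md0 defaults,nofail,discard
```

The nofail option in the entry is what lets the system boot even if the array is unavailable.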
The RAID 6 array type is implemented by striping data across the available devices. Two components of each stripe are calculated parity blocks. If one or two devices fail, the parity blocks and the remaining blocks can be used to calculate the missing data. The devices that receive the parity blocks are rotated so that each device has a balanced amount of parity information. This is similar to a RAID 5 array, but allows for the failure of two drives.
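The capacity trade-off between these levels is simple to compute: each parity block costs one disk's worth of space across the array. A quick sketch (illustrative only, assuming equal-sized members) of usable capacity per level:

```python
def usable_capacity(level, num_disks, disk_size_gb):
    """Approximate usable space for an mdadm array of equal-sized disks.
    RAID 5 spends one disk on parity, RAID 6 spends two, and RAID 10
    with the default two copies stores every data block twice."""
    if level == 5:
        return (num_disks - 1) * disk_size_gb
    if level == 6:
        return (num_disks - 2) * disk_size_gb
    if level == 10:
        return num_disks * disk_size_gb // 2  # assumes 2 copies (the default)
    raise ValueError("unsupported RAID level")

# Three 100G disks in RAID 5 and four 100G disks in RAID 6 both yield 200G,
# consistent with the ~196G reported by df after filesystem overhead.
print(usable_capacity(5, 3, 100), usable_capacity(6, 4, 100))
```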
To get started, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have four disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sda
, /dev/sdb
, /dev/sdc
, and /dev/sdd
identifiers for this session. These will be the raw components we will use to build the array.
To create a RAID 6 array with these components, pass them in to the mdadm --create
command. You will have to specify the device name you wish to create (/dev/md0
in our case), the RAID level, and the number of devices:
- sudo mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
The mdadm
tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons). This can take some time to complete, but the array can be used during this time. You can monitor the build progress by checking the /proc/mdstat
file:
- cat /proc/mdstat
OutputPersonalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid6 sdd[3] sdc[2] sdb[1] sda[0]
209584128 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
[>....................] resync = 0.3% (353056/104792064) finish=14.7min speed=117685K/sec
unused devices: <none>
As you can see in the first highlighted line, the /dev/md0
device has been created in the RAID 6 configuration using the /dev/sda
, /dev/sdb
, /dev/sdc
and /dev/sdd
devices. The second highlighted line shows the progress on the build. You can continue the guide while this process completes.
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
- df -h -x devtmpfs -x tmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1003M 23G 5% /
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf
file. We can automatically scan the active array and append the result to the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab
file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 6 array should now automatically be assembled and mounted each boot.
The RAID 10 array type is traditionally implemented by creating a striped RAID 0 array composed of sets of RAID 1 arrays. This nested array type gives both redundancy and high performance, at the expense of large amounts of disk space. The mdadm
utility has its own RAID 10 type that provides the same type of benefits with increased flexibility. It is not created by nesting arrays, but has many of the same characteristics and guarantees. We will be using the mdadm
RAID 10 here.
mdadm
style RAID 10 is configurable. By default, two copies of each data block will be stored in what is called the “near” layout. The possible layouts that dictate how each data block is stored are:
- near: The default layout. Copies of each chunk are written consecutively, so the copies sit at roughly the same offset on different devices.
- far: Copies are written to different parts of each device, which improves sequential read performance at some cost to write performance.
- offset: Each stripe is copied, shifted by one device, giving some of the read benefits of the far layout with less impact on writes.
You can find out more about these layouts by checking out the “RAID10” section of this man
page:
- man 4 md
You can also find this man
page online here.
To get started, find the identifiers for the raw disks that you will be using:
- lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
OutputNAME SIZE FSTYPE TYPE MOUNTPOINT
sda 100G disk
sdb 100G disk
sdc 100G disk
sdd 100G disk
vda 25G disk
├─vda1 24.9G ext4 part /
├─vda14 4M part
└─vda15 106M vfat part /boot/efi
As you can see above, we have four disks without a filesystem, each 100G in size. In this example, these devices have been given the /dev/sda
, /dev/sdb
, /dev/sdc
, and /dev/sdd
identifiers for this session. These will be the raw components we will use to build the array.
To create a RAID 10 array with these components, pass them in to the mdadm --create
command. You will have to specify the device name you wish to create (/dev/md0
in our case), the RAID level, and the number of devices.
You can set up two copies using the near layout by not specifying a layout and copy number:
- sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
If you want to use a different layout, or change the number of copies, you will have to use the --layout=
option, which takes a layout and copy identifier. The layouts are n for near, f for far, and o for offset. The number of copies to store is appended afterwards.
For instance, to create an array that has 3 copies in the offset layout, the command would look like this:
- sudo mdadm --create --verbose /dev/md0 --level=10 --layout=o3 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
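The --layout value is simply the layout's single-letter code followed by the copy count. A hypothetical helper (not part of mdadm, shown only to make the format concrete) that assembles the string might look like:

```python
def mdadm_layout(layout, copies):
    """Build the value for mdadm's --layout option for RAID 10.
    layout: 'near', 'far', or 'offset'; copies: number of data copies."""
    codes = {"near": "n", "far": "f", "offset": "o"}
    if layout not in codes:
        raise ValueError("layout must be near, far, or offset")
    if copies < 2:
        raise ValueError("RAID 10 stores at least 2 copies")
    return codes[layout] + str(copies)

print(mdadm_layout("offset", 3))  # o3, as used in the command above
```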
The mdadm
tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons). This can take some time to complete, but the array can be used during this time. You can monitor the build progress by checking the /proc/mdstat
file:
- cat /proc/mdstat
OutputPersonalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid10 sdd[3] sdc[2] sdb[1] sda[0]
209584128 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
[>....................] resync = 1.3% (2832768/209584128) finish=15.8min speed=217905K/sec
unused devices: <none>
As you can see in the first highlighted line, the /dev/md0
device has been created in the RAID 10 configuration using the /dev/sda
, /dev/sdb
, /dev/sdc
and /dev/sdd
devices. The second highlighted area shows the layout that was used for this example (2 copies in the near configuration). The third highlighted area shows the progress on the build. You can continue the guide while this process completes.
Next, create a filesystem on the array:
- sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
- sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
- sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
- df -h -x devtmpfs -x tmpfs
OutputFilesystem Size Used Avail Use% Mounted on
/dev/vda1 25G 1003M 23G 5% /
/dev/md0 196G 61M 186G 1% /mnt/md0
The new filesystem is mounted and accessible.
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf
file. We can automatically scan the active array and append the result to the file by typing:
- sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
- sudo update-initramfs -u
Add the new filesystem mount options to the /etc/fstab
file for automatic mounting at boot:
- echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
Your RAID 10 array should now automatically be assembled and mounted each boot.
In this guide, we demonstrated how to create various types of arrays using Linux’s mdadm
software RAID utility. RAID arrays offer some compelling redundancy and performance enhancements over using multiple disks individually.
Once you have settled on the type of array needed for your environment and created the device, you will need to learn how to perform day-to-day management with mdadm
. Our guide on how to manage RAID arrays with mdadm
can help get you started.
The Apache HTTP server is the most widely-used web server in the world. It provides many powerful features including dynamically loadable modules, robust media support, and extensive integration with other popular software.
In this guide, we’ll explain how to install an Apache web server on your Debian 9 server.
Before you begin this guide, you should have a regular, non-root user with sudo privileges configured on your server. Additionally, you will need to enable a basic firewall to block non-essential ports. You can learn how to configure a regular user account and set up a firewall for your server by following our initial server setup guide for Debian 9.
When you have an account available, log in as your non-root user to begin.
Apache is available within Debian’s default software repositories, making it possible to install it using conventional package management tools.
Let’s begin by updating the local package index to reflect the latest upstream changes:
- sudo apt update
Then, install the apache2
package:
- sudo apt install apache2
After confirming the installation, apt
will install Apache and all required dependencies.
Before testing Apache, it’s necessary to modify the firewall settings to allow outside access to the default web ports. Assuming that you followed the instructions in the prerequisites, you should have a UFW firewall configured to restrict access to your server.
During installation, Apache registers itself with UFW to provide a few application profiles that can be used to enable or disable access to Apache through the firewall.
List the ufw
application profiles by typing:
- sudo ufw app list
You will see a list of the application profiles:
OutputAvailable applications:
AIM
Bonjour
CIFS
. . .
WWW
WWW Cache
WWW Full
WWW Secure
. . .
The Apache profiles begin with WWW:
- WWW: This profile opens only port 80 (normal, unencrypted web traffic)
- WWW Cache: This profile opens only port 8080 (sometimes used for caching and web proxies)
- WWW Full: This profile opens both port 80 and port 443 (TLS/SSL encrypted traffic)
- WWW Secure: This profile opens only port 443
It is recommended that you enable the most restrictive profile that will still allow the traffic you’ve configured. Since we haven’t configured SSL for our server yet in this guide, we will only need to allow traffic on port 80:
- sudo ufw allow 'WWW'
You can verify the change by typing:
- sudo ufw status
You should see HTTP traffic allowed in the displayed output:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
WWW ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
WWW (v6) ALLOW Anywhere (v6)
As you can see, the profile has been activated to allow access to the web server.
At the end of the installation process, Debian 9 starts Apache. The web server should already be up and running.
Check with the systemd
init system to make sure the service is running by typing:
- sudo systemctl status apache2
Output● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 19:21:48 UTC; 13min ago
Main PID: 12849 (apache2)
CGroup: /system.slice/apache2.service
├─12849 /usr/sbin/apache2 -k start
├─12850 /usr/sbin/apache2 -k start
└─12852 /usr/sbin/apache2 -k start
Sep 05 19:21:48 apache systemd[1]: Starting The Apache HTTP Server...
Sep 05 19:21:48 apache systemd[1]: Started The Apache HTTP Server.
As you can see from this output, the service appears to have started successfully. However, the best way to test this is to request a page from Apache.
You can access the default Apache landing page to confirm that the software is running properly through your IP address. If you do not know your server’s IP address, you can get it a few different ways from the command line.
Try typing this at your server’s command prompt:
- hostname -I
You will get back a few addresses separated by spaces. You can try each in your web browser to see if they work.
An alternative is using the curl
tool, which should give you your public IP address as seen from another location on the internet.
First, install curl
using apt
:
- sudo apt install curl
Then, use curl
to retrieve icanhazip.com using IPv4:
- curl -4 icanhazip.com
When you have your server’s IP address, enter it into your browser’s address bar:
http://your_server_ip
You should see the default Debian 9 Apache web page:
This page indicates that Apache is working correctly. It also includes some basic information about important Apache files and directory locations.
Now that you have your web server up and running, let’s go over some basic management commands.
To stop your web server, type:
- sudo systemctl stop apache2
To start the web server when it is stopped, type:
- sudo systemctl start apache2
To stop and then start the service again, type:
- sudo systemctl restart apache2
If you are simply making configuration changes, Apache can often reload without dropping connections. To do this, use this command:
- sudo systemctl reload apache2
By default, Apache is configured to start automatically when the server boots. If this is not what you want, disable this behavior by typing:
- sudo systemctl disable apache2
To re-enable the service to start up at boot, type:
- sudo systemctl enable apache2
Apache should now start automatically when the server boots again.
When using the Apache web server, you can use virtual hosts (similar to server blocks in Nginx) to encapsulate configuration details and host more than one domain from a single server. We will set up a domain called example.com, but you should replace this with your own domain name. To learn more about setting up a domain name with DigitalOcean, see our Introduction to DigitalOcean DNS.
Apache on Debian 9 has one server block enabled by default that is configured to serve documents from the /var/www/html
directory. While this works well for a single site, it can become unwieldy if you are hosting multiple sites. Instead of modifying /var/www/html
, let’s create a directory structure within /var/www
for our example.com site, leaving /var/www/html
in place as the default directory to be served if a client request doesn’t match any other sites.
Create the directory for example.com as follows, using the -p
flag to create any necessary parent directories:
- sudo mkdir -p /var/www/example.com/html
Next, assign ownership of the directory with the $USER
environmental variable:
- sudo chown -R $USER:$USER /var/www/example.com/html
The permissions of your web roots should be correct if you haven’t modified your umask
value, but you can make sure by typing:
- sudo chmod -R 755 /var/www/example.com
Next, create a sample index.html
page using nano
or your favorite editor:
- nano /var/www/example.com/html/index.html
Inside, add the following sample HTML:
<html>
<head>
<title>Welcome to Example.com!</title>
</head>
<body>
<h1>Success! The example.com virtual host is working!</h1>
</body>
</html>
Save and close the file when you are finished.
In order for Apache to serve this content, it’s necessary to create a virtual host file with the correct directives. Instead of modifying the default configuration file located at /etc/apache2/sites-available/000-default.conf
directly, let’s make a new one at /etc/apache2/sites-available/example.com.conf
:
- sudo nano /etc/apache2/sites-available/example.com.conf
Paste in the following configuration block, which is similar to the default, but updated for our new directory and domain name:
<VirtualHost *:80>
ServerAdmin admin@example.com
ServerName example.com
ServerAlias www.example.com
DocumentRoot /var/www/example.com/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Notice that we’ve updated the DocumentRoot
to our new directory and ServerAdmin
to an email that the example.com site administrator can access. We’ve also added two directives: ServerName
, which establishes the base domain that should match for this virtual host definition, and ServerAlias
, which defines further names that should match as if they were the base name.
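Name-based virtual host selection works by comparing each request's Host header against the ServerName and ServerAlias values. A simplified sketch of that selection logic (illustrative only, not Apache's actual implementation) could be:

```python
def select_vhost(host_header, vhosts, default):
    """Pick the first virtual host whose ServerName or ServerAlias
    matches the request's Host header; fall back to the default."""
    host = host_header.lower().split(":")[0]  # strip any :port suffix
    for vhost in vhosts:
        names = [vhost["ServerName"]] + vhost.get("ServerAlias", [])
        if host in (n.lower() for n in names):
            return vhost
    return default

example = {"ServerName": "example.com", "ServerAlias": ["www.example.com"],
           "DocumentRoot": "/var/www/example.com/html"}
default = {"ServerName": "_default_", "DocumentRoot": "/var/www/html"}
print(select_vhost("www.example.com", [example], default)["DocumentRoot"])
```

This is why a request for an unknown hostname still gets served: it falls through to the default site.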
Save and close the file when you are finished.
Let’s enable the file with the a2ensite
tool:
- sudo a2ensite example.com.conf
Disable the default site defined in 000-default.conf
:
- sudo a2dissite 000-default.conf
Next, let’s test for configuration errors:
- sudo apache2ctl configtest
You should see the following output:
OutputSyntax OK
Restart Apache to implement your changes:
- sudo systemctl restart apache2
Apache should now be serving your domain name. You can test this by navigating to http://example.com
, where you should see something like this:
Now that you know how to manage the Apache service itself, you should take a few minutes to familiarize yourself with a few important directories and files.
- /var/www/html: The actual web content, which by default only consists of the default Apache page you saw earlier, is served out of the /var/www/html directory. This can be changed by altering Apache configuration files.
- /etc/apache2: The Apache configuration directory. All of the Apache configuration files reside here.
- /etc/apache2/apache2.conf: The main Apache configuration file. This can be modified to make changes to the Apache global configuration. This file is responsible for loading many of the other files in the configuration directory.
- /etc/apache2/ports.conf: This file specifies the ports that Apache will listen on. By default, Apache listens on port 80 and additionally listens on port 443 when a module providing SSL capabilities is enabled.
- /etc/apache2/sites-available/: The directory where per-site virtual hosts can be stored. Apache will not use the configuration files found in this directory unless they are linked to the sites-enabled directory. Typically, all server block configuration is done in this directory, and then enabled by linking to the other directory with the a2ensite command.
- /etc/apache2/sites-enabled/: The directory where enabled per-site virtual hosts are stored. Typically, these are created by linking to configuration files found in the sites-available directory with the a2ensite command. Apache reads the configuration files and links found in this directory when it starts or reloads to compile a complete configuration.
- /etc/apache2/conf-available/, /etc/apache2/conf-enabled/: These directories have the same relationship as the sites-available and sites-enabled directories, but are used to store configuration fragments that do not belong in a virtual host. Files in the conf-available directory can be enabled with the a2enconf command and disabled with the a2disconf command.
- /etc/apache2/mods-available/, /etc/apache2/mods-enabled/: These directories contain the available and enabled modules, respectively. Files ending in .load contain fragments to load specific modules, while files ending in .conf contain the configuration for those modules. Modules can be enabled and disabled using the a2enmod and a2dismod commands.
- /var/log/apache2/access.log: By default, every request to your web server is recorded in this log file unless Apache is configured to do otherwise.
- /var/log/apache2/error.log: By default, all errors are recorded in this file. The LogLevel directive in the Apache configuration specifies how much detail the error logs will contain.
Now that you have your web server installed, you have many options for the type of content you can serve and the technologies you can use to create a richer experience.
If you’d like to build out a more complete application stack, you can look at this article on how to configure a LAMP stack on Debian 9.
MongoDB is a free and open-source NoSQL document database commonly used in modern web applications.
In this tutorial you will install MongoDB, manage its service, and optionally enable remote access.
To follow this tutorial, you will need one Debian 9 server set up by following this initial server setup tutorial, including a sudo-enabled non-root user and a firewall.
Debian 9’s official package repositories include a slightly out-of-date version of MongoDB, which means we’ll install from the official MongoDB repo instead.
First, we need to add the MongoDB signing key with apt-key add
. We’ll need to make sure the curl
command is installed before doing so:
- sudo apt install curl
Next we download the key and pass it to apt-key add
:
- curl https://www.mongodb.org/static/pgp/server-4.0.asc | sudo apt-key add -
Next we’ll create a source list for the MongoDB repo, so apt
knows where to download from. First open the source list file in a text editor:
- sudo nano /etc/apt/sources.list.d/mongodb-org-4.0.list
This will open a new blank file. Paste in the following:
deb http://repo.mongodb.org/apt/debian stretch/mongodb-org/4.0 main
Save and close the file, then update your package cache:
- sudo apt update
Install the mongodb-org
package to install the server and some supporting tools:
- sudo apt-get install mongodb-org
Finally, enable and start the mongod
service to get your MongoDB database running:
- sudo systemctl enable mongod
- sudo systemctl start mongod
We’ve now installed and started the latest stable version of MongoDB, along with helpful management tools for the MongoDB server.
Next, let’s verify that the server is running and works correctly.
We started the MongoDB service in the previous step; now let’s verify that it is running and that the database is working.
First, check the service’s status:
- sudo systemctl status mongod
You’ll see this output:
Output● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 16:59:56 UTC; 3s ago
Docs: https://docs.mongodb.org/manual
Main PID: 4321 (mongod)
Tasks: 26
CGroup: /system.slice/mongod.service
└─4321 /usr/bin/mongod --config /etc/mongod.conf
According to systemd
, the MongoDB server is up and running.
We can verify this further by actually connecting to the database server and executing a diagnostic command.
Execute this command:
- mongo --eval 'db.runCommand({ connectionStatus: 1 })'
This will output the current database version, the server address and port, and the output of the status command:
OutputMongoDB shell version v4.0.2
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.2
{
"authInfo" : {
"authenticatedUsers" : [ ],
"authenticatedUserRoles" : [ ]
},
"ok" : 1
}
A value of 1
for the ok
field in the response indicates that the server is working properly.
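If you are scripting this health check, the status document shown above can be parsed directly. A hedged sketch using Python's standard json module (assuming you have captured just the JSON document portion of the shell output):

```python
import json

def server_ok(status_json):
    """Return True when a captured connectionStatus document reports ok == 1."""
    status = json.loads(status_json)
    return status.get("ok") == 1

# The document portion of the mongo shell output shown above:
captured = '{"authInfo": {"authenticatedUsers": [], "authenticatedUserRoles": []}, "ok": 1}'
print(server_ok(captured))  # True
```

Note that the mongo shell can emit extended JSON for some commands, so in practice the capture may need cleanup before parsing.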
Next, we’ll look at how to manage the server instance.
MongoDB installs as a systemd service, which means that you can manage it using standard systemd
commands alongside all other system services on Debian.
To verify the status of the service, type:
- sudo systemctl status mongod
You can stop the server anytime by typing:
- sudo systemctl stop mongod
To start the server when it is stopped, type:
- sudo systemctl start mongod
You can also restart the server with a single command:
- sudo systemctl restart mongod
In the previous step we enabled MongoDB to start automatically with the server. If you wish to disable the automatic startup, type:
- sudo systemctl disable mongod
It’s just as easy to enable it again. To do this, use:
- sudo systemctl enable mongod
Next, let’s adjust the firewall settings for our MongoDB installation.
Assuming you have followed the initial server setup tutorial instructions to enable the firewall on your server, the MongoDB server will be inaccessible from the internet.
If you intend to use the MongoDB server only locally with applications running on the same server, this is the recommended and secure setting. However, if you would like to be able to connect to your MongoDB server from the internet, you have to allow the incoming connections in ufw
.
To allow access to MongoDB on its default port 27017
from everywhere, you could use sudo ufw allow 27017
. However, enabling internet access to the MongoDB server on a default installation gives anyone unrestricted access to the database server and its data.
In most cases, MongoDB should be accessed only from certain trusted locations, such as another server hosting an application. To accomplish this task, you can allow access on MongoDB’s default port while specifying the IP address of another server that will be explicitly allowed to connect:
- sudo ufw allow from your_other_server_ip/32 to any port 27017
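The /32 suffix is CIDR notation for a network containing exactly one address, so the rule admits only that single server. You can confirm this with Python's standard ipaddress module (an illustrative check using a placeholder documentation address):

```python
import ipaddress

# A /32 network contains exactly one host, so only that peer matches the rule.
allowed = ipaddress.ip_network("203.0.113.5/32")  # placeholder address

print(allowed.num_addresses)                           # 1
print(ipaddress.ip_address("203.0.113.5") in allowed)  # True
print(ipaddress.ip_address("203.0.113.6") in allowed)  # False
```

Widening the prefix (for example /24) would admit a whole subnet instead of one host.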
You can verify the change in firewall settings with ufw
:
- sudo ufw status
You should see traffic to port 27017
allowed in the output:
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
27017 ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
27017 (v6) ALLOW Anywhere (v6)
If you have decided to allow only a certain IP address to connect to MongoDB server, the IP address of the allowed location will be listed instead of Anywhere
in the output.
You can find more advanced firewall settings for restricting access to services in UFW Essentials: Common Firewall Rules and Commands.
Even though the port is open, MongoDB is currently only listening on the local address 127.0.0.1
. To allow remote connections, add your server’s publicly-routable IP address to the mongod.conf
file.
Open the MongoDB configuration file in your editor:
- sudo nano /etc/mongod.conf
Add your server’s IP address to the bindIP
value:
. . .
# network interfaces
net:
port: 27017
bindIp: 127.0.0.1,your_server_ip
. . .
Be sure to place a comma between the existing IP address and the one you added.
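A malformed bindIp value will keep mongod from starting, so it is worth sanity-checking the list. A small illustrative validator (a hypothetical helper, not part of MongoDB) using the standard ipaddress module; your_server_ip in the config remains a placeholder:

```python
import ipaddress

def valid_bind_ip(value):
    """Check that every comma-separated entry in a bindIp value parses
    as an IP address. Returns the parsed list, or None if any entry is bad."""
    addresses = []
    for part in value.split(","):
        try:
            addresses.append(str(ipaddress.ip_address(part.strip())))
        except ValueError:
            return None
    return addresses

print(valid_bind_ip("127.0.0.1,203.0.113.5"))  # ['127.0.0.1', '203.0.113.5']
print(valid_bind_ip("127.0.0.1,not-an-ip"))    # None
```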
Save the file, exit the editor, and restart MongoDB:
- sudo systemctl restart mongod
MongoDB is now listening for remote connections, but anyone can access it. Follow Part 2 of How to Install and Secure MongoDB on Ubuntu 16.04 to add an administrative user and lock things down further.
You can find more in-depth tutorials on how to configure and use MongoDB in these DigitalOcean community articles. The official MongoDB documentation is also a great resource on the possibilities that MongoDB provides.
Let’s Encrypt is a Certificate Authority (CA) that provides an easy way to obtain and install free TLS/SSL certificates, enabling encrypted HTTPS on web servers. It simplifies the process by providing a software client, Certbot, that attempts to automate most (if not all) of the required steps. Currently, the entire process of obtaining and installing a certificate is fully automated on both Apache and Nginx.
In this tutorial, you will use Certbot to obtain a free SSL certificate for Nginx on Debian 9 and set up your certificate to renew automatically.
This tutorial will use a separate Nginx server block file instead of the default file. We recommend creating new Nginx server block files for each domain because it helps to avoid common mistakes and maintains the default files as a fallback configuration.
To follow this tutorial, you will need:
One Debian 9 server, set up by following this initial server setup for Debian 9 tutorial, along with a sudo non-root user and a firewall.
A fully registered domain name. This tutorial will use example.com throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.
Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.
example.com
pointing to your server’s public IP address.www.example.com
pointing to your server’s public IP address.Nginx installed by following How To Install Nginx on Debian 9. Be sure that you have a server block for your domain. This tutorial will use /etc/nginx/sites-available/example.com
as an example.
The first step to using Let’s Encrypt to obtain an SSL certificate is to install the Certbot software on your server.
Certbot is in very active development, so the Certbot packages provided by Debian with current stable releases tend to be outdated. However, we can obtain a more up-to-date package by enabling the Debian 9 backports repository in /etc/apt/sources.list
, where the apt
package manager looks for package sources. The backports repository includes recompiled packages that can be run without new libraries on stable Debian distributions.
To add the backports repository, first open /etc/apt/sources.list
:
- sudo nano /etc/apt/sources.list
At the bottom of the file, add the following mirrors from the Debian project:
...
deb http://deb.debian.org/debian stretch-backports main contrib non-free
deb-src http://deb.debian.org/debian stretch-backports main contrib non-free
This includes the main
packages, which are Debian Free Software Guidelines (DFSG)-compliant, as well as the non-free
and contrib
components, which are either not DFSG-compliant themselves or include dependencies in this category.
Save and close the file when you are finished.
Update the package list to pick up the new repository’s package information:
- sudo apt update
And finally, install Certbot’s Nginx package with apt
:
- sudo apt install python-certbot-nginx -t stretch-backports
Certbot is now ready to use, but in order for it to configure SSL for Nginx, we need to verify some of Nginx’s configuration.
Certbot needs to be able to find the correct server
block in your Nginx configuration for it to be able to automatically configure SSL. Specifically, it does this by looking for a server_name
directive that matches your requested domain.
If you followed the server block setup step in the Nginx installation tutorial, you should have a server block for your domain at /etc/nginx/sites-available/example.com
with the server_name
directive already set appropriately.
To check, open the server block file for your domain using nano
or your favorite text editor:
- sudo nano /etc/nginx/sites-available/example.com
Find the existing server_name
line. It should look like this:
...
server_name example.com www.example.com;
...
If it does, exit your editor and move on to the next step.
If it doesn’t, update it to match. Then save the file, quit your editor, and verify the syntax of your configuration edits:
- sudo nginx -t
If you get an error, reopen the server block file and check for any typos or missing characters. Once your configuration file syntax is correct, reload Nginx to load the new configuration:
- sudo systemctl reload nginx
Certbot can now find the correct server block and update it.
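If you'd rather verify the server_name without opening an editor, a grep check works too. This sketch uses a temporary file standing in for /etc/nginx/sites-available/example.com so it is runnable as-is; point conf at your real file to use it for real:

```shell
#!/bin/sh
# Sketch: confirm the server_name directive non-interactively.
# A temporary file stands in for /etc/nginx/sites-available/example.com.
conf=$(mktemp)
cat > "$conf" <<'EOF'
server {
    listen 80;
    server_name example.com www.example.com;
}
EOF
if grep -qE '^[[:space:]]*server_name[[:space:]]+example\.com' "$conf"; then
    echo "server_name OK"
else
    echo "server_name missing"
fi
rm -f "$conf"
```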
Next, let’s update the firewall to allow HTTPS traffic.
If you have the ufw firewall enabled, as recommended in the prerequisite guides, you’ll need to adjust the settings to allow for HTTPS traffic.
You can see the current setting by typing:
- sudo ufw status
It will probably look like this, meaning that only HTTP traffic is allowed to the web server:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
To let in HTTPS traffic, allow the Nginx Full profile and delete the redundant Nginx HTTP profile allowance:
- sudo ufw allow 'Nginx Full'
- sudo ufw delete allow 'Nginx HTTP'
Your status should now look like this:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx Full ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx Full (v6) ALLOW Anywhere (v6)
Next, let’s run Certbot and fetch our certificates.
Certbot provides a variety of ways to obtain SSL certificates through plugins. The Nginx plugin will take care of reconfiguring Nginx and reloading the config whenever necessary. To use this plugin, type the following:
- sudo certbot --nginx -d example.com -d www.example.com
This runs certbot with the --nginx plugin, using -d to specify the names we’d like the certificate to be valid for.
If this is your first time running certbot, you will be prompted to enter an email address and agree to the terms of service. After doing so, certbot will communicate with the Let’s Encrypt server, then run a challenge to verify that you control the domain you’re requesting a certificate for.
If that’s successful, certbot will ask how you’d like to configure your HTTPS settings.
OutputPlease choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel):
Select your choice then hit ENTER. The configuration will be updated, and Nginx will reload to pick up the new settings. certbot will wrap up with a message telling you the process was successful and where your certificates are stored:
OutputIMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/example.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/example.com/privkey.pem
Your cert will expire on 2018-07-23. To obtain a new or tweaked
version of this certificate in the future, simply run certbot again
with the "certonly" option. To non-interactively renew *all* of
your certificates, run "certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
Your certificates are downloaded, installed, and loaded. Try reloading your website using https:// and notice your browser’s security indicator. It should indicate that the site is properly secured, usually with a green lock icon. If you test your server using the SSL Labs Server Test, it will get an A grade.
Let’s finish by testing the renewal process.
Let’s Encrypt’s certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The certbot package we installed takes care of this for us by adding a renew script to /etc/cron.d. This script runs twice a day and will automatically renew any certificate that’s within thirty days of expiration.
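You can inspect this expiry window yourself with openssl. The sketch below generates a throwaway self-signed certificate as a stand-in for /etc/letsencrypt/live/example.com/fullchain.pem (so it runs anywhere), then computes the days remaining, which is the same signal the renew job reacts to:

```shell
#!/bin/sh
# Sketch: read a certificate's expiry date and compute days remaining.
# A throwaway self-signed cert stands in for the real Let's Encrypt cert.
cert=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -subj "/CN=example.com" -days 90 -out "$cert" 2>/dev/null
end_date=$(openssl x509 -enddate -noout -in "$cert" | cut -d= -f2)
days_left=$(( ($(date -d "$end_date" +%s) - $(date +%s)) / 86400 ))
echo "days until expiry: $days_left"
rm -f "$cert"
```

Pointing the same two openssl and date lines at your real fullchain.pem tells you how close Certbot is to its thirty-day renewal threshold.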
To test the renewal process, you can do a dry run with certbot:
- sudo certbot renew --dry-run
If you see no errors, you’re all set. When necessary, Certbot will renew your certificates and reload Nginx to pick up the changes. If the automated renewal process ever fails, Let’s Encrypt will send a message to the email you specified, warning you when your certificate is about to expire.
In this tutorial, you installed the Let’s Encrypt client certbot, downloaded SSL certificates for your domain, configured Nginx to use these certificates, and set up automatic certificate renewal. If you have further questions about using Certbot, their documentation is a good place to start.
Virtual Network Computing, or VNC, is a connection system that allows you to use your keyboard and mouse to interact with a graphical desktop environment on a remote server. It makes managing files, software, and settings on a remote server easier for users who are not yet comfortable with the command line.
In this guide, you’ll set up a VNC server on a Debian 9 server and connect to it securely through an SSH tunnel. You’ll use TightVNC, a fast and lightweight remote control package. This choice will ensure that our VNC connection will be smooth and stable even on slower internet connections.
## Prerequisites
To complete this tutorial, you’ll need a Debian 9 server with a non-root user who has sudo access and a firewall.
By default, a Debian 9 server does not come with a graphical desktop environment or a VNC server installed, so we’ll begin by installing those. Specifically, we will install packages for the latest Xfce desktop environment and the TightVNC package available in the official Debian repository.
On your server, update your list of packages:
- sudo apt update
Now install the Xfce desktop environment on your server:
- sudo apt install xfce4 xfce4-goodies
During the installation, you’ll be prompted to select your keyboard layout from a list of possible options. Choose the one that’s appropriate for your language and press Enter. The installation will continue.
Once that installation completes, install the TightVNC server:
- sudo apt install tightvncserver
To complete the VNC server’s initial configuration after installation, use the vncserver command to set up a secure password and create the initial configuration files:
- vncserver
You’ll be prompted to enter and verify a password to access your machine remotely:
OutputYou will require a password to access your desktops.
Password:
Verify:
The password must be between six and eight characters long; passwords longer than eight characters will be truncated automatically.
Once you verify the password, you’ll have the option to create a view-only password. Users who log in with the view-only password will not be able to control the VNC instance with their mouse or keyboard. This is a helpful option if you want to demonstrate something to other people using your VNC server, but it isn’t required.
The process then creates the necessary default configuration files and connection information for the server:
OutputWould you like to enter a view-only password (y/n)? n
xauth: file /home/sammy/.Xauthority does not exist
New 'X' desktop is your_hostname:1
Creating default startup script /home/sammy/.vnc/xstartup
Starting applications specified in /home/sammy/.vnc/xstartup
Log file is /home/sammy/.vnc/your_hostname:1.log
Now let’s configure the VNC server.
## Step 2 — Configuring the VNC Server
The VNC server needs to know which commands to execute when it starts up. Specifically, VNC needs to know which graphical desktop it should connect to.
These commands are located in a configuration file called xstartup in the .vnc folder under your home directory. The startup script was created when you ran the vncserver command in the previous step, but we’ll create our own to launch the Xfce desktop.
When VNC is first set up, it launches a default server instance on port 5901. This port is called a display port, and is referred to by VNC as :1. VNC can launch multiple instances on other display ports, like :2, :3, and so on.
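The mapping between display numbers and TCP ports is simply 5900 plus the display number, which this small sketch makes explicit:

```shell
#!/bin/sh
# Sketch: VNC maps display :N to TCP port 5900 + N, so the default
# display :1 listens on 5901, :2 on 5902, and so on.
for display in 1 2 3; do
    echo ":$display -> port $((5900 + display))"
done
```

Knowing this mapping helps later when building the SSH tunnel, which forwards the port rather than the display number.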
Because we are going to be changing how the VNC server is configured, first stop the VNC server instance that is running on port 5901 with the following command:
- vncserver -kill :1
The output should look like this, although you’ll see a different PID:
OutputKilling Xtightvnc process ID 17648
Before you modify the xstartup file, back up the original:
- mv ~/.vnc/xstartup ~/.vnc/xstartup.bak
Now create a new xstartup file and open it in your text editor:
- nano ~/.vnc/xstartup
Commands in this file are executed automatically whenever you start or restart the VNC server. We need VNC to start our desktop environment if it’s not already started. Add these commands to the file:
~/.vnc/xstartup#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &
The first command in the file, xrdb $HOME/.Xresources, tells VNC’s GUI framework to read the server user’s .Xresources file. .Xresources is where a user can make changes to certain settings of the graphical desktop, like terminal colors, cursor themes, and font rendering. The second command tells the server to launch Xfce, which is where you will find all of the graphical software that you need to comfortably manage your server.
To ensure that the VNC server will be able to use this new startup file properly, we’ll need to make it executable.
- sudo chmod +x ~/.vnc/xstartup
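The two steps above, writing the file and marking it executable, can be combined in one scripted step. This sketch uses a temporary directory in place of ~/.vnc so it is safe to run anywhere:

```shell
#!/bin/sh
# Sketch: write xstartup and mark it executable in one scripted step,
# using a temporary directory in place of ~/.vnc.
vnc_dir=$(mktemp -d)
cat > "$vnc_dir/xstartup" <<'EOF'
#!/bin/bash
xrdb $HOME/.Xresources
startxfce4 &
EOF
chmod +x "$vnc_dir/xstartup"
test -x "$vnc_dir/xstartup" && echo "xstartup is executable"
rm -rf "$vnc_dir"
```

The quoted 'EOF' delimiter keeps $HOME literal in the written file, so it is expanded at VNC startup rather than when the script runs.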
Now, restart the VNC server.
- vncserver
You’ll see output similar to this:
OutputNew 'X' desktop is your_hostname:1
Starting applications specified in /home/sammy/.vnc/xstartup
Log file is /home/sammy/.vnc/your_hostname:1.log
With the configuration in place, let’s connect to the server from our local machine.
VNC itself doesn’t use secure protocols when connecting. We’ll use an SSH tunnel to connect securely to our server, and then tell our VNC client to use that tunnel rather than making a direct connection.
Create an SSH connection on your local computer that securely forwards to the localhost connection for VNC. You can do this via the terminal on Linux or macOS with the following command:
- ssh -L 5901:127.0.0.1:5901 -C -N -l sammy your_server_ip
The -L switch specifies the port bindings. In this case we’re binding port 5901 of the remote connection to port 5901 on your local machine. The -C switch enables compression, while the -N switch tells ssh that we don’t want to execute a remote command. The -l switch specifies the remote login name.
Remember to replace sammy and your_server_ip with the sudo non-root username and IP address of your server.
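The -L argument has the form local_port:destination:destination_port, read from the server's point of view once the tunnel is up. This sketch just splits the spec used above to show what each field is:

```shell
#!/bin/sh
# Sketch: decompose the -L forwarding spec local:dest_host:dest_port.
spec="5901:127.0.0.1:5901"
local_port=${spec%%:*}     # strip everything after the first colon
rest=${spec#*:}
dest_host=${rest%%:*}
dest_port=${rest##*:}
echo "forward localhost:$local_port -> $dest_host:$dest_port"
```

So connections to port 5901 on your local machine are carried over SSH and delivered to 127.0.0.1:5901 on the server, where the VNC instance is listening.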
If you are using a graphical SSH client, like PuTTY, use your_server_ip as the connection IP, and set localhost:5901 as a new forwarded port in the program’s SSH tunnel settings.
Once the tunnel is running, use a VNC client to connect to localhost:5901. You’ll be prompted to authenticate using the password you set in Step 1.
Once you are connected, you’ll see the default Xfce desktop.
Select Use default config to configure your desktop quickly.
You can access files in your home directory with the file manager or from the command line.
On your local machine, press CTRL+C in your terminal to stop the SSH tunnel and return to your prompt. This will disconnect your VNC session as well.
Next let’s set up the VNC server as a service.
Next, we’ll set up the VNC server as a systemd service so we can start, stop, and restart it as needed, like any other service. This will also ensure that VNC starts up when your server reboots.
First, create a new unit file called /etc/systemd/system/vncserver@.service using your favorite text editor:
- sudo nano /etc/systemd/system/vncserver@.service
The @ symbol at the end of the name will let us pass in an argument we can use in the service configuration. We’ll use this to specify the VNC display port we want to use when we manage the service.
Add the following lines to the file. Be sure to change the value of User, Group, WorkingDirectory, and the username in the value of PIDFile to match your username:
/etc/systemd/system/vncserver@.service[Unit]
Description=Start TightVNC server at startup
After=syslog.target network.target
[Service]
Type=forking
User=sammy
Group=sammy
WorkingDirectory=/home/sammy
PIDFile=/home/sammy/.vnc/%H:%i.pid
ExecStartPre=-/usr/bin/vncserver -kill :%i > /dev/null 2>&1
ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :%i
ExecStop=/usr/bin/vncserver -kill :%i
[Install]
WantedBy=multi-user.target
The ExecStartPre command stops VNC if it’s already running. The ExecStart command starts VNC and sets the color depth to 24-bit color with a resolution of 1280x800. You can modify these startup options as well to meet your needs.
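If you later want to change these options for a single display without editing the main unit file, a systemd drop-in override is one way to do it. This is a sketch: the drop-in path shown is the conventional location, not something created earlier in this tutorial, and the 1920x1080 geometry is just an example.

```ini
# Hypothetical drop-in: /etc/systemd/system/vncserver@1.service.d/override.conf
[Service]
# An empty ExecStart= clears the inherited command before redefining it.
ExecStart=
ExecStart=/usr/bin/vncserver -depth 24 -geometry 1920x1080 :%i
```

After adding a drop-in, run sudo systemctl daemon-reload so systemd picks it up.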
Save and close the file.
Next, make the system aware of the new unit file.
- sudo systemctl daemon-reload
Enable the unit file.
- sudo systemctl enable vncserver@1.service
The 1 following the @ sign signifies which display number the service should appear over, in this case the default :1, as discussed in Step 2.
Stop the current instance of the VNC server if it’s still running.
- vncserver -kill :1
Then start it as you would start any other systemd service.
- sudo systemctl start vncserver@1
You can verify that it started with this command:
- sudo systemctl status vncserver@1
If it started correctly, the output should look like this:
Output● vncserver@1.service - Start TightVNC server at startup
Loaded: loaded (/etc/systemd/system/vncserver@.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 16:47:40 UTC; 3s ago
Process: 4977 ExecStart=/usr/bin/vncserver -depth 24 -geometry 1280x800 :1 (code=exited, status=0/SUCCESS)
Process: 4971 ExecStartPre=/usr/bin/vncserver -kill :1 > /dev/null 2>&1 (code=exited, status=0/SUCCESS)
Main PID: 4987 (Xtightvnc)
...
Your VNC server will now be available when you reboot the machine.
Start your SSH tunnel again:
- ssh -L 5901:127.0.0.1:5901 -C -N -l sammy your_server_ip
Then make a new connection using your VNC client software to localhost:5901
to connect to your machine.
You now have a secured VNC server up and running on your Debian 9 server. Now you’ll be able to manage your files, software, and settings with an easy-to-use and familiar graphical interface, and you’ll be able to run graphical software like web browsers remotely.
MySQL is a prominent open source database management system used to store and retrieve data for a wide variety of popular applications. MySQL is the M in the LAMP stack, a commonly used set of open source software that also includes Linux, the Apache web server, and the PHP programming language.
In Debian 9, MariaDB, a community fork of the MySQL project, is packaged as the default MySQL variant. While MariaDB works well in most cases, if you need features found only in Oracle’s MySQL, you can install and use packages from a repository maintained by the MySQL developers.
To install the latest version of MySQL, we’ll add this repository, install the MySQL software itself, secure the install, and finally we’ll test that MySQL is running and responding to commands.
Before starting this tutorial, you will need a Debian 9 server with a non-root user who has sudo privileges and a firewall.
The MySQL developers provide a .deb package that handles configuring and installing the official MySQL software repositories. Once the repositories are set up, we’ll be able to use Debian’s standard apt command to install the software. We’ll download this .deb file with wget and then install it with the dpkg command.
First, load the MySQL download page in your web browser. Find the Download button in the lower-right corner and click through to the next page. This page will prompt you to log in or sign up for an Oracle web account. We can skip that and instead look for the link that says No thanks, just start my download. Right-click the link and select Copy Link Address (this option may be worded differently, depending on your browser).
Now we’re going to download the file. On your server, move to a directory you can write to. Download the file using wget, remembering to paste the address you just copied in place of the highlighted portion below:
- cd /tmp
- wget https://dev.mysql.com/get/mysql-apt-config_0.8.10-1_all.deb
The file should now be downloaded in our current directory. List the files to make sure:
- ls
You should see the filename listed:
Outputmysql-apt-config_0.8.10-1_all.deb
. . .
Now we’re ready to install:
- sudo dpkg -i mysql-apt-config*
dpkg is used to install, remove, and inspect .deb software packages. The -i flag indicates that we’d like to install from the specified file.
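Before installing a manually downloaded package with dpkg -i, it's good practice to verify its SHA-256 checksum against the value published on the download page. This sketch uses a throwaway file in place of the real .deb so it is runnable as-is; with the real package you would paste the published checksum into expected:

```shell
#!/bin/sh
# Sketch: verify a downloaded file's SHA-256 checksum before installing.
# A throwaway file stands in for the .deb; the real reference value
# would come from the MySQL download page.
f=$(mktemp)
printf 'demo contents\n' > "$f"
actual=$(sha256sum "$f" | awk '{print $1}')
expected=$(printf 'demo contents\n' | sha256sum | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH"
fi
rm -f "$f"
```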
During the installation, you’ll be presented with a configuration screen where you can specify which version of MySQL you’d prefer, along with an option to install repositories for other MySQL-related tools. The defaults will add the repository information for the latest stable version of MySQL and nothing else. This is what we want, so use the down arrow to navigate to the Ok menu option and hit ENTER.
The package will now finish adding the repository. Refresh your apt package cache to make the new software packages available:
- sudo apt update
Now that we’ve added the MySQL repositories, we’re ready to install the actual MySQL server software. If you ever need to update the configuration of these repositories, just run sudo dpkg-reconfigure mysql-apt-config, select new options, and then sudo apt-get update to refresh your package cache.
Having added the repository and with our package cache freshly updated, we can now use apt to install the latest MySQL server package:
- sudo apt install mysql-server
apt will look at all available mysql-server packages and determine that the MySQL-provided package is the newest and best candidate. It will then calculate package dependencies and ask you to approve the installation. Type y then ENTER. The software will install.
You will be asked to set a root password during the configuration phase of the installation. Choose and confirm a secure password to continue. Next, a prompt will appear asking you to select a default authentication plugin. Read the display to understand the choices. If you are not sure, choosing Use Strong Password Encryption is safer.
MySQL should be installed and running now. Let’s check using systemctl:
- sudo systemctl status mysql
Output● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-09-05 15:58:21 UTC; 30s ago
Docs: man:mysqld(8)
http://dev.mysql.com/doc/refman/en/using-systemd.html
Main PID: 12805 (mysqld)
Status: "SERVER_OPERATING"
CGroup: /system.slice/mysql.service
└─12805 /usr/sbin/mysqld
Sep 05 15:58:15 mysql1 systemd[1]: Starting MySQL Community Server...
Sep 05 15:58:21 mysql1 systemd[1]: Started MySQL Community Server.
The Active: active (running) line means MySQL is installed and running. Now we’ll make the installation a little more secure.
MySQL comes with a command we can use to perform a few security-related updates on our new install. Let’s run it now:
- mysql_secure_installation
This will ask you for the MySQL root password that you set during installation. Type it in and press ENTER. Now we’ll answer a series of yes or no prompts. Let’s go through them:
First, we are asked about the validate password plugin, a plugin that can automatically enforce certain password strength rules for your MySQL users. Enabling this is a decision you’ll need to make based on your individual security needs. Type y and ENTER to enable it, or just hit ENTER to skip it. If enabled, you will also be prompted to choose a level from 0–2 for how strict the password validation will be. Choose a number and hit ENTER to continue.
Next you’ll be asked if you want to change the root password. Since we just created the password when we installed MySQL, we can safely skip this. Hit ENTER to continue without updating the password.
The rest of the prompts can be answered yes. You will be asked about removing the anonymous MySQL user, disallowing remote root login, removing the test database, and reloading privilege tables to ensure the previous changes take effect properly. These are all a good idea. Type y and hit ENTER for each.
The script will exit after all the prompts are answered. Now our MySQL installation is reasonably secured. Let’s test it again by running a client that connects to the server and returns some information.
mysqladmin is a command line administrative client for MySQL. We’ll use it to connect to the server and output some version and status information:
- mysqladmin -u root -p version
The -u root portion tells mysqladmin to log in as the MySQL root user, -p instructs the client to ask for a password, and version is the actual command we want to run.
The output will let us know what version of the MySQL server is running, its uptime, and some other status information:
Outputmysqladmin Ver 8.0.12 for Linux on x86_64 (MySQL Community Server - GPL)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Server version 8.0.12
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/run/mysqld/mysqld.sock
Uptime: 6 min 42 sec
Threads: 2 Questions: 12 Slow queries: 0 Opens: 123 Flush tables: 2 Open tables: 99 Queries per second avg: 0.029
If you received similar output, congrats! You’ve successfully installed the latest MySQL server and secured it.
You’ve now completed a basic install of the latest version of MySQL, which should work for many popular applications.
Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They’re similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.
For a detailed introduction to the different components of a Docker container, check out The Docker Ecosystem: An Introduction to Common Components.
In this tutorial, you’ll install and use Docker Community Edition (CE) on Debian 9. You’ll install Docker itself, work with containers and images, and push an image to a Docker Repository.
To follow this tutorial, you will need a Debian 9 server set up with a non-root sudo user and a firewall, and a Docker Hub account if you want to push your own images to its registry.
The Docker installation package available in the official Debian repository may not be the latest version. To ensure we get the latest version, we’ll install Docker from the official Docker repository. To do that, we’ll add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.
First, update your existing list of packages:
- sudo apt update
Next, install a few prerequisite packages which let apt use packages over HTTPS:
- sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common
Then add the GPG key for the official Docker repository to your system:
- curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
Add the Docker repository to APT sources:
- sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
Next, update the package database with the Docker packages from the newly added repo:
- sudo apt update
Make sure you are about to install from the Docker repo instead of the default Debian repo:
- apt-cache policy docker-ce
You’ll see output like this, although the version number for Docker may be different:
docker-ce:
Installed: (none)
Candidate: 18.06.1~ce~3-0~debian
Version table:
18.06.1~ce~3-0~debian 500
500 https://download.docker.com/linux/debian stretch/stable amd64 Packages
Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Debian 9 (stretch).
Finally, install Docker:
- sudo apt install docker-ce
Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running:
- sudo systemctl status docker
The output should be similar to the following, showing that the service is active and running:
Output● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago
Docs: https://docs.docker.com
Main PID: 21319 (dockerd)
CGroup: /system.slice/docker.service
├─21319 /usr/bin/dockerd -H fd://
└─21326 docker-containerd --config /var/run/docker/containerd/containerd.toml
Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We’ll explore how to use the docker command later in this tutorial.
By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker’s installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you’ll get output like this:
Outputdocker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See 'docker run --help'.
If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:
- sudo usermod -aG docker ${USER}
To apply the new group membership, log out of the server and back in, or type the following:
- su - ${USER}
You will be prompted to enter your user’s password to continue.
Confirm that your user is now added to the docker group by typing:
- id -nG
Outputsammy sudo docker
If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using:
- sudo usermod -aG docker username
The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.
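In a provisioning script you may want to test for docker group membership non-interactively before deciding whether to prepend sudo. This sketch shows the check; a fixed group list stands in for the output of id -nG so the example behaves the same everywhere:

```shell
#!/bin/sh
# Sketch: script-friendly group membership check. `id -nG` prints the
# current user's groups separated by spaces; grep -qx matches a whole line.
groups_list="sammy sudo docker"   # stand-in for "$(id -nG)"
if echo "$groups_list" | tr ' ' '\n' | grep -qx docker; then
    echo "docker group: yes"
fi
```

Replace the stand-in with "$(id -nG)" to run the check against your real account.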
Let’s explore the docker command next.
Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:
- docker [option] [command] [arguments]
To view all available subcommands, type:
- docker
As of Docker 18, the complete list of available subcommands includes:
Output
attach Attach local standard input, output, and error streams to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save one or more images to a tar archive (streamed to STDOUT by default)
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
version Show the Docker version information
wait Block until one or more containers stop, then print their exit codes
To view the options available to a specific command, type:
- docker docker-subcommand --help
To view system-wide information about Docker, use:
- docker info
Let’s explore some of these commands. We’ll start by working with images.
Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry managed by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you’ll need will have images hosted there.
To check whether you can access and download images from Docker Hub, type:
- docker run hello-world
The output will indicate that Docker is working correctly:
OutputUnable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9db2ca6ccae0: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
Docker was initially unable to find the hello-world image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image and the application within the container executed, displaying the message.
You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:
- docker search ubuntu
The command will search Docker Hub and return a listing of all images whose names match the search string. In this case, the output will be similar to this:
OutputNAME DESCRIPTION STARS OFFICIAL AUTOMATED
ubuntu Ubuntu is a Debian-based Linux operating sys… 8320 [OK]
dorowu/ubuntu-desktop-lxde-vnc Ubuntu with openssh-server and NoVNC 214 [OK]
rastasheep/ubuntu-sshd Dockerized SSH service, built on top of offi… 170 [OK]
consol/ubuntu-xfce-vnc Ubuntu container with "headless" VNC session… 128 [OK]
ansible/ubuntu14.04-ansible Ubuntu 14.04 LTS with ansible 95 [OK]
ubuntu-upstart Upstart is an event-based replacement for th… 88 [OK]
neurodebian NeuroDebian provides neuroscience research s… 53 [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5 ubuntu-16-nginx-php-phpmyadmin-mysql-5 43 [OK]
ubuntu-debootstrap debootstrap --variant=minbase --components=m… 39 [OK]
nuagebec/ubuntu Simple always updated Ubuntu docker images w… 23 [OK]
tutum/ubuntu Simple Ubuntu docker images with SSH access 18
i386/ubuntu Ubuntu is a Debian-based Linux operating sys… 13
1and1internet/ubuntu-16-apache-php-7.0 ubuntu-16-apache-php-7.0 12 [OK]
ppc64le/ubuntu Ubuntu is a Debian-based Linux operating sys… 12
eclipse/ubuntu_jdk8 Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, … 6 [OK]
darksheer/ubuntu Base Ubuntu Image -- Updated hourly 4 [OK]
codenvy/ubuntu_jdk8 Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, … 4 [OK]
1and1internet/ubuntu-16-nginx-php-5.6-wordpress-4 ubuntu-16-nginx-php-5.6-wordpress-4 3 [OK]
pivotaldata/ubuntu A quick freshening-up of the base Ubuntu doc… 2
1and1internet/ubuntu-16-sshd ubuntu-16-sshd 1 [OK]
ossobv/ubuntu Custom ubuntu image from scratch (based on o… 0
smartentry/ubuntu ubuntu with smartentry 0 [OK]
1and1internet/ubuntu-16-healthcheck ubuntu-16-healthcheck 0 [OK]
pivotaldata/ubuntu-gpdb-dev Ubuntu images for GPDB development 0
paasmule/bosh-tools-ubuntu Ubuntu based bosh-cli 0 [OK]
...
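If you want to use this listing in a script, the tabular output can be post-processed with standard tools. The following is a sketch that extracts the image names from the first column with awk; a canned two-line sample in a here-document stands in for live output, so it runs even without Docker installed:

```shell
# Print just the NAME column from docker-search-style output,
# skipping the header row. The sample mirrors the listing above.
awk 'NR > 1 { print $1 }' <<'EOF'
NAME                             DESCRIPTION                                   STARS   OFFICIAL
ubuntu                           Ubuntu is a Debian-based Linux operating sys  8320    [OK]
dorowu/ubuntu-desktop-lxde-vnc   Ubuntu with openssh-server and NoVNC          214
EOF
```

With a live daemon, you would pipe `docker search ubuntu` into the same filter.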
In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you’ve identified the image that you would like to use, you can download it to your computer using the pull
subcommand.
Execute the following command to download the official ubuntu
image to your computer:
- docker pull ubuntu
You’ll see the following output:
OutputUsing default tag: latest
latest: Pulling from library/ubuntu
6b98dfc16071: Pull complete
4001a1209541: Pull complete
6319fc68c576: Pull complete
b24603670dc3: Pull complete
97f170c87c6f: Pull complete
Digest: sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d
Status: Downloaded newer image for ubuntu:latest
After an image has been downloaded, you can then run a container using the downloaded image with the run
subcommand. As you saw with the hello-world
example, if an image has not been downloaded when docker
is executed with the run
subcommand, the Docker client will first download the image, then run a container using it.
To see the images that have been downloaded to your computer, type:
- docker images
The output should look similar to the following:
OutputREPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 16508e5c265d 13 days ago 84.1MB
hello-world latest 2cb0d9787c4d 7 weeks ago 1.85kB
As you’ll see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.
Let’s look at how to run containers in more detail.
The hello-world
container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.
As an example, let’s run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:
- docker run -it ubuntu
Your command prompt should change to reflect the fact that you’re now working inside the container and should take this form:
Outputroot@d9b100f2f636:/#
Note the container ID in the command prompt. In this example, it is d9b100f2f636
. You’ll need that container ID later to identify the container when you want to remove it.
Now you can run any command inside the container. For example, let’s update the package database inside the container. You don’t need to prefix any command with sudo
, because you’re operating inside the container as the root user:
- apt update
Then install any application in it. Let’s install Node.js:
- apt install nodejs
This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:
- node -v
You’ll see the version number displayed in your terminal:
Outputv8.10.0
Any changes you make inside the container only apply to that container.
To exit the container, type exit
at the prompt.
Let’s look at managing the containers on our system next.
After using Docker for a while, you’ll have many active (running) and inactive containers on your computer. To view the active ones, use:
- docker ps
You will see output similar to the following:
OutputCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
In this tutorial, you started two containers: one from the hello-world
image and another from the ubuntu
image. Both containers are no longer running, but they still exist on your system.
To view all containers, both active and inactive, run docker ps
with the -a
switch:
- docker ps -a
You’ll see output similar to this:
OutputCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9b100f2f636 ubuntu "/bin/bash" About an hour ago Exited (0) 8 minutes ago sharp_volhard
01c950718166 hello-world "/hello" About an hour ago Exited (0) About an hour ago festive_williams
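For scripted cleanups, it is often handy to pull out just the IDs of exited containers. With a live daemon, `docker ps -a --filter status=exited -q` produces exactly that list; as a sketch of how the tabular output itself can be filtered, the sample above is fed to awk through a here-document so the snippet runs without Docker:

```shell
# Print the container ID (first column) of every line whose
# status contains "Exited". The sample mirrors docker ps -a output.
awk '/Exited/ { print $1 }' <<'EOF'
d9b100f2f636   ubuntu        "/bin/bash"   About an hour ago   Exited (0) 8 minutes ago       sharp_volhard
01c950718166   hello-world   "/hello"      About an hour ago   Exited (0) About an hour ago   festive_williams
EOF
```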
To view the latest container you created, pass the -l
switch:
- docker ps -l
OutputCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9b100f2f636 ubuntu "/bin/bash" About an hour ago Exited (0) 10 minutes ago sharp_volhard
To start a stopped container, use docker start
, followed by the container ID or the container’s name. Let’s start the Ubuntu-based container with the ID of d9b100f2f636
:
- docker start d9b100f2f636
The container will start, and you can use docker ps
to see its status:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9b100f2f636 ubuntu "/bin/bash" About an hour ago Up 8 seconds sharp_volhard
To stop a running container, use docker stop
, followed by the container ID or name. This time, we’ll use the name that Docker assigned the container, which is sharp_volhard
:
- docker stop sharp_volhard
Once you’ve decided you no longer need a container, remove it with the docker rm
command, again using either the container ID or the name. Use the docker ps -a
command to find the container ID or name for the container associated with the hello-world
image and remove it.
- docker rm festive_williams
You can start a new container and give it a name using the --name
switch. You can also use the --rm
switch to create a container that removes itself when it’s stopped. Run docker run --help
for more information on these options and others.
Containers can be turned into images which you can use to build new containers. Let’s look at how that works.
When you start a container from a Docker image, you can create, modify, and delete files inside it just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm
command, the changes will be lost for good.
This section shows you how to save the state of a container as a new Docker image.
After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container differs from the image you used to create it. To reuse this Node.js container as the basis for new images later, commit the changes to a new Docker image using the following command.
- docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name
The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id
is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository
is usually your Docker Hub username.
For example, for the user sammy, with the container ID of d9b100f2f636
, the command would be:
- docker commit -m "added Node.js" -a "sammy" d9b100f2f636 sammy/ubuntu-nodejs
When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you’ll learn how to push an image to a Docker registry like Docker Hub so others can access it.
Listing the Docker images again will show the new image, as well as the old one that it was derived from:
- docker images
You’ll see output like this:
OutputREPOSITORY TAG IMAGE ID CREATED SIZE
sammy/ubuntu-nodejs latest 7c1f35226ca6 7 seconds ago 179MB
ubuntu latest 113a43faa138 4 weeks ago 81.2MB
hello-world latest e38bc07ac18e 2 months ago 1.85kB
In this example, ubuntu-nodejs
is the new image, which was derived from the existing ubuntu
image from Docker Hub. The size difference reflects the changes that were made; in this example, the change was installing Node.js. The next time you need to run a container using Ubuntu with Node.js pre-installed, you can use the new image.
You can also build images from a Dockerfile
, which lets you automate the installation of software in a new image. However, that’s outside the scope of this tutorial.
Now let’s share the new image with others so they can create containers from it.
The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.
This section shows you how to push a Docker image to Docker Hub. To learn how to create your own private Docker registry, check out How To Set Up a Private Docker Registry on Ubuntu 14.04.
To push your image, first log into Docker Hub.
- docker login -u docker-registry-username
You’ll be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.
Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:
- docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs
Then you may push your own image using:
- docker push docker-registry-username/docker-image-name
To push the ubuntu-nodejs
image to the sammy repository, the command would be:
- docker push sammy/ubuntu-nodejs
The process may take some time to complete as it uploads the images, but when completed, the output will look like this:
OutputThe push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Pushed
5f70bf18a086: Pushed
a3b5c80a4eba: Pushed
7f18b442972b: Pushed
3ce512daaf78: Pushed
7aae4540b42d: Pushed
...
After pushing an image to a registry, it should be listed on your account’s dashboard, like the one shown in the image below.
If a push attempt results in an error of this sort, then you likely did not log in:
OutputThe push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required
Log in with docker login
and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.
You can now use docker pull sammy/ubuntu-nodejs
to pull the image to a new machine and use it to run a new container.
In this tutorial you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub. Now that you know the basics, explore the other Docker tutorials in the DigitalOcean Community.
Accurate timekeeping has become a critical component of modern software deployments. Whether it’s making sure logs are recorded in the right order or database updates are applied correctly, out-of-sync time can cause errors, data corruption, and other hard-to-debug issues.
Debian 9 has time synchronization built in and activated by default using the standard ntpd time server, provided by the ntp
package. In this article we will look at some basic time-related commands, verify that ntpd is active and connected to peers, and learn how to activate the alternate systemd-timesyncd network time service.
Before starting this tutorial, you will need a Debian 9 server with a non-root, sudo-enabled user, as described in this Debian 9 server setup tutorial.
The most basic command for finding out the time on your server is date
. Any user can type this command to print out the date and time:
- date
OutputTue Sep 4 17:51:49 UTC 2018
Most often your server will default to the UTC time zone, as highlighted in the above output. UTC is Coordinated Universal Time, the time at zero degrees longitude. Consistently using Universal Time reduces confusion when your infrastructure spans multiple time zones.
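Because date respects the configured time zone, scripts that record timestamps often force UTC explicitly. A minimal example using the standard -u flag and a format string:

```shell
# Print the current time in UTC in ISO 8601 form.
# -u ignores the configured time zone; the + string controls the format.
date -u +"%Y-%m-%dT%H:%M:%SZ"
```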
If you have different requirements and need to change the time zone, you can use the timedatectl
command to do so.
First, list the available time zones:
- timedatectl list-timezones
A list of time zones will print to your screen. You can press SPACE
to page down, and b
to page up. Once you find the correct time zone, make note of it then type q
to exit the list.
Now set the time zone with timedatectl set-timezone
, making sure to replace the highlighted portion below with the time zone you found in the list. You’ll need to use sudo
with timedatectl
to make this change:
- sudo timedatectl set-timezone America/New_York
You can verify your changes by running date
again:
- date
OutputTue Sep 4 13:52:57 EDT 2018
The time zone abbreviation should reflect the newly chosen value.
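Note that the zone set with timedatectl is system-wide. For a single command, you can instead override it with the standard TZ environment variable, which leaves the system setting untouched:

```shell
# Run one command in a different time zone without changing the system zone.
TZ=America/New_York date
TZ=UTC date +%Z    # prints the zone abbreviation, UTC here
```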
Now that we know how to check the clock and set time zones, let’s make sure our time is being synchronized properly.
By default, Debian 9 runs the standard ntpd server to keep your system time synchronized with a pool of external time servers. We can check that it’s running with the systemctl
command:
- sudo systemctl status ntp
Output● ntp.service - LSB: Start NTP daemon
Loaded: loaded (/etc/init.d/ntp; generated; vendor preset: enabled)
Active: active (running) since Tue 2018-09-04 15:07:03 EDT; 30min ago
Docs: man:systemd-sysv-generator(8)
Process: 876 ExecStart=/etc/init.d/ntp start (code=exited, status=0/SUCCESS)
Tasks: 2 (limit: 4915)
CGroup: /system.slice/ntp.service
└─904 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 105:109
. . .
The active (running)
status indicates that ntpd started up properly. To get more information about the status of ntpd we can use the ntpq
command:
- ntpq -p
Output remote refid st t when poll reach delay offset jitter
==============================================================================
0.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
1.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
2.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
3.debian.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.000
-eterna.binary.n 204.9.54.119 2 u 240 256 377 35.392 0.142 0.211
-static-96-244-9 192.168.10.254 2 u 60 256 377 10.242 1.297 2.412
+minime.fdf.net 83.157.230.212 3 u 99 256 377 24.042 0.128 0.250
*t1.time.bf1.yah 98.139.133.62 2 u 31 256 377 11.112 0.621 0.186
+x.ns.gin.ntt.ne 249.224.99.213 2 u 108 256 377 1.290 -0.073 0.132
-ord1.m-d.net 142.66.101.13 2 u 473 512 377 19.930 -1.764 0.293
ntpq
is a query tool for ntpd. The -p
flag asks for information about the NTP servers (or peers) ntpd is connected to. Your output will be slightly different, but should list the default Debian pool servers plus a few others. Bear in mind that it can take a few minutes for ntpd to establish connections.
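The first character of each peer line is a tally code: * marks the peer currently selected as the synchronization source, + marks good candidates, and - marks outliers that have been discarded. As a sketch, the selected peer can be pulled out of saved ntpq output with awk; a canned sample is used here so the snippet runs without ntpd:

```shell
# Print the peer ntpd has selected for synchronization (tally code '*').
awk '/^\*/ { print $1 }' <<'EOF'
+minime.fdf.net  83.157.230.212  3 u   99  256  377  24.042   0.128  0.250
*t1.time.bf1.yah 98.139.133.62   2 u   31  256  377  11.112   0.621  0.186
EOF
```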
It is possible to use systemd’s built-in timesyncd component to replace ntpd. timesyncd is a lighter-weight alternative to ntpd that is more integrated with systemd. Note however that it doesn’t support running as a time server, and it is slightly less sophisticated in the techniques it uses to keep your system time in sync. If you are running complex real-time distributed systems, you may want to stick with ntpd.
To use timesyncd, we must first uninstall ntpd:
- sudo apt purge ntp
Then, start up the timesyncd service:
- sudo systemctl start systemd-timesyncd
Finally, check the status of the service to make sure it’s running:
- sudo systemctl status systemd-timesyncd
Output● systemd-timesyncd.service - Network Time Synchronization
Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
└─disable-with-time-daemon.conf
Active: active (running) since Tue 2018-09-04 16:14:23 EDT; 1s ago
Docs: man:systemd-timesyncd.service(8)
Main PID: 3399 (systemd-timesyn)
Status: "Synchronized to time server 198.60.22.240:123 (0.debian.pool.ntp.org)."
Tasks: 2 (limit: 4915)
CGroup: /system.slice/systemd-timesyncd.service
└─3399 /lib/systemd/systemd-timesyncd
We can use timedatectl
to print out systemd’s current understanding of the time:
- timedatectl
Output Local time: Tue 2018-09-04 16:15:34 EDT
Universal time: Tue 2018-09-04 20:15:34 UTC
RTC time: Tue 2018-09-04 20:15:33
Time zone: America/New_York (EDT, -0400)
Network time on: yes
NTP synchronized: yes
RTC in local TZ: no
This prints out the local time, universal time (which may be the same as local time, if you didn’t switch from the UTC time zone), and some network time status information. Network time on: yes
means that timesyncd is enabled, and NTP synchronized: yes
indicates that the time has been successfully synced.
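If you need one of these fields in a script, the label: value layout splits cleanly on ": ". A sketch using a canned sample of the output above:

```shell
# Extract the value of the "NTP synchronized" field from
# timedatectl-style output by splitting each line on ": ".
awk -F': ' '/NTP synchronized/ { print $2 }' <<'EOF'
 Network time on: yes
NTP synchronized: yes
 RTC in local TZ: no
EOF
```

With a live system, you would pipe `timedatectl` into the same filter.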
In this article we’ve shown how to view the system time, change time zones, work with ntpd, and switch to systemd’s timesyncd service. If you have more sophisticated timekeeping needs than what we’ve covered here, you might refer to the official NTP documentation, and also take a look at the NTP Pool Project, a global group of volunteers providing much of the world’s NTP infrastructure.
Nginx is one of the most popular web servers in the world and responsible for hosting some of the largest and highest-traffic sites on the internet. It is more resource-friendly than Apache in most cases and can be used as a web server or reverse proxy.
In this guide, we’ll discuss how to install Nginx on your Debian 9 server.
Before you begin this guide, you should have a regular, non-root user with sudo privileges configured on your server and an active firewall. You can learn how to set these up by following our initial server setup guide for Debian 9.
When you have an account available, log in as your non-root user to begin.
Because Nginx is available in Debian’s default repositories, it is possible to install it from these repositories using the apt
packaging system.
Since this is our first interaction with the apt
packaging system in this session, let’s also update our local package index so that we have access to the most recent package listings. Afterwards, we can install nginx
:
- sudo apt update
- sudo apt install nginx
After accepting the procedure, apt
will install Nginx and any required dependencies to your server.
Before testing Nginx, the firewall software needs to be adjusted to allow access to the service.
List the application configurations that ufw
knows how to work with by typing:
- sudo ufw app list
You should get a listing of the application profiles:
OutputAvailable applications:
...
Nginx Full
Nginx HTTP
Nginx HTTPS
...
As you can see, there are three profiles available for Nginx:
Nginx Full: This profile opens both port 80 (normal, unencrypted web traffic) and port 443 (TLS/SSL encrypted traffic)
Nginx HTTP: This profile opens only port 80 (normal, unencrypted web traffic)
Nginx HTTPS: This profile opens only port 443 (TLS/SSL encrypted traffic)
It is recommended that you enable the most restrictive profile that will still allow the traffic you’ve configured. Since we haven’t configured SSL for our server yet in this guide, we will only need to allow traffic on port 80.
You can enable this by typing:
- sudo ufw allow 'Nginx HTTP'
You can verify the change by typing:
- sudo ufw status
You should see HTTP traffic allowed in the displayed output:
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Nginx HTTP ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Nginx HTTP (v6) ALLOW Anywhere (v6)
At the end of the installation process, Debian 9 starts Nginx. The web server should already be up and running.
We can check with the systemd
init system to make sure the service is running by typing:
- systemctl status nginx
Output● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-09-04 18:15:57 UTC; 3min 28s ago
Docs: man:nginx(8)
Process: 2402 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 2399 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 2404 (nginx)
Tasks: 2 (limit: 4915)
CGroup: /system.slice/nginx.service
├─2404 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─2405 nginx: worker process
As you can see above, the service appears to have started successfully. However, the best way to test this is to actually request a page from Nginx.
You can access the default Nginx landing page to confirm that the software is running properly by navigating to your server’s IP address. If you do not know your server’s IP address, try typing this at your server’s command prompt:
- ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
You will get back a few lines. You can try each in your web browser to see if they work.
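To unpack that pipeline: grep keeps the lines containing inet, awk prints the second field (the address with its CIDR prefix), and sed strips the /prefix suffix. You can trace it with a canned ip addr line; 203.0.113.5 is a documentation address used here as a stand-in:

```shell
# Walk one sample "ip addr" line through the same pipeline used above.
echo "    inet 203.0.113.5/24 brd 203.0.113.255 scope global eth0" \
  | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
```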
When you have your server’s IP address, enter it into your browser’s address bar:
http://your_server_ip
You should see the default Nginx landing page:
This page is included with Nginx to show you that the server is running correctly.
Now that you have your web server up and running, let’s review some basic management commands.
To stop your web server, type:
- sudo systemctl stop nginx
To start the web server when it is stopped, type:
- sudo systemctl start nginx
To stop and then start the service again, type:
- sudo systemctl restart nginx
If you are simply making configuration changes, Nginx can often reload without dropping connections. To do this, type:
- sudo systemctl reload nginx
By default, Nginx is configured to start automatically when the server boots. If this is not what you want, you can disable this behavior by typing:
- sudo systemctl disable nginx
To re-enable the service to start up at boot, you can type:
- sudo systemctl enable nginx
When using the Nginx web server, server blocks (similar to virtual hosts in Apache) can be used to encapsulate configuration details and host more than one domain from a single server. We will set up a domain called example.com, but you should replace this with your own domain name. To learn more about setting up a domain name with DigitalOcean, see our introduction to DigitalOcean DNS.
Nginx on Debian 9 has one server block enabled by default that is configured to serve documents out of a directory at /var/www/html
. While this works well for a single site, it can become unwieldy if you are hosting multiple sites. Instead of modifying /var/www/html
, let’s create a directory structure within /var/www
for our example.com site, leaving /var/www/html
in place as the default directory to be served if a client request doesn’t match any other sites.
Create the directory for example.com as follows, using the -p
flag to create any necessary parent directories:
- sudo mkdir -p /var/www/example.com/html
Next, assign ownership of the directory with the $USER
environment variable:
- sudo chown -R $USER:$USER /var/www/example.com/html
The permissions of your web roots should be correct if you haven’t modified your umask
value, but you can make sure by typing:
- sudo chmod -R 755 /var/www/example.com
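A 755 mode means the owner can read, write, and enter the directory, while group and others can read and enter it but not write. As a quick sanity check, stat can read a mode back; this sketch uses a throwaway directory so no sudo is needed:

```shell
# Create a scratch directory, apply mode 755, and read the mode back.
d=$(mktemp -d)
chmod 755 "$d"
stat -c '%a' "$d"    # prints 755
rmdir "$d"
```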
Next, create a sample index.html
page using nano
or your favorite editor:
- nano /var/www/example.com/html/index.html
Inside, add the following sample HTML:
<html>
<head>
<title>Welcome to Example.com!</title>
</head>
<body>
<h1>Success! The example.com server block is working!</h1>
</body>
</html>
Save and close the file when you are finished.
In order for Nginx to serve this content, it’s necessary to create a server block with the correct directives. Instead of modifying the default configuration file directly, let’s make a new one at /etc/nginx/sites-available/example.com
:
- sudo nano /etc/nginx/sites-available/example.com
Paste in the following configuration block, which is similar to the default, but updated for our new directory and domain name:
server {
listen 80;
listen [::]:80;
root /var/www/example.com/html;
index index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ =404;
}
}
Notice that we’ve updated the root
configuration to our new directory, and the server_name
to our domain name.
Next, let’s enable the file by creating a link from it to the sites-enabled
directory, which Nginx reads from during startup:
- sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
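The enable step creates nothing but a symbolic link: sites-enabled ends up holding a pointer back into sites-available. The mechanics can be sketched with temporary directories (hypothetical paths, no root required):

```shell
# Mimic the sites-available -> sites-enabled link with scratch directories.
avail=$(mktemp -d)
enabled=$(mktemp -d)
touch "$avail/example.com"            # stands in for the site config file
ln -s "$avail/example.com" "$enabled/example.com"
readlink "$enabled/example.com"       # prints the sites-available path
```

Deleting the link later (as `sudo rm /etc/nginx/sites-enabled/example.com` would) disables the site without touching the original configuration file.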
Two server blocks are now enabled and configured to respond to requests based on their listen
and server_name
directives (you can read more about how Nginx processes these directives here):
example.com: Will respond to requests for example.com and www.example.com.
default: Will respond to any requests on port 80 that do not match the other two blocks.
To avoid a possible hash bucket memory problem that can arise from adding additional server names, it is necessary to adjust a single value in the /etc/nginx/nginx.conf file. Open the file:
- sudo nano /etc/nginx/nginx.conf
Find the server_names_hash_bucket_size
directive and remove the #
symbol to uncomment the line:
...
http {
...
server_names_hash_bucket_size 64;
...
}
...
Save and close the file when you are finished.
Next, test to make sure that there are no syntax errors in any of your Nginx files:
- sudo nginx -t
If there aren’t any problems, you will see the following output:
Outputnginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Once your configuration test passes, restart Nginx to enable your changes:
- sudo systemctl restart nginx
Nginx should now be serving your domain name. You can test this by navigating to http://example.com
, where you should see something like this:
Now that you know how to manage the Nginx service itself, you should take a few minutes to familiarize yourself with a few important directories and files.
/var/www/html
: The actual web content, which by default only consists of the default Nginx page you saw earlier, is served out of the /var/www/html
directory. This can be changed by altering Nginx configuration files./etc/nginx
: The Nginx configuration directory. All of the Nginx configuration files reside here./etc/nginx/nginx.conf
: The main Nginx configuration file. This can be modified to make changes to the Nginx global configuration./etc/nginx/sites-available/
: The directory where per-site server blocks can be stored. Nginx will not use the configuration files found in this directory unless they are linked to the sites-enabled
directory. Typically, all server block configuration is done in this directory, and then enabled by linking to the other directory./etc/nginx/sites-enabled/
: The directory where enabled per-site server blocks are stored. Typically, these are created by linking to configuration files found in the sites-available
directory./etc/nginx/snippets
: This directory contains configuration fragments that can be included elsewhere in the Nginx configuration. Potentially repeatable configuration segments are good candidates for refactoring into snippets./var/log/nginx/access.log
: Every request to your web server is recorded in this log file unless Nginx is configured to do otherwise./var/log/nginx/error.log
: Any Nginx errors will be recorded in this log.Now that you have your web server installed, you have many options for the type of content you can serve and the technologies you can use to create a richer experience for your users.
MariaDB is an open-source database management system, commonly installed in place of MySQL as part of the popular LAMP (Linux, Apache, MySQL, PHP/Python/Perl) stack. It uses a relational database model and SQL (Structured Query Language) to manage its data. MariaDB was forked from MySQL in 2009 due to licensing concerns.
The short version of the installation is simple: update your package index, install the mariadb-server
package (which points to MariaDB), and then run the included security script.
- sudo apt update
- sudo apt install mariadb-server
- sudo mysql_secure_installation
This tutorial will explain how to install MariaDB version 10.1 on a Debian 9 server.
To follow this tutorial, you will need:
One Debian 9 server with a non-root user who has sudo privileges and a firewall.
On Debian 9, MariaDB version 10.1 is included in the APT package repositories by default. It is marked as the default MySQL variant by the Debian MySQL/MariaDB packaging team.
To install it, update the package index on your server with apt
:
- sudo apt update
Then install the package:
- sudo apt install mariadb-server
This will install MariaDB, but will not prompt you to set a password or make any other configuration changes. Because this leaves your installation of MariaDB insecure, we will address this next.
For fresh installations, you’ll want to run the included security script. This changes some of the less secure default options for things like remote root logins and sample users.
Run the security script:
- sudo mysql_secure_installation
This will take you through a series of prompts where you can make some changes to your MariaDB installation’s security options. The first prompt will ask you to enter the current database root password. Since we have not set one up yet, press ENTER
to indicate “none”.
The next prompt asks you whether you’d like to set up a database root password. Type N
and then press ENTER
. In Debian, the root account for MariaDB is tied closely to automated system maintenance, so we should not change the configured authentication methods for that account. Doing so would make it possible for a package update to break the database system by removing access to the administrative account. Later, we will cover how to optionally set up an additional administrative account for password access if socket authentication is not appropriate for your use case.
From there, you can press Y
and then ENTER
to accept the defaults for all the subsequent questions. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MariaDB immediately respects the changes you have made.
In Debian systems running MariaDB 10.1, the root MariaDB user is set to authenticate using the unix_socket
plugin by default rather than with a password. This allows for some greater security and usability in many cases, but it can also complicate things when you need to allow an external program (e.g., phpMyAdmin) administrative rights.
Because the server uses the root account for tasks like log rotation and starting and stopping the server, it is best not to change the root account’s authentication details. Changing the account credentials in the /etc/mysql/debian.cnf
may work initially, but package updates could potentially overwrite those changes. Instead of modifying the root account, the package maintainers recommend creating a separate administrative account if you need to set up password-based access.
To do so, we will be creating a new account called admin
with the same capabilities as the root account, but configured for password authentication. To do this, open up the MariaDB prompt from your terminal:
- sudo mysql
Now, we can create a new user with root privileges and password-based access. Change the username and password to match your preferences:
- GRANT ALL ON *.* TO 'admin'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
Flush the privileges to ensure that they are saved and available in the current session:
- FLUSH PRIVILEGES;
Following this, exit the MariaDB shell:
- exit
Finally, let’s test the MariaDB installation.
When installed from the default repositories, MariaDB should start running automatically. To test this, check its status.
- sudo systemctl status mariadb
You’ll see output similar to the following:
● mariadb.service - MariaDB database server
Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-09-04 16:22:47 UTC; 2h 35min ago
Process: 15596 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSIT
Process: 15594 ExecStartPost=/etc/mysql/debian-start (code=exited, status=0/SUCCESS)
Process: 15478 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= ||
Process: 15474 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITI
Process: 15471 ExecStartPre=/usr/bin/install -m 755 -o mysql -g root -d /var/run/mysql
Main PID: 15567 (mysqld)
Status: "Taking your SQL requests now..."
Tasks: 27 (limit: 4915)
CGroup: /system.slice/mariadb.service
└─15567 /usr/sbin/mysqld
Sep 04 16:22:45 deb-mysql1 systemd[1]: Starting MariaDB database server...
Sep 04 16:22:46 deb-mysql1 mysqld[15567]: 2018-09-04 16:22:46 140183374869056 [Note] /usr/sbin/mysqld (mysqld 10.1.26-MariaDB-0+deb9u1) starting as process 15567 ...
Sep 04 16:22:47 deb-mysql1 systemd[1]: Started MariaDB database server.
If MariaDB isn’t running, you can start it with sudo systemctl start mariadb.
For an additional check, you can try connecting to the database using the mysqladmin
tool, which is a client that lets you run administrative commands. For example, this command says to connect to MariaDB as root and return the version using the Unix socket:
- sudo mysqladmin version
You should see output similar to this:
Outputmysqladmin Ver 9.1 Distrib 10.1.26-MariaDB, for debian-linux-gnu on x86_64
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Server version 10.1.26-MariaDB-0+deb9u1
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/run/mysqld/mysqld.sock
Uptime: 2 hours 44 min 46 sec
Threads: 1 Questions: 36 Slow queries: 0 Opens: 21 Flush tables: 1 Open tables: 15 Queries per second avg: 0.003
If you configured a separate administrative user with password authentication, you could perform the same operation by typing:
- mysqladmin -u admin -p version
This means MariaDB is up and running and that your user is able to authenticate successfully.
You now have a basic MariaDB setup installed on your server. Here are a few examples of next steps you can take:
A “LAMP” stack is a group of open source software that is typically installed together to enable a server to host dynamic websites and web apps. The term is an acronym for its four components: the Linux operating system, the Apache web server, a MariaDB database for storing site data, and PHP for processing dynamic content.
In this guide, we will install a LAMP stack on a Debian 9 server.
In order to complete this tutorial, you will need to have a Debian 9 server with a non-root sudo
-enabled user account and a basic firewall. This can be configured using our initial server setup guide for Debian 9.
The Apache web server is among the most popular web servers in the world. It’s well-documented and has been in wide use for much of the history of the web, which makes it a great default choice for hosting a website.
Install Apache using Debian’s package manager, apt
:
- sudo apt update
- sudo apt install apache2
Since this is a sudo
command, these operations are executed with root privileges. It will ask you for your regular user’s password to verify your intentions.
Once you’ve entered your password, apt
will tell you which packages it plans to install and how much extra disk space they’ll take up. Press Y
and hit ENTER
to continue, and the installation will proceed.
Next, assuming that you have followed the initial server setup instructions by installing and enabling the UFW firewall, make sure that your firewall allows HTTP and HTTPS traffic.
When installed on Debian 9, UFW comes loaded with app profiles which you can use to tweak your firewall settings. View the full list of application profiles by running:
- sudo ufw app list
The WWW
profiles are used to manage ports used by web servers:
OutputAvailable applications:
. . .
WWW
WWW Cache
WWW Full
WWW Secure
. . .
If you inspect the WWW Full
profile, it shows that it enables traffic to ports 80
and 443
:
- sudo ufw app info "WWW Full"
OutputProfile: WWW Full
Title: Web Server (HTTP,HTTPS)
Description: Web Server (HTTP,HTTPS)
Ports:
80,443/tcp
Allow incoming HTTP and HTTPS traffic for this profile:
- sudo ufw allow in "WWW Full"
You can do a spot check right away to verify that everything went as planned by visiting your server’s public IP address in your web browser:
http://your_server_ip
You will see the default Debian 9 Apache web page, which is there for informational and testing purposes. It should look something like this:
If you see this page, then your web server is now correctly installed and accessible through your firewall.
If you do not know what your server’s public IP address is, there are a number of ways you can find it. Usually, this is the address you use to connect to your server through SSH.
There are a few different ways to do this from the command line. First, you could use the iproute2
tools to get your IP address by typing this:
- ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
This will return two or three lines. All of them are correct addresses for your server, but the outside world may only be able to reach one of them, so feel free to try each one.
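To see what each stage of that pipeline does, you can run it against canned ip addr output. This is a sketch; the addresses below are hypothetical documentation addresses, not real ones:

```shell
# Stand-in for `ip addr show eth0` output.
sample='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    inet 203.0.113.5/24 brd 203.0.113.255 scope global eth0
    inet6 fe80::1/64 scope link'

# grep keeps the address lines, awk takes the second field, and sed
# strips the /prefix-length suffix, leaving one bare address per line.
echo "$sample" | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'
```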
An alternative method is to use the curl
utility to contact an outside party to tell you how it sees your server. This is done by asking a specific server what your IP address is:
- sudo apt install curl
- curl http://icanhazip.com
Regardless of the method you use to get your IP address, type it into your web browser’s address bar to view the default Apache page.
Now that you have your web server up and running, it is time to install MariaDB. MariaDB is a database management system. Basically, it will organize and provide access to databases where your site can store information.
MariaDB is a community-built fork of MySQL. In Debian 9, the default MySQL server is MariaDB 10.1, and the mysql-server
package, which is normally used to install MySQL, is a transitional package that will actually install MariaDB. However, it’s recommended that you install MariaDB using the program’s actual package, mariadb-server.
Again, use apt
to acquire and install this software:
- sudo apt install mariadb-server
Note: In this case, you do not have to run sudo apt update
prior to the command. This is because you recently ran it in the commands above to install Apache, and the package index on your computer should already be up-to-date.
This command, too, will show you a list of the packages that will be installed, along with the amount of disk space they’ll take up. Enter Y
to continue.
When the installation is complete, run a simple security script that comes pre-installed with MariaDB which will remove some insecure default settings and lock down access to your database system. Start the interactive script by running:
- sudo mysql_secure_installation
This will take you through a series of prompts where you can make some changes to your MariaDB installation’s security options. The first prompt will ask you to enter the current database root password. This is an administrative account in MariaDB that has increased privileges. Think of it as being similar to the root account for the server itself (although the one you are configuring now is a MariaDB-specific account). Because you just installed MariaDB and haven’t made any configuration changes yet, this password will be blank, so just press ENTER
at the prompt.
The next prompt asks you whether you’d like to set up a database root password. Type N
and then press ENTER
. In Debian, the root account for MariaDB is tied closely to automated system maintenance, so we should not change the configured authentication methods for that account. Doing so would make it possible for a package update to break the database system by removing access to the administrative account. Later, we will cover how to optionally set up an additional administrative account for password access if socket authentication is not appropriate for your use case.
From there, you can press Y
and then ENTER
to accept the defaults for all the subsequent questions. This will remove some anonymous users and the test database, disable remote root logins, and load these new rules so that MariaDB immediately respects the changes you have made.
In new installs on Debian systems, the root MariaDB user is set to authenticate using the unix_socket
plugin by default rather than with a password. This allows for some greater security and usability in many cases, but it can also complicate things when you need to allow an external program (e.g., phpMyAdmin) administrative rights.
Because the server uses the root account for tasks like log rotation and starting and stopping the server, it is best not to change the root account’s authentication details. Changing the account credentials in the /etc/mysql/debian.cnf
may work initially, but package updates could potentially overwrite those changes. Instead of modifying the root account, the package maintainers recommend creating a separate administrative account if you need to set up password-based access.
To do so, we will be creating a new account called admin
with the same capabilities as the root account, but configured for password authentication. To do this, open up the MariaDB prompt from your terminal:
- sudo mariadb
Now, we can create a new user with root privileges and password-based access. Change the username and password to match your preferences:
- GRANT ALL ON *.* TO 'admin'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;
Flush the privileges to ensure that they are saved and available in the current session:
- FLUSH PRIVILEGES;
Following this, exit the MariaDB shell:
- exit
Now, any time you want to access your database as your new administrative user, you’ll need to authenticate as that user with the password you just set using the following command:
- mariadb -u admin -p
At this point, your database system is set up and you can move on to installing PHP, the final component of the LAMP stack.
PHP is the component of your setup that will process code to display dynamic content. It can run scripts, connect to your MariaDB databases to get information, and hand the processed content over to your web server to display.
Once again, leverage the apt
system to install PHP. In addition, include some helper packages this time so that PHP code can run under the Apache server and talk to your MariaDB database:
- sudo apt install php libapache2-mod-php php-mysql
This should install PHP without any problems. We’ll test this in a moment.
In most cases, you will want to modify the way that Apache serves files when a directory is requested. Currently, if a user requests a directory from the server, Apache will first look for a file called index.html
. We want to tell the web server to prefer PHP files over others, so make Apache look for an index.php
file first.
To do this, type this command to open the dir.conf
file in a text editor with root privileges:
- sudo nano /etc/apache2/mods-enabled/dir.conf
It will look like this:
<IfModule mod_dir.c>
DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
</IfModule>
Move the PHP index file (highlighted above) to the first position after the DirectoryIndex
specification, like this:
<IfModule mod_dir.c>
DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>
When you are finished, save and close the file by pressing CTRL+X
. Confirm the save by typing Y
and then hit ENTER
to verify the file save location.
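If you prefer to make the change non-interactively, the same reordering can be done with sed. The sketch below operates on a copy of the stanza's DirectoryIndex line so you can see the effect; to edit the real file, point sed -i at /etc/apache2/mods-enabled/dir.conf after backing it up:

```shell
# The stock DirectoryIndex line from dir.conf:
line='DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm'

# Move index.php to the front of the list: delete it from its current
# position, then reinsert it right after the DirectoryIndex keyword.
echo "$line" | sed -e 's/ index\.php//' -e 's/DirectoryIndex/DirectoryIndex index.php/'
```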
After this, restart the Apache web server in order for your changes to be recognized. Do so by typing:
- sudo systemctl restart apache2
You can also check on the status of the apache2
service using systemctl
:
- sudo systemctl status apache2
Sample Output● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-09-04 18:23:03 UTC; 9s ago
Process: 22209 ExecStop=/usr/sbin/apachectl stop (code=exited, status=0/SUCCESS)
Process: 22216 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 22221 (apache2)
Tasks: 6 (limit: 4915)
CGroup: /system.slice/apache2.service
├─22221 /usr/sbin/apache2 -k start
├─22222 /usr/sbin/apache2 -k start
├─22223 /usr/sbin/apache2 -k start
├─22224 /usr/sbin/apache2 -k start
├─22225 /usr/sbin/apache2 -k start
└─22226 /usr/sbin/apache2 -k start
To enhance the functionality of PHP, you have the option to install some additional modules. To see the available options for PHP modules and libraries, pipe the results of apt search
into less
, a pager which lets you scroll through the output of other commands:
- apt search php- | less
Use the arrow keys to scroll up and down, and press Q
to quit.
The output lists all of the optional components that you can install, with a short description for each:
OutputSorting...
Full Text Search...
bandwidthd-pgsql/stable 2.0.1+cvs20090917-10 amd64
Tracks usage of TCP/IP and builds html files with graphs
bluefish/stable 2.2.9-1+b1 amd64
advanced Gtk+ text editor for web and software development
cacti/stable 0.8.8h+ds1-10 all
web interface for graphing of monitoring systems
cakephp-scripts/stable 2.8.5-1 all
rapid application development framework for PHP (scripts)
ganglia-webfrontend/stable 3.6.1-3 all
cluster monitoring toolkit - web front-end
haserl/stable 0.9.35-2+b1 amd64
CGI scripting program for embedded environments
kdevelop-php-docs/stable 5.0.3-1 all
transitional package for kdevelop-php
kdevelop-php-docs-l10n/stable 5.0.3-1 all
transitional package for kdevelop-php-l10n
…
To learn more about what each module does, you could search the internet for more information about them. Alternatively, look at the long description of the package by typing:
- apt show package_name
There will be a lot of output, with one field called Description
which will have a longer explanation of the functionality that the module provides.
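If you ever want to pull that field out programmatically, a sed one-liner works. This is a hypothetical sketch run against stand-in text, since real apt show output wraps the description over several lines:

```shell
# Stand-in for a fragment of `apt show` output.
sample='Package: php-cli
Version: 1:7.0+49
Description: command-line interpreter for the PHP scripting language'

# Print only the first line of the Description field.
echo "$sample" | sed -n 's/^Description: //p'
```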
For example, to find out what the php-cli
module does, you could type this:
- apt show php-cli
Along with a large amount of other information, you’ll find something that looks like this:
Output…
Description: command-line interpreter for the PHP scripting language (default)
This package provides the /usr/bin/php command interpreter, useful for
testing PHP scripts from a shell or performing general shell scripting tasks.
.
PHP (recursive acronym for PHP: Hypertext Preprocessor) is a widely-used
open source general-purpose scripting language that is especially suited
for web development and can be embedded into HTML.
.
This package is a dependency package, which depends on Debian's default
PHP version (currently 7.0).
…
If, after researching, you decide you would like to install a package, you can do so by using the apt install
command like you have been doing for the other software.
If you decided that php-cli
is something that you need, you could type:
- sudo apt install php-cli
If you want to install more than one module, you can do that by listing each one, separated by a space, following the apt install
command, like this:
- sudo apt install package1 package2 ...
At this point, your LAMP stack is installed and configured. Before making any more changes or deploying an application, though, it would be helpful to proactively test out your PHP configuration in case there are any issues that should be addressed.
In order to test that your system is configured properly for PHP, create a very basic PHP script called info.php
. In order for Apache to find this file and serve it correctly, it must be saved to a very specific directory called the web root.
In Debian 9, this directory is located at /var/www/html/
. Create the file at that location by running:
- sudo nano /var/www/html/info.php
This will open a blank file. Add the following text, which is valid PHP code, inside the file:
<?php
phpinfo();
?>
When you are finished, save and close the file.
Now you can test whether your web server is able to correctly display content generated by this PHP script. To try this out, visit this page in your web browser. You’ll need your server’s public IP address again.
The address you will want to visit is:
http://your_server_ip/info.php
The page that you come to should look something like this:
This page provides some basic information about your server from the perspective of PHP. It is useful for debugging and to ensure that your settings are being applied correctly.
If you can see this page in your browser, then your PHP is working as expected.
You probably want to remove this file after this test because it could actually give information about your server to unauthorized users. To do this, run the following command:
- sudo rm /var/www/html/info.php
You can always recreate this page if you need to access the information again later.
Now that you have a LAMP stack installed, you have many choices for what to do next. Basically, you’ve installed a platform that will allow you to install most kinds of websites and web software on your server.
Node.js is a JavaScript platform for general-purpose programming that allows users to build network applications quickly. By leveraging JavaScript on both the front end and back end, Node.js makes development more consistent and integrated.
In this guide, we’ll show you how to get started with Node.js on a Debian 9 server.
This guide assumes that you are using Debian 9. Before you begin, you should have a non-root user account with sudo privileges set up on your system. You can learn how to set this up by following the initial server setup for Debian 9.
Debian contains a version of Node.js in its default repositories. At the time of writing, this version is 4.8.2, which will reach end-of-life at the end of April 2018. If you would like to experiment with the language using a stable and sufficient option, then installing from the repositories may make sense. It is recommended, however, that for development and production use cases you install a more recent version with a PPA. We will discuss how to install from a PPA in the next step.
To get the distro-stable version of Node.js, you can use the apt
package manager. First, refresh your local package index:
- sudo apt update
Then install the Node.js package from the repositories:
- sudo apt install nodejs
If the package in the repositories suits your needs, then this is all you need to do to get set up with Node.js.
To check which version of Node.js you have installed after these initial steps, type:
- nodejs -v
Because of a conflict with another package, the executable from the Debian repositories is called nodejs
instead of node
. Keep this in mind as you are running software.
Once you have established which version of Node.js you have installed from the Debian repositories, you can decide whether or not you would like to work with different versions, package archives, or version managers. Next, we’ll discuss these elements, along with more flexible and robust methods of installation.
To work with a more recent version of Node.js, you can add the PPA (personal package archive) maintained by NodeSource. This will have more up-to-date versions of Node.js than the official Debian repositories, and will allow you to choose between Node.js v4.x (the older long-term support version, which will be supported until the end of April 2018), Node.js v6.x (supported until April of 2019), Node.js v8.x (the current LTS version, supported until December of 2019), and Node.js v10.x (the latest version, supported until April of 2021).
Let’s first update the local package index and install curl
, which you will use to access the PPA:
- sudo apt update
- sudo apt install curl
Next, let’s install the PPA in order to get access to its contents. From your home directory, use curl
to retrieve the installation script for your preferred version, making sure to replace 10.x
with your preferred version string (if different):
- cd ~
- curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
You can inspect the contents of this script with nano
or your preferred text editor:
- nano nodesource_setup.sh
Run the script under sudo
:
- sudo bash nodesource_setup.sh
The PPA will be added to your configuration and your local package cache will be updated automatically. After running the setup script, you can install the Node.js package in the same way you did above:
- sudo apt install nodejs
To check which version of Node.js you have installed after these initial steps, type:
- nodejs -v
Outputv10.9.0
The nodejs
package contains the nodejs
binary as well as npm
, so you don’t need to install npm
separately.
npm
uses a configuration file in your home directory to keep track of updates. It will be created the first time you run npm
. Execute this command to verify that npm
is installed and to create the configuration file:
- npm -v
Output6.2.0
In order for some npm
packages to work (those that require compiling code from source, for example), you will need to install the build-essential
package:
- sudo apt install build-essential
You now have the necessary tools to work with npm
packages that require compiling code from source.
An alternative to installing Node.js through apt
is to use a tool called nvm
, which stands for “Node.js Version Manager”. Rather than working at the operating system level, nvm
works at the level of an independent directory within your home directory. This means that you can install multiple self-contained versions of Node.js without affecting the entire system.
Controlling your environment with nvm
allows you to access the newest versions of Node.js and retain and manage previous releases. It is a different utility from apt
, however, and the versions of Node.js that you manage with it are distinct from those you manage with apt.
To download the nvm
installation script from the project’s GitHub page, you can use curl
. Note that the version number may differ from what is highlighted here:
- curl -sL https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh -o install_nvm.sh
Inspect the installation script with nano
:
- nano install_nvm.sh
Run the script with bash
:
- bash install_nvm.sh
It will install the software into a subdirectory of your home directory at ~/.nvm
. It will also add the necessary lines to your ~/.profile
file so that nvm is loaded automatically in new login sessions.
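The lines the installer appends to ~/.profile typically look like the following; the exact content may vary between nvm releases, so treat this as a sketch rather than something to type in by hand:

```shell
export NVM_DIR="$HOME/.nvm"
# Load nvm into the current shell if the script is present.
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
```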
To gain access to the nvm
functionality, you’ll need to either log out and log back in again or source the ~/.profile
file so that your current session knows about the changes:
- source ~/.profile
With nvm
installed, you can install isolated Node.js versions. For information about the versions of Node.js that are available, type:
- nvm ls-remote
Output...
v8.11.1 (Latest LTS: Carbon)
v9.0.0
v9.1.0
v9.2.0
v9.2.1
v9.3.0
v9.4.0
v9.5.0
v9.6.0
v9.6.1
v9.7.0
v9.7.1
v9.8.0
v9.9.0
v9.10.0
v9.10.1
v9.11.0
v9.11.1
v10.0.0
v10.1.0
v10.2.0
v10.2.1
v10.3.0
v10.4.0
v10.4.1
v10.5.0
v10.6.0
v10.7.0
v10.8.0
v10.9.0
As you can see, the current LTS version at the time of this writing is v8.11.1. You can install that by typing:
- nvm install 8.11.1
Usually, nvm
will switch to use the most recently installed version. You can tell nvm
to use the version you just downloaded by typing:
- nvm use 8.11.1
When you install Node.js using nvm
, the executable is called node
. You can see the version currently being used by the shell by typing:
- node -v
Outputv8.11.1
If you have multiple Node.js versions, you can see what is installed by typing:
- nvm ls
If you wish to set one of the versions as the default, type:
- nvm alias default 8.11.1
This version will be automatically selected when a new session spawns. You can also reference it by the alias like this:
- nvm use default
Each version of Node.js will keep track of its own packages and has npm
available to manage these.
You can also have npm
install packages to the Node.js project’s ./node_modules
directory. Use the following syntax to install the express
module:
- npm install express
If you’d like to install the module globally, making it available to other projects using the same version of Node.js, you can add the -g
flag:
- npm install -g express
This will install the package in:
~/.nvm/versions/node/node_version/lib/node_modules/express
Installing the module globally will let you run its commands from the command line, but you’ll have to link the package into your project to require it from within a program:
- npm link express
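Under the hood, npm link works by dropping a symlink into the project’s node_modules directory that points at the global install. A minimal sketch of the same idea with plain ln -s, using throwaway paths rather than a real npm prefix:

```shell
# Create stand-ins for a global package directory and a project.
tmp="$(mktemp -d)"
mkdir -p "$tmp/global/express" "$tmp/project/node_modules"

# Symlink the "global" package into the project, as npm link would.
ln -s "$tmp/global/express" "$tmp/project/node_modules/express"

# The project now resolves the package through the symlink.
ls -l "$tmp/project/node_modules"
```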
You can learn more about the options available to you with nvm by typing:
- nvm help
You can uninstall Node.js using apt
or nvm
, depending on the version you want to target. To remove versions installed from the repositories or from the PPA, you will need to work with the apt
utility at the system level.
To remove either of these versions, type the following:
- sudo apt remove nodejs
This command will remove the package but retain its configuration files. To remove both, run sudo apt purge nodejs instead.
To uninstall a version of Node.js that you have enabled using nvm
, first determine whether or not the version you would like to remove is the current active version:
- nvm current
If the version you are targeting is not the current active version, you can run:
- nvm uninstall node_version
This command will uninstall the selected version of Node.js.
If the version you would like to remove is the current active version, you must first deactivate nvm
to enable your changes:
- nvm deactivate
You can now uninstall the current version using the uninstall
command above, which will remove all files associated with the targeted version of Node.js except the cached files that can be used for reinstallation.
There are quite a few ways to get up and running with Node.js on your Debian 9 server. Your circumstances will dictate which of the above methods is best for your needs. While using the packaged version in the Debian repository is an option for experimentation, installing from a PPA and working with npm
or nvm
offers additional flexibility.
When you first create a new Debian 9 server, there are a few configuration steps that you should take early on as part of the basic setup. This will increase the security and usability of your server and will give you a solid foundation for subsequent actions.
To log into your server, you will need to know your server’s public IP address. You will also need the password or, if you installed an SSH key for authentication, the private key for the root user’s account. If you have not already logged into your server, you may want to follow our guide on how to connect to your Droplet with SSH, which covers this process in detail.
If you are not already connected to your server, go ahead and log in as the root user using the following command (substitute the highlighted portion of the command with your server’s public IP address):
- ssh root@your_server_ip
Accept the warning about host authenticity if it appears. If you are using password authentication, provide your root password to log in. If you are using an SSH key that is passphrase protected, you may be prompted to enter the passphrase the first time you use the key each session. If this is your first time logging into the server with a password, you may also be prompted to change the root password.
The root user is the administrative user in a Linux environment that has very broad privileges. Because of the heightened privileges of the root account, you are discouraged from using it on a regular basis. This is because part of the power inherent with the root account is the ability to make very destructive changes, even by accident.
The next step is to set up an alternative user account with a reduced scope of influence for day-to-day work. We’ll teach you how to gain increased privileges during the times when you need them.
Once you are logged in as root, we’re prepared to add the new user account that we will use to log in from now on.
Note: In some environments, a package called unscd
may be installed by default in order to speed up requests to name servers like LDAP. The most recent version currently available in Debian contains a bug that causes certain commands (like the adduser
command below) to produce additional output that looks like this:
sent invalidate(passwd) request, exiting
sent invalidate(group) request, exiting
These messages are harmless, but if you wish to avoid them, it is safe to remove the unscd
package if you do not plan on using systems like LDAP for user information:
- apt remove unscd
This example creates a new user called sammy, but you should replace it with a username that you like:
- adduser sammy
You will be asked a few questions, starting with the account password.
Enter a strong password and, optionally, fill in any of the additional information if you would like. This is not required and you can just hit ENTER
in any field you wish to skip.
Now, we have a new user account with regular account privileges. However, we may sometimes need to do administrative tasks.
To avoid having to log out of our normal user and log back in as the root account, we can set up what is known as “superuser” or root privileges for our normal account. This will allow our normal user to run commands with administrative privileges by putting the word sudo
before each command.
To add these privileges to our new user, we need to add the new user to the sudo group. By default, on Debian 9, users who belong to the sudo group are allowed to use the sudo
command.
As root, run this command to add your new user to the sudo group (substitute the highlighted word with your new user):
- usermod -aG sudo sammy
Now, when logged in as your regular user, you can type sudo
before commands to perform actions with superuser privileges.
Debian servers can use firewalls to make sure only connections to certain services are allowed. Although the iptables
firewall is installed by default, Debian does not strongly recommend any specific firewall. In this guide, we will install and use the UFW firewall to help set policies and manage exceptions.
We can use the apt
package manager to install UFW. Update the local index to retrieve the latest information about available packages and then install the firewall by typing:
- apt update
- apt install ufw
Note: If your servers are running on DigitalOcean, you can optionally use DigitalOcean Cloud Firewalls instead of the UFW firewall. We recommend using only one firewall at a time to avoid conflicting rules that may be difficult to debug.
Firewall profiles allow UFW to manage sets of firewall rules for applications by name. Profiles for some common software are bundled with UFW by default and packages can register additional profiles with UFW during the installation process. OpenSSH, the service allowing us to connect to our server now, has a firewall profile that we can use.
You can see this by typing:
- ufw app list
OutputAvailable applications:
. . .
OpenSSH
. . .
We need to make sure that the firewall allows SSH connections so that we can log back in next time. We can allow these connections by typing:
- ufw allow OpenSSH
Afterwards, we can enable the firewall by typing:
- ufw enable
Type “y
” and press ENTER
to proceed. You can see that SSH connections are still allowed by typing:
- ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
As the firewall is currently blocking all connections except for SSH, if you install and configure additional services, you will need to adjust the firewall settings to allow acceptable traffic in. You can learn some common UFW operations in this guide.
Now that we have a regular user for daily use, we need to make sure we can SSH into the account directly.
Note: Until verifying that you can log in and use sudo
with your new user, we recommend staying logged in as root. This way, if you have problems, you can troubleshoot and make any necessary changes as root. If you are using a DigitalOcean Droplet and experience problems with your root SSH connection, you can log into the Droplet using the DigitalOcean Console.
The process for configuring SSH access for your new user depends on whether your server’s root account uses a password or SSH keys for authentication.
If you logged in to your root account using a password, then password authentication is enabled for SSH. You can SSH to your new user account by opening up a new terminal session and using SSH with your new username:
- ssh sammy@your_server_ip
After entering your regular user’s password, you will be logged in. Remember, if you need to run a command with administrative privileges, type sudo
before it like this:
- sudo command_to_run
You will be prompted for your regular user password when using sudo
for the first time each session (and periodically afterwards).
To enhance your server’s security, we strongly recommend setting up SSH keys instead of using password authentication. Follow our guide on setting up SSH keys on Debian 9 to learn how to configure key-based authentication.
If you logged in to your root account using SSH keys, then password authentication is disabled for SSH. You will need to add a copy of your local public key to the new user’s ~/.ssh/authorized_keys
file to log in successfully.
Since your public key is already in the root account’s ~/.ssh/authorized_keys
file on the server, we can copy that file and directory structure to our new user account in our existing session with the cp
command. Afterwards, we can adjust ownership of the files using the chown
command.
Make sure to change the highlighted portions of the command below to match your regular user’s name:
- cp -r ~/.ssh /home/sammy
- chown -R sammy:sammy /home/sammy/.ssh
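You can confirm that the files were copied and now belong to the new user before proceeding (sammy is the example username used throughout this guide):

```shell
# List the copied key files; the owner and group columns should show the new user
ls -la /home/sammy/.ssh
```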
Now, open up a new terminal session and use SSH with your new username:
- ssh sammy@your_server_ip
You should be logged in to the new user account without using a password. Remember, if you need to run a command with administrative privileges, type sudo
before it like this:
- sudo command_to_run
You will be prompted for your regular user password when using sudo
for the first time each session (and periodically afterwards).
Now that we have a strong baseline configuration, we can consider a few optional steps to make the system more accessible. The following sections cover a few additional tweaks focused on usability.
Debian provides extensive manuals for most software in the form of man
pages. However, the man
command is not always included by default on minimal installations.
Install the man-db
package to install the man
command and the manual databases:
- sudo apt install man-db
Now, to view the manual for a component, you can type:
- man command
For example, to view the manual for the top
command, type:
- man top
Most packages in the Debian repositories include manual pages as part of their installation.
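If you do not know the exact name of a command, you can search the manual page names and descriptions by keyword with apropos, which man-db also provides (the keyword here is just an example):

```shell
# Search manual page names and short descriptions for a keyword
apropos partition

# The same search, via the man command
man -k partition
```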
Debian offers a wide variety of text editors, some of which are included in the base system. Commands with integrated editor support, like visudo
and systemctl edit
, pass text to the editor
command, which is mapped to the system default editor. Setting the default editor according to your preferences can help you configure your system more easily and avoid frustration.
If your preferred editor is not installed by default, use apt
to install it first:
- sudo apt install your_preferred_editor
Next, you can view the current default and modify the selection using the update-alternatives
command:
- sudo update-alternatives --config editor
The command displays a table of the editors it knows about with a prompt to change the default:
OutputThere are 8 choices for the alternative editor (providing /usr/bin/editor).
Selection Path Priority Status
------------------------------------------------------------
* 0 /usr/bin/joe 70 auto mode
1 /bin/nano 40 manual mode
2 /usr/bin/jmacs 50 manual mode
3 /usr/bin/joe 70 manual mode
4 /usr/bin/jpico 50 manual mode
5 /usr/bin/jstar 50 manual mode
6 /usr/bin/rjoe 25 manual mode
7 /usr/bin/vim.basic 30 manual mode
8 /usr/bin/vim.tiny 15 manual mode
Press <enter> to keep the current choice[*], or type selection number:
The asterisk in the far left column indicates the current selection. To change the default, type the “Selection” number for your preferred editor and press ENTER. For example, to use nano as the default editor given the above table, we would choose 1:
OutputPress <enter> to keep the current choice[*], or type selection number: 1
update-alternatives: using /bin/nano to provide /usr/bin/editor (editor) in manual mode
From now on, your preferred editor will be used by commands like visudo
and systemctl edit
, or when the editor
command is called.
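If you prefer to set the default without the interactive prompt (for example, in a provisioning script), update-alternatives also accepts the editor path directly. The path below assumes nano is installed at /bin/nano, as in the table above:

```shell
# Set nano as the system default editor non-interactively
sudo update-alternatives --set editor /bin/nano
```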
At this point, you have a solid foundation for your server. You can install any of the software you need on your server now.
SSH, or secure shell, is an encrypted protocol used to administer and communicate with servers. When working with a Debian server, chances are you will spend most of your time in a terminal session connected to your server through SSH.
In this guide, we’ll focus on setting up SSH keys for a vanilla Debian 9 installation. SSH keys provide an easy, secure way of logging into your server and are recommended for all users.
The first step is to create a key pair on the client machine (usually your computer):
- ssh-keygen
By default, ssh-keygen will create a 2048-bit RSA key pair, which is secure enough for most use cases (you may optionally pass the -b 4096 flag to create a larger 4096-bit key).
After entering the command, you should see the following output:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/your_home/.ssh/id_rsa):
Press ENTER to save the key pair into the .ssh/ subdirectory in your home directory, or specify an alternate path.
If you had previously generated an SSH key pair, you may see the following prompt:
Output/home/your_home/.ssh/id_rsa already exists.
Overwrite (y/n)?
If you choose to overwrite the key on disk, you will not be able to authenticate using the previous key anymore. Be very careful when selecting yes, as this is a destructive process that cannot be reversed.
You should then see the following prompt:
OutputEnter passphrase (empty for no passphrase):
Here you optionally may enter a secure passphrase, which is highly recommended. A passphrase adds an additional layer of security to prevent unauthorized users from logging in. To learn more about security, consult our tutorial on How To Configure SSH Key-Based Authentication on a Linux Server.
You should then see the following output:
OutputYour identification has been saved in /your_home/.ssh/id_rsa.
Your public key has been saved in /your_home/.ssh/id_rsa.pub.
The key fingerprint is:
a9:49:2e:2a:5e:33:3e:a9:de:4e:77:11:58:b6:90:26 username@remote_host
The key's randomart image is:
+--[ RSA 2048]----+
| ..o |
| E o= . |
| o. o |
| .. |
| ..S |
| o o. |
| =o.+. |
|. =++.. |
|o=++. |
+-----------------+
You now have a public and private key that you can use to authenticate. The next step is to place the public key on your server so that you can use SSH-key-based authentication to log in.
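If you are provisioning machines with a script, ssh-keygen can also run non-interactively. This is a minimal sketch; the file name id_rsa_demo and the empty passphrase (-N "") are illustrative choices for automation, not recommendations:

```shell
# Create the .ssh directory if it does not exist yet
mkdir -p "$HOME/.ssh"

# Generate a 4096-bit RSA key pair non-interactively:
#   -f sets the output file, -N "" sets an empty passphrase, -q suppresses output
ssh-keygen -t rsa -b 4096 -f "$HOME/.ssh/id_rsa_demo" -N "" -q

# The private key and its matching public key are created side by side
ls "$HOME/.ssh/id_rsa_demo" "$HOME/.ssh/id_rsa_demo.pub"
```

An empty passphrase trades security for convenience; for interactive use, prefer setting a passphrase as described above.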
The quickest way to copy your public key to the Debian host is to use a utility called ssh-copy-id
. Due to its simplicity, this method is highly recommended if available. If you do not have ssh-copy-id
available to you on your client machine, you may use one of the two alternate methods provided in this section (copying via password-based SSH, or manually copying the key).
ssh-copy-id
The ssh-copy-id
tool is included by default in many operating systems, so you may have it available on your local system. For this method to work, you must already have password-based SSH access to your server.
To use the utility, you simply need to specify the remote host that you would like to connect to and the user account that you have password SSH access to. This is the account to which your public SSH key will be copied.
The syntax is:
- ssh-copy-id username@remote_host
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER
to continue.
Next, the utility will scan your local account for the id_rsa.pub
key that we created earlier. When it finds the key, it will prompt you for the password of the remote user’s account:
Output/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@203.0.113.1's password:
Type in the password (your typing will not be displayed for security purposes) and press ENTER
. The utility will connect to the account on the remote host using the password you provided. It will then copy the contents of your ~/.ssh/id_rsa.pub
key into a file in the remote account’s home ~/.ssh
directory called authorized_keys
.
You should see the following output:
OutputNumber of key(s) added: 1
Now try logging into the machine, with: "ssh 'username@203.0.113.1'"
and check to make sure that only the key(s) you wanted were added.
At this point, your id_rsa.pub
key has been uploaded to the remote account. You can continue on to Step 3.
If you do not have ssh-copy-id
available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.
We can do this by using the cat
command to read the contents of the public SSH key on our local computer and piping that through an SSH connection to the remote server.
On the other side, we can make sure that the ~/.ssh
directory exists and has the correct permissions under the account we’re using.
We can then output the content we piped over into a file called authorized_keys
within this directory. We’ll use the >>
redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying previously added keys.
The full command looks like this:
- cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
You may see the following message:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. This will happen the first time you connect to a new host. Type “yes” and press ENTER
to continue.
Afterwards, you should be prompted to enter the remote user account password:
Outputusername@203.0.113.1's password:
After entering your password, the content of your id_rsa.pub
key will be copied to the end of the authorized_keys
file of the remote user’s account. Continue on to Step 3 if this was successful.
If you do not have password-based SSH access to your server available, you will have to complete the above process manually.
We will manually append the content of your id_rsa.pub
file to the ~/.ssh/authorized_keys
file on your remote machine.
To display the content of your id_rsa.pub
key, type this into your local computer:
- cat ~/.ssh/id_rsa.pub
You will see the key’s content, which should look something like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCqql6MzstZYh1TmWWv11q5O3pISj2ZFl9HgH1JLknLLx44+tXfJ7mIrKNxOOwxIxvcBF8PXSYvobFYEZjGIVCEAjrUzLiIxbyCoxVyle7Q+bqgZ8SeeM8wzytsY+dVGcBxF6N4JS+zVk5eMcV385gG3Y6ON3EG112n6d+SMXY0OEBIcO6x+PnUSGHrSgpBgX7Ks1r7xqFa7heJLLt2wWwkARptX7udSq05paBhcpB0pHtA1Rfz3K2B+ZVIpSDfki9UVKzT8JUmwW6NNzSgxUfQHGwnW7kj4jp4AT0VZk3ADw497M2G/12N0PPB5CnhHf7ovgy6nL1ikrygTKRFmNZISvAcywB9GVqNAVE+ZHDSCuURNsAInVzgYo9xgJDW8wUw2o8U77+xiFxgI5QSZX3Iq7YLMgeksaO4rBJEa54k8m5wEiEE1nUhLuJ0X/vh2xPff6SQ1BL/zkOhvJCACK6Vb15mDOeCSq54Cr7kvS46itMosi/uS66+PujOO+xt/2FWYepz6ZlN70bRly57Q06J+ZJoc9FfBCbCyYH7U/ASsmY095ywPsBo1XQ9PqhnN1/YOorJ068foQDNVpm146mUpILVxmq41Cj55YKHEazXGsdBIbXWhcrRf4G2fJLRcGUr9q8/lERo9oxRm5JFX6TCmj6kmiFqv+Ow9gI0x8GvaQ== demo@test
Access your remote host using whichever method you have available.
Once you have access to your account on the remote server, you should make sure the ~/.ssh
directory exists. This command will create the directory if necessary, or do nothing if it already exists:
- mkdir -p ~/.ssh
Now, you can create or modify the authorized_keys
file within this directory. You can add the contents of your id_rsa.pub
file to the end of the authorized_keys
file, creating it if necessary, using this command:
- echo public_key_string >> ~/.ssh/authorized_keys
In the above command, substitute the public_key_string
with the output from the cat ~/.ssh/id_rsa.pub
command that you executed on your local system. It should start with ssh-rsa AAAA...
.
Finally, we’ll ensure that the ~/.ssh
directory and authorized_keys
file have the appropriate permissions set:
- chmod -R go= ~/.ssh
This recursively removes all “group” and “other” permissions for the ~/.ssh/
directory.
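Taken together, the manual steps above amount to a short sequence you can run on the server. As before, public_key_string stands in for the actual contents of your id_rsa.pub file:

```shell
# Create ~/.ssh if needed, append the public key, and lock down permissions
mkdir -p ~/.ssh
echo "public_key_string" >> ~/.ssh/authorized_keys
chmod -R go= ~/.ssh
```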
If you’re using the root
account to set up keys for a user account, it’s also important that the ~/.ssh
directory belongs to the user and not to root
:
- chown -R sammy:sammy ~/.ssh
In this tutorial, our user is named sammy, but you should substitute the appropriate username into the above command.
We can now attempt passwordless authentication with our Debian server.
If you have successfully completed one of the procedures above, you should be able to log into the remote host without the remote account’s password.
The basic process is the same:
- ssh username@remote_host
If this is your first time connecting to this host (if you used the last method above), you may see something like this:
OutputThe authenticity of host '203.0.113.1 (203.0.113.1)' can't be established.
ECDSA key fingerprint is fd:fd:d4:f9:77:fe:73:84:e1:55:00:ad:d6:6d:22:fe.
Are you sure you want to continue connecting (yes/no)? yes
This means that your local computer does not recognize the remote host. Type “yes” and then press ENTER
to continue.
If you did not supply a passphrase for your private key, you will be logged in immediately. If you supplied a passphrase for the private key when you created the key, you will be prompted to enter it now (note that your keystrokes will not display in the terminal session for security). After authenticating, a new shell session should open for you with the configured account on the Debian server.
If key-based authentication was successful, continue on to learn how to further secure your system by disabling password authentication.
If you were able to log into your account using SSH without a password, you have successfully configured SSH-key-based authentication to your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.
Before completing the steps in this section, make sure that you either have SSH-key-based authentication configured for the root account on this server, or preferably, that you have SSH-key-based authentication configured for a non-root account on this server with sudo
privileges. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is crucial.
Once you’ve confirmed that your remote account has administrative privileges, log into your remote server with SSH keys, either as root or with an account with sudo
privileges. Then, open up the SSH daemon’s configuration file:
- sudo nano /etc/ssh/sshd_config
Inside the file, search for a directive called PasswordAuthentication
. This may be commented out. Uncomment the line and set the value to “no”. This will disable your ability to log in via SSH using account passwords:
...
PasswordAuthentication no
...
Save and close the file when you are finished by pressing CTRL + X, then Y to confirm saving the file, and finally ENTER to exit nano. To actually implement these changes, we need to restart the sshd service:
- sudo systemctl restart ssh
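As an aside, the SSH daemon can validate its configuration file before a restart, which is a useful safeguard whenever you edit /etc/ssh/sshd_config. It prints nothing and exits successfully when the file is valid:

```shell
# Check /etc/ssh/sshd_config for syntax errors; reports an error and a
# non-zero exit status if the configuration is invalid
sudo sshd -t
```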
As a precaution, open up a new terminal window and test that the SSH service is functioning correctly before closing this session:
- ssh username@remote_host
Once you have verified your SSH service, you can safely close all current server sessions.
The SSH daemon on your Debian server now only responds to SSH keys. Password-based authentication has successfully been disabled.
You should now have SSH-key-based authentication configured on your server, allowing you to sign in without providing an account password.
If you’d like to learn more about working with SSH, take a look at our SSH Essentials Guide.