GitLab is an open-source application primarily used to host Git repositories, with additional development-related features like issue tracking. It is designed to be hosted using your own infrastructure, and provides flexibility in deploying as an internal repository store for your development team, a public way to interface with users, or a means for contributors to host their own projects.
The GitLab project enables you to create a GitLab instance on your own hardware with a minimal installation mechanism. This guide will teach you how to install and configure GitLab Community Edition on an Ubuntu server.
If you are using Ubuntu version 16.04 or below, we recommend you upgrade to a more recent version, since Ubuntu no longer provides support for these releases. This collection of guides will help you upgrade your Ubuntu version.
To follow along with this tutorial, you will need a server running Ubuntu, along with a non-root user with sudo privileges and an active firewall. For guidance on how to set these up, please choose your distribution from this list and follow our Initial Server Setup Guide.

The published GitLab hardware requirements recommend using a server with a minimum of:
4 cores for your CPU
4GB of RAM for memory
Although you may be able to get by with substituting some swap space for RAM, it is not recommended. The following examples in this guide will use these minimum resources.
You will also need a domain name pointed at your server. This tutorial uses your_domain as an example, but be sure to replace this with your own domain name.

Before installing GitLab, it is important to install the software that it leverages during installation and on an ongoing basis. The required software can be installed from Ubuntu’s default package repositories.
First, refresh the local package index:
- sudo apt update
Then install the dependencies by entering this command:
- sudo apt install ca-certificates curl openssh-server postfix tzdata perl
You will likely have some of this software installed already. For the postfix installation, select Internet Site when prompted. On the next screen, enter your server’s domain name to configure how the system will send mail.
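If you prefer to skip the interactive prompts entirely (for example, in an automated setup), you can preseed the postfix answers through debconf before installing. This is an optional sketch; your_domain is a placeholder for your server's mail name:

```shell
# Write the debconf answers that postfix would normally ask for
# interactively. "your_domain" is a placeholder.
cat > /tmp/postfix-preseed.txt <<'EOF'
postfix postfix/main_mailer_type select Internet Site
postfix postfix/mailname string your_domain
EOF

# On the server, you would then load the answers and install without
# prompts (requires root, so shown here as comments):
#   sudo debconf-set-selections /tmp/postfix-preseed.txt
#   sudo DEBIAN_FRONTEND=noninteractive apt install -y postfix

# Show the preseed file that was written:
cat /tmp/postfix-preseed.txt
```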
Now that you have the dependencies installed, you’re ready to install GitLab.
With the dependencies in place, you can install GitLab. This process leverages an installation script to configure your system with the GitLab repositories.
First, move into the /tmp directory:
- cd /tmp
Then download the installation script:
- curl -LO https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh
Feel free to examine the downloaded script to ensure that you are comfortable with the actions it will take. You can also find a hosted version of the script on the GitLab installation instructions:
- less /tmp/script.deb.sh
Once you are satisfied with the safety of the script, run the installer:
- sudo bash /tmp/script.deb.sh
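Beyond reading the script, you can record its checksum after your first review and compare it on later downloads to detect unexpected changes. A minimal sketch of the idea (a demo file stands in for /tmp/script.deb.sh here):

```shell
# Record a SHA-256 checksum of a file after reviewing it, then verify
# later copies against the recorded value. The demo file below stands
# in for /tmp/script.deb.sh.
printf 'echo "demo installer"\n' > /tmp/demo-script.sh
sha256sum /tmp/demo-script.sh > /tmp/demo-script.sh.sha256

# Later: prints "OK" and exits zero only if the file is unchanged.
sha256sum --check /tmp/demo-script.sh.sha256
```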
The script sets up your server to use the GitLab-maintained repositories. This lets you manage GitLab with the same package management tools you use for your other system packages. Once this is complete, you can install the actual GitLab application with apt:
- sudo apt install gitlab-ce
This installs the necessary components on your system and may take some time to complete.
Before you configure GitLab, you need to ensure that your firewall rules are permissive enough to allow web traffic. If you followed the guide linked in the prerequisites, you will already have a ufw firewall enabled.
View the current status of your active firewall by running:
- sudo ufw status
Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
The current rules allow SSH traffic through, but access to other services is restricted. Since GitLab is a web application, you need to allow HTTP access. Because you will be taking advantage of GitLab’s ability to request and enable a free TLS/SSL certificate from Let’s Encrypt, also allow HTTPS access.
The protocol-to-port mappings for HTTP and HTTPS are available in the /etc/services file, so you can allow that traffic in by name. If you didn’t already have OpenSSH traffic enabled, you should also allow that traffic:
- sudo ufw allow http
- sudo ufw allow https
- sudo ufw allow OpenSSH
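The name-to-port resolution mentioned above can be inspected directly, which is useful if you ever wonder which ports a named ufw rule actually opens:

```shell
# ufw accepts service names like "http" and "https" because they map
# to port numbers in /etc/services. Print the relevant entries:
grep -wE '^(http|https)' /etc/services
```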
You can check the ufw status again to ensure that you granted access to at least these two services:
- sudo ufw status
Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)
This output indicates that the GitLab web interface is now accessible once you configure the application.
Before you can use the application, update the configuration file and run a reconfiguration command. First, open GitLab’s configuration file with your preferred text editor. This example uses nano:
- sudo nano /etc/gitlab/gitlab.rb
Search for the external_url configuration line. Update it to match your domain, and make sure to change http to https to automatically redirect users to the site protected by the Let’s Encrypt certificate:
...
## GitLab URL
##! URL on which GitLab will be reachable.
##! For more details on configuring external_url see:
##! https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the-external-url-for-gitlab
##!
##! Note: During installation/upgrades, the value of the environment variable
##! EXTERNAL_URL will be used to populate/replace this value.
##! On AWS EC2 instances, we also attempt to fetch the public hostname/IP
##! address from AWS. For more details, see:
##! https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
external_url 'https://your_domain'
...
Next, find the letsencrypt['contact_emails'] setting. If you’re using nano, you can open a search prompt by pressing CTRL+W. Type letsencrypt['contact_emails'] into the prompt, then press ENTER. This setting defines a list of email addresses that the Let’s Encrypt project can use to contact you if there are problems with your domain. It’s recommended to uncomment and fill this out so that you are informed of any issues that occur:
letsencrypt['contact_emails'] = ['sammy@example.com']
Once you’re done making changes, save and close the file. If you’re using nano, you can do this by pressing CTRL+X, then Y, then ENTER.
Run the following command to reconfigure GitLab:
- sudo gitlab-ctl reconfigure
This will initialize GitLab using the information it can find about your server. This is a completely automated process, so you will not have to answer any prompts. The process will also configure a Let’s Encrypt certificate for your domain.
With GitLab running, you can perform an initial configuration of the application through the web interface.
Visit the domain name of your GitLab server in your web browser:
https://your_domain
On your first visit, you’ll be greeted with a login page:
GitLab generates an initial secure password for you. It is stored in a file that you can access as an administrative sudo user:
- sudo nano /etc/gitlab/initial_root_password
# WARNING: This value is valid only in the following conditions
# 1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the firs$
# 2. Password hasn't been changed manually, either via UI or via command line.
#
# If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
Password: YOUR_PASSWORD
# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.
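If you only want the password value rather than the whole file, a small awk filter works. This sketch recreates the file layout locally for demonstration; on the server, you would point the same awk command at /etc/gitlab/initial_root_password (with sudo):

```shell
# Recreate the file layout locally, purely for demonstration.
cat > /tmp/initial_root_password <<'EOF'
# WARNING: This value is valid only under certain conditions.
Password: s3cretExampleValue
# NOTE: This file will be automatically deleted after 24 hours.
EOF

# Print just the password value (second field of the "Password:" line).
awk '/^Password:/ {print $2}' /tmp/initial_root_password
```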
Back on the login page, enter the following:
Username: root
Password: [the password listed in /etc/gitlab/initial_root_password]
Enter these values into the fields and click the Sign in button. You will be signed in to the application and taken to a landing page that prompts you to begin adding projects:
You can now fine tune your GitLab instance.
One of the first things you should do after logging in is change your password. To make this change, click on the icon in the upper-right corner of the navigation bar and select Edit Profile:
You’ll then be taken to a User Settings page. On the left navigation bar, select Password to change your GitLab-generated password to a secure one, then click on the Save password button when you’re finished with your updates:
You’ll be taken back to the login screen with a notification that your password has been changed. Enter your new password to log back into your GitLab instance:
GitLab selects some reasonable defaults, but these are not usually appropriate once you start using the software.
To make the necessary modifications, click on the user icon in the upper-right corner of the navigation bar and select Edit Profile.
You can adjust the Name and Email address from “Administrator” and “admin@example.com” to something more accurate. The name you select will be displayed to other users, while the email will be used for default avatar detection, notifications, Git actions through the interface, and more:
Click on the Update Profile settings button at the bottom when you are finished with your updates. You’ll be prompted to enter your password to confirm changes.
A confirmation email will be sent to the address you provided. Follow the instructions in the email to confirm your account so that you can begin using it with GitLab.
Next, select Account in the left navigation bar:
Here, you can enable two-factor authentication and change your username. By default, the first administrative account is given the name root. Since this is a known account name, it is more secure to change this to a different name. You will still have administrative privileges; the only thing that will change is the name. Replace root with your preferred username:
Click on the Update username button to make the change. You’ll be prompted to confirm the change thereafter.
Next time you log into GitLab, remember to use your new username.
You can use SSH keys with Git to interact with your GitLab projects. To do this, you need to add your SSH public key to your GitLab account.
In the left navigation bar, select SSH Keys:
If you already have an SSH key pair created on your local computer, you can view the public key by typing:
- cat ~/.ssh/id_rsa.pub
Output
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy this text and enter it into the Key text box inside your GitLab instance.
If, instead, you get a different message, you do not yet have an SSH key pair configured on your machine:
Output
cat: /home/sammy/.ssh/id_rsa.pub: No such file or directory
If this is the case, you can create an SSH key pair by entering the following command:
- ssh-keygen
Accept the defaults and optionally provide a password to secure the key locally:
Output
Generating public/private rsa key pair.
Enter file in which to save the key (/home/sammy/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sammy/.ssh/id_rsa.
Your public key has been saved in /home/sammy/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I8v5/M5xOicZRZq/XRcSBNxTQV2BZszjlWaIHi5chc0 sammy@gitlab.docsthat.work
The key's randomart image is:
+---[RSA 2048]----+
| ..%o==B|
| *.E =.|
| . ++= B |
| ooo.o . |
| . S .o . .|
| . + .. . o|
| + .o.o ..|
| o .++o . |
| oo=+ |
+----[SHA256]-----+
Once you have this, you can display your public key as in the previous example by entering this command:
- cat ~/.ssh/id_rsa.pub
Output
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy this block of text from the output and enter it into the Key text box inside your GitLab instance. Give it a descriptive title, and click the Add key button.
Now you’re able to manage your GitLab projects and repositories from your local machine without having to provide your GitLab account credentials.
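As an aside, newer OpenSSH versions also support ed25519 keys, which are shorter than RSA keys and generally recommended today; GitLab accepts them as well. A noninteractive sketch (the file path, comment, and empty passphrase below are illustrative choices, not requirements):

```shell
# Remove any leftover demo key so ssh-keygen doesn't prompt to overwrite.
rm -f /tmp/demo_gitlab_key /tmp/demo_gitlab_key.pub

# Generate an ed25519 key pair noninteractively. -N '' sets an empty
# passphrase (use a real one for long-lived keys); the path and comment
# are illustrative.
ssh-keygen -t ed25519 -N '' -f /tmp/demo_gitlab_key -C 'demo@example.com' -q

# The public half is what you would paste into GitLab's Key text box.
cat /tmp/demo_gitlab_key.pub
```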
With your current setup, it is possible for anyone to sign up for an account when you visit your GitLab instance’s landing page. This may be what you want if you are seeking to host a public project. However, many times, more restrictive settings are desirable.
To begin, navigate to the administrative area by clicking on the hamburger menu in the top navigation bar and selecting Admin from the drop-down:
Select Settings from the left navigation bar:
You will be taken to the global settings for your GitLab instance. Here, you can adjust a number of settings that affect whether new users can sign up and their level of access.
If you wish to disable sign-ups completely, scroll to the Sign-up Restrictions section and press Expand to view the options.
Then deselect the Sign-up enabled check box:
Remember to click on the Save changes button after making your changes.
The sign-up section is now removed from the GitLab landing page.
If you are using GitLab as part of an organization that provides email addresses associated with a domain, you can restrict sign-ups by domain instead of completely disabling them.
In the Sign-up Restrictions section, select the Send confirmation email on sign-up box, which will allow users to log in only after they’ve confirmed their email.
Next, add your domain or domains to the Whitelisted domains for sign-ups box, one domain per line. You can use the asterisk “*” to specify wildcard domains:
When you’re finished, click on the Save changes button.
Sign-ups are now restricted to email addresses from the domains you approved.
By default, new users can create up to 10 projects. If you wish to allow outside users to sign up for visibility and participation, but want to restrict their ability to create new projects, you can do so in the Account and Limit Settings section.
Inside, you can change the Default projects limit to 0 to prevent new users from creating any projects:
New users can still be added to projects manually and have access to internal or public projects created by other users.
After your updates, remember to click on the Save changes button.
New users will now be able to create accounts, but unable to create projects.
By default, GitLab has a scheduled task set up to renew Let’s Encrypt certificates after midnight every fourth day, with the exact minute based on your external_url. You can modify these settings in the /etc/gitlab/gitlab.rb file.
For example, to renew every seventh day at 12:30, you can configure it as follows. First, open the configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Then, find the following lines in the file, remove the leading # characters to uncomment them, and update them as follows:
...
################################################################################
# Let's Encrypt integration
################################################################################
# letsencrypt['enable'] = nil
letsencrypt['contact_emails'] = ['sammy@digitalocean'] # This should be an array of email addresses to add as contacts
# letsencrypt['group'] = 'root'
# letsencrypt['key_size'] = 2048
# letsencrypt['owner'] = 'root'
# letsencrypt['wwwroot'] = '/var/opt/gitlab/nginx/www'
# See http://docs.gitlab.com/omnibus/settings/ssl.html#automatic-renewal for more on these settings
letsencrypt['auto_renew'] = true
letsencrypt['auto_renew_hour'] = "12"
letsencrypt['auto_renew_minute'] = "30"
letsencrypt['auto_renew_day_of_month'] = "*/7"
...
You can also disable auto-renewal by setting letsencrypt['auto_renew'] to false:
...
letsencrypt['auto_renew'] = false
...
With auto-renewals in place, you don’t need to worry about service interruptions.
You now have a working GitLab instance hosted on your own server. You can begin to import or create new projects and configure the appropriate level of access for a team. GitLab is regularly adding features and making updates to their platform, so be sure to check out the project’s home page to stay up-to-date on any improvements or important notices.
Argo CD is a popular open source implementation for doing GitOps continuous delivery on top of Kubernetes. Your applications, definitions, configurations, and environments should be declarative and version controlled. Also, application deployment and lifecycle management should be automated, auditable, and easy to understand. All of this can be done using Argo CD.
Argo CD adheres to the same GitOps patterns and principles, maintaining your cluster state using a declarative approach. Synchronization happens via a Git repository, where your Kubernetes manifests are stored. The manifests can be specified in several ways (for example, as Kustomize applications, Helm charts, Jsonnet files, or plain YAML).
As with every application that runs in a Kubernetes cluster, Argo CD is configured via custom resource definitions (CRDs) stored inside YAML manifests. The most important one is the Application CRD. In an Argo CD application, you define which Git repository should be used to synchronize which Kubernetes cluster. It can be the same Kubernetes cluster where Argo CD is deployed, or an external one.
Argo CD is implemented as a Kubernetes controller which continuously monitors running applications and compares the current (or live) state against the desired target state (as specified in the Git repo). A deployed application whose live state deviates from the target state is considered OutOfSync. Argo CD reports and visualizes the differences, while providing facilities to automatically or manually sync the live state back to the desired target state.
Argo CD offers many features, the most notable being:

- Support for multiple templating and config management tools (Kustomize, Helm, Ksonnet, Jsonnet, plain YAML).
- SSO integration (OIDC, OAuth2, LDAP, SAML 2.0, GitHub, GitLab, Microsoft, LinkedIn).

In this tutorial, you will learn to:
- Use Helm to provision Argo CD to your DOKS cluster.
- Keep your Kubernetes cluster applications state synchronized with a Git repository (using GitOps principles).

After finishing all the steps from this tutorial, you should have a DOKS cluster with Argo CD deployed and managing your application deployments.

The below diagram shows how Argo CD manages Helm applications hosted using a Git repository:
To complete this tutorial, you will need:

- A DOKS cluster that you have access to. Please follow the Starter Kit DOKS Setup Guide to find out more.
- A clone of the Starter Kit repository.
- Kubectl, for Kubernetes interaction. Follow these instructions to connect to your cluster with kubectl and doctl.
- The argocd CLI, to interact with Argo CD using the command line interface.
- Helm, to manage Argo CD releases and upgrades (optional, but recommended in general for production systems).

Argo CD uses the Application core concept to manage application deployment and lifecycle. Inside an Argo CD application manifest, you define the Git repository hosting your application definitions, as well as the corresponding Kubernetes cluster to deploy applications to. In other words, an Argo CD application defines the relationship between a source repository and a Kubernetes cluster. It’s a very concise and scalable design, where you can associate multiple sources (Git repositories) and corresponding Kubernetes clusters.
A major benefit of using applications is that you don’t need to deploy Argo to each cluster individually. You can use a dedicated cluster for Argo, and deploy applications to all clusters at once from a single place. This way, you avoid Argo CD downtime or loss, in case other environments have issues or get decommissioned.
On top of that, you can group similar applications into a Project. Projects permit logical grouping of applications and associated roles/permissions when working with multiple teams. When not specified, each new application belongs to the default project. The default project is created automatically, and it doesn’t have any restrictions. The default project can be modified, but not deleted.
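Projects are themselves defined through an AppProject CRD. As an illustrative sketch (the project name, repository URL, and namespace below are hypothetical), a project that restricts a team to a single source repository and a single destination namespace could look like:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a
  namespace: argocd
spec:
  description: Applications owned by team A
  # Only this repository may be used as an application source.
  sourceRepos:
    - https://github.com/your-org/team-a-apps.git
  # Applications in this project may only deploy here.
  destinations:
    - server: https://kubernetes.default.svc
      namespace: team-a
```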
Starter Kit uses the default project for a quick jump start with Argo CD. Later, you will learn how to create an Application for each Starter Kit component, and use Helm charts as the application source. Argo CD is not limited to Helm sources only; you can also leverage the power of Kustomize, Ksonnet, Jsonnet, etc. Please take a look at the application sources page for more details.
Although you can use the graphical UI (web interface) of Argo CD to create applications, Starter Kit relies on the GitOps declarative way, via YAML manifests. Each YAML configuration acts as a recipe for an application, so it can be stored in a Git repository. This means you can always recreate your Argo CD setup if you re-create your environment or move to another cluster. More importantly, you can perform audits and track each change via the Git history. It’s also best practice to keep the Argo CD configuration files in a separate Git repository from the one used for your application development. You can read the best practices page on the Argo CD official documentation website for more information on the topic.
Important note: By default, Argo CD doesn’t automatically synchronize your new applications. When an Argo CD Application is first created, its state is OutOfSync, meaning the Git repository state pointed to by the Application doesn’t match the Kubernetes cluster state. Creating a new Argo CD Application doesn’t trigger an automatic deployment on the target cluster.
To enable automatic synchronization and deletion of orphaned resources (pruning), you need to create a syncPolicy. You can also configure Argo CD to automatically revert manual changes made via kubectl. You can read more about auto sync policies on the official documentation website.
A typical Application CRD using a Git repository source looks like the below:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myrepo/my-apps.git
    targetRevision: HEAD
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: my-apps
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Explanations for the above configuration:

- spec.project: Tells Argo CD what project to use for the application (default in this example).
- spec.source.repoURL: Git repository URL used for synchronizing cluster state.
- spec.source.targetRevision: Git repository revision used for synchronization (can be a branch or tag name as well).
- spec.source.path: Git repository path where source files (YAML manifests) are stored.
- spec.destination.server: Target Kubernetes cluster address. Usually points to https://kubernetes.default.svc if Argo CD is using the same cluster where it’s deployed.
- spec.destination.namespace: Kubernetes namespace to use for your application.
- spec.syncPolicy.automated: Enables automated syncing of applications in your cluster with a Git repository.
- spec.syncPolicy.automated.prune: Specifies whether to delete resources from the cluster that are no longer found in the sources, as part of the automated sync.
- spec.syncPolicy.automated.selfHeal: Specifies whether to revert resources back to their desired state upon manual modification in the cluster (e.g. via kubectl).

You can also use Helm repositories as a source for installing applications in your cluster. A typical Application CRD using a Helm repository source looks like the below (similar to the Git repository example, except a Helm chart repository is used instead):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sealed-secrets
  namespace: argocd
spec:
  project: default
  source:
    chart: sealed-secrets
    repoURL: https://bitnami-labs.github.io/sealed-secrets
    targetRevision: 2.4.0
    helm:
      releaseName: sealed-secrets
      values: |
        replicaCount: 2
  destination:
    server: "https://kubernetes.default.svc"
    namespace: kubeseal
Explanations for the above configuration:

- spec.source.chart: Helm chart to use as a source for the application.
- spec.source.repoURL: Helm chart repository URL.
- spec.source.targetRevision: Helm chart version to use for the application.
- spec.source.helm.releaseName: Helm release name to create in your Kubernetes cluster.
- spec.source.helm.values: Specifies Helm values to be passed to helm template, typically defined as a block.
- spec.destination.server: Target Kubernetes cluster address. Usually points to https://kubernetes.default.svc if Argo CD is using the same cluster where it’s deployed.
- spec.destination.namespace: Kubernetes namespace to use for your application.

Please go ahead and read more about Argo CD core concepts on the official documentation website. Next, you’re going to discover the available install options to deploy Argo CD in your Kubernetes cluster.
Argo CD can be installed either using kubectl or Helm:

- Using kubectl and an install manifest file. This method doesn’t offer direct control over various install parameters. If you’re not very familiar with Helm-based installations, this is the most straightforward option to start with.
- Using Helm. This method is recommended for HA (High Availability) setups, and if Argo CD is used in production.

Next, depending on the features you want available, you have two options:
- Multi-Tenant mode. This type of installation is typically used to service multiple application developer teams in the organization and is maintained by a platform team. The end-users can access Argo CD via the API server, using the Web UI or the argocd CLI.
- Core only mode. This is a trimmed-down install, without the graphical user interface, API server, SSO, etc., and installs the lightweight (non-HA) version of each component.

Starter Kit uses the Multi-Tenant and High Availability modes to install Argo CD in your DOKS cluster. This way, you will have a reliable setup and can explore all the available features, including the user interface. Please visit the install methods documentation page for more information on the topic.
This method requires kubectl, and it’s a two-step process:

- Create the argocd namespace, to deploy Argo CD itself.
- Apply the install manifest using kubectl.

Please run the below commands in order:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/ha/install.yaml
Now, please go ahead and check if the installation was successful. First, check if all Argo CD deployments are healthy:
kubectl get deployments -n argocd
The output looks similar to the following (check the READY column; all Pods must be running):
Output
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
argocd-applicationset-controller   1/1     1            1           51s
argocd-dex-server                  1/1     1            1           50s
argocd-notifications-controller    1/1     1            1           50s
argocd-redis-ha-haproxy            3/3     3            3           50s
argocd-repo-server                 2/2     2            2           49s
argocd-server                      2/2     2            2           49s
The Argo CD server deployment must have a replicaset minimum value of 2 for HA mode. If, for some reason, some deployments are not healthy, please check the Kubernetes events and logs for the affected component Pods.
This method requires Helm to be installed on your local machine. Starter Kit provides a ready-to-use Helm values file to start with, which installs Argo CD in HA mode (without autoscaling).
Please follow the below steps to complete the Helm-based installation.

First, clone the Starter Kit repository and change into the directory:

git clone https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers.git
cd Kubernetes-Starter-Kit-Developers

Next, add the argo Helm repository and update your local chart index:

helm repo add argo https://argoproj.github.io/argo-helm
helm repo update argo

Then, search the argo Helm repository for available charts to install:

helm search repo argo
The output looks similar to:
Output
NAME           CHART VERSION   APP VERSION   DESCRIPTION
argo/argo      1.0.0           v2.12.5       A Helm chart for Argo Workflows
argo/argo-cd   4.9.4           v2.4.0        A Helm chart for Argo CD, a declarative, GitOps...
...
Then, inspect the Argo CD Helm values file provided in the Starter Kit repository at 14-continuous-delivery-using-gitops/assets/manifests/argocd/argocd-values-v4.9.4.yaml. Finally, install Argo CD using Helm:
HELM_CHART_VERSION="4.9.4"
helm install argocd argo/argo-cd --version "${HELM_CHART_VERSION}" \
--namespace argocd \
--create-namespace \
-f "14-continuous-delivery-using-gitops/assets/manifests/argocd/argocd-values-v${HELM_CHART_VERSION}.yaml"
Note: A specific version of the Helm chart is used. In this case, 4.9.4 is picked, which maps to the 2.4.0 version of the application. It’s good practice in general to lock to a specific chart version. This helps produce predictable results, and allows version control via Git.
Now, check if the Helm release was successful:
helm ls -n argocd
The output looks similar to the following (the STATUS column value should be set to deployed):
Output
NAME     NAMESPACE   REVISION   UPDATED                                 STATUS     CHART           APP VERSION
argocd   argocd      1          2022-03-23 11:22:48.486199 +0200 EET    deployed   argo-cd-4.9.4   v2.4.0
Finally, verify Argo CD application deployment status:
kubectl get deployments -n argocd
The output looks similar to the following (check the READY column; all Pods must be running):
Output
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
argocd-applicationset-controller   1/1     1            1           2m9s
argocd-dex-server                  1/1     1            1           2m9s
argocd-notifications-controller    1/1     1            1           2m9s
argocd-redis-ha-haproxy            3/3     3            3           2m9s
argocd-repo-server                 2/2     2            2           2m9s
argocd-server                      2/2     2            2           2m9s
The Argo CD server deployment must have a replicaset minimum value of 2 for HA mode. If, for some reason, some deployments are not healthy, please check the Kubernetes events and logs for the affected component Pods.
You can also find more information about the Argo CD Helm chart by accessing the community-maintained repository.
Next, you’re going to learn how to access and explore the main features of the graphical user interface provided by Argo CD.
One of the neat features that Argo CD has to offer is the web interface, used to perform various administrative tasks and view application deployment status. You can create applications using the graphical user interface and interact with Argo CD in various ways. Another important feature is the ability to inspect each application state and access Kubernetes events, as well as your application logs. On top of that, Argo CD provides a visual representation of all Kubernetes objects (replicasets, pods, etc) each application deployment is using.
The web interface can be accessed by port-forwarding the argocd-server Kubernetes service. Please run the below command in a shell terminal:
kubectl port-forward svc/argocd-server -n argocd 8080:443
Now, open a web browser and navigate to localhost:8080 (please ignore the invalid TLS certificate for now). You will be greeted with the Argo CD login page. The default administrator username is admin, and the password is generated randomly at installation time. You can fetch it by running the below command:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
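The trailing base64 -d in that pipeline decodes the value, since Kubernetes stores Secret data base64-encoded. A quick local illustration of what that step does (using a made-up password, not a real Secret):

```shell
# Kubernetes Secret values are stored base64-encoded; this mimics the
# decode step from the command above with a made-up password.
encoded=$(printf 'my-generated-password' | base64)
echo "encoded: $encoded"

# Decoding recovers the original value, just as "| base64 -d" does:
printf '%s' "$encoded" | base64 -d
echo
```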
After logging in, you will be redirected to the applications dashboard page. From here, you can view, create, or manage applications via the UI (a YAML editor is also available), as well as perform sync or refresh operations:
If you click on any application tile, a visual representation of all involved objects is also shown:
In the next section, you can manage your application projects, repositories and clusters:
Finally, the user info section shows the available users and allows for administrator password updates:
You can play around and explore each section and sub-section in detail to see all the available features. Next, you will learn how to use the CLI counterpart, named argocd.
Argo CD allows the same set of features to be used either via the web interface or via the CLI. To use the argocd CLI, open a separate shell window and type argocd without any arguments. By default, it will display the available commands and options:
argocd
Usage:
argocd [flags]
argocd [command]
Available Commands:
account Manage account settings
admin Contains a set of commands useful for Argo CD administrators and requires direct Kubernetes access
app Manage applications
cert Manage repository certificates and SSH known hosts entries
cluster Manage cluster credentials
completion output shell completion code for the specified shell (bash or zsh)
context Switch between contexts
gpg Manage GPG keys used for signature verification
help Help about any command
...
For any command or sub-command, you can invoke the corresponding help page using the following pattern: argocd <command/subcommand> --help. For example, to check what options are available for the app command:
argocd app --help
The output looks similar to:
Manage Applications
Usage:
argocd app [flags]
argocd app [command]
Examples:
# List all the applications.
argocd app list
# Get the details of a application
argocd app get my-app
...
Please go ahead and explore other commands/subcommands as well to see all the available options. Next, you will learn how to bootstrap your first Argo CD application, which will automatically deploy all Starter Kit components.
On a fresh install, Argo CD doesn't know where to sync your applications from, or what Git repositories are available for sourcing application manifests. So, the first step is to perform a one-time operation called bootstrapping. You can perform all the operations presented in this section using either the argocd CLI or the graphical user interface.
There are multiple ways of bootstrapping your cluster (e.g., via scripts), but usually, Argo CD users make use of the app of apps pattern. This means you start by creating a parent application using the argocd CLI (or the web interface), which in turn will reference and bootstrap the rest of the applications in your Kubernetes cluster.
First, you need to prepare your Git repository to use a consistent layout. In the following example, you will create a Git repository layout structure similar to:
clusters
└── dev
└── helm
├── cert-manager-v1.8.0.yaml
├── nginx-v4.1.3.yaml
├── prometheus-stack-v35.5.1.yaml
├── sealed-secrets-v2.4.0.yaml
└── velero-v2.29.7.yaml
Please open a terminal and follow the below steps to create the layout for your Git repository:
Clone your Git repository (make sure to replace the <> placeholders accordingly):
git clone <YOUR_ARGOCD_GIT_REPOSITORY_ADDRESS>
Change into your local copy (make sure to replace the <> placeholders accordingly):
cd <YOUR_GIT_REPO_LOCAL_COPY_DIRECTORY>
Create the clusters/dev/helm directory structure:
mkdir -p clusters/dev/helm
Set the Helm chart versions used by the Starter Kit:
CERT_MANAGER_CHART_VERSION="1.8.0"
NGINX_CHART_VERSION="4.1.3"
PROMETHEUS_CHART_VERSION="35.5.1"
SEALED_SECRETS_CHART_VERSION="2.4.0"
VELERO_CHART_VERSION="2.29.7"
Download each Starter Kit application manifest:
curl "https://raw.githubusercontent.com/digitalocean/Kubernetes-Starter-Kit-Developers/main/14-continuous-delivery-using-gitops/assets/manifests/argocd/applications/helm/cert-manager-v${CERT_MANAGER_CHART_VERSION}.yaml" > "clusters/dev/helm/cert-manager-v${CERT_MANAGER_CHART_VERSION}.yaml"
curl "https://raw.githubusercontent.com/digitalocean/Kubernetes-Starter-Kit-Developers/main/14-continuous-delivery-using-gitops/assets/manifests/argocd/applications/helm/nginx-v${NGINX_CHART_VERSION}.yaml" > "clusters/dev/helm/nginx-v${NGINX_CHART_VERSION}.yaml"
curl "https://raw.githubusercontent.com/digitalocean/Kubernetes-Starter-Kit-Developers/main/14-continuous-delivery-using-gitops/assets/manifests/argocd/applications/helm/prometheus-stack-v${PROMETHEUS_CHART_VERSION}.yaml" > "clusters/dev/helm/prometheus-stack-v${PROMETHEUS_CHART_VERSION}.yaml"
curl "https://raw.githubusercontent.com/digitalocean/Kubernetes-Starter-Kit-Developers/main/14-continuous-delivery-using-gitops/assets/manifests/argocd/applications/helm/sealed-secrets-v${SEALED_SECRETS_CHART_VERSION}.yaml" > "clusters/dev/helm/sealed-secrets-v${SEALED_SECRETS_CHART_VERSION}.yaml"
curl "https://raw.githubusercontent.com/digitalocean/Kubernetes-Starter-Kit-Developers/main/14-continuous-delivery-using-gitops/assets/manifests/argocd/applications/helm/velero-v${VELERO_CHART_VERSION}.yaml" > "clusters/dev/helm/velero-v${VELERO_CHART_VERSION}.yaml"
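The five curl commands above differ only in the chart name and version, so they can be consolidated into a loop. This is an optional sketch: chart_path is a hypothetical helper, and the download is gated behind a DOWNLOAD=1 flag so the script can be dry-run safely.

```shell
#!/usr/bin/env bash
BASE_URL="https://raw.githubusercontent.com/digitalocean/Kubernetes-Starter-Kit-Developers/main/14-continuous-delivery-using-gitops/assets/manifests/argocd/applications/helm"
CHARTS="cert-manager-v1.8.0 nginx-v4.1.3 prometheus-stack-v35.5.1 sealed-secrets-v2.4.0 velero-v2.29.7"

# chart_path maps a chart manifest name to its place in the repo layout
chart_path() {
  printf 'clusters/dev/helm/%s.yaml' "$1"
}

mkdir -p clusters/dev/helm
for chart in $CHARTS; do
  if [ "${DOWNLOAD:-0}" = "1" ]; then
    # Fetch the manifest into the Git repository layout
    curl -fsSL "${BASE_URL}/${chart}.yaml" > "$(chart_path "$chart")"
  fi
done
```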
Next, you will create the parent application deployment and let Argo CD synchronize all Starter Kit applications automatically to your DOKS cluster.
In this section, you will learn how to use the argocd CLI to create and make use of the app of apps pattern to deploy all Starter Kit components in your DOKS cluster. The below picture illustrates the main concept:
First, you need to port-forward the Argo CD main server on your local machine in a separate terminal window:
kubectl port-forward svc/argocd-server -n argocd 8080:443
Next, Argo CD API server access is required for the argocd CLI to work. Using another terminal window, authenticate the argocd client with your Argo CD server instance:
ADMIN_USER="admin"
ADMIN_PASSWD="$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)"
argocd login localhost:8080 --username $ADMIN_USER --password $ADMIN_PASSWD --insecure
The output looks similar to:
'admin:login' logged in successfully
Context 'localhost:8080' updated
Then, run the below command to create the starter-kit-apps parent application (make sure to replace the <> placeholders accordingly):
argocd app create starter-kit-apps \
--dest-namespace argocd \
--dest-server https://kubernetes.default.svc \
--repo https://github.com/<YOUR_GITHUB_USERNAME>/<YOUR_ARGOCD_GITHUB_REPO_NAME>.git \
--path clusters/dev/helm
The above command will create a new Argo CD application named starter-kit-apps in the argocd namespace, configured to:
Target the local cluster where Argo CD is deployed (--dest-server is set to https://kubernetes.default.svc).
Use the Git repository passed via the --repo argument to synchronize your cluster.
Scan for application manifests in the clusters/dev/helm directory (the --path argument).
Next, you need to sync the starter-kit-apps application (remember that Argo CD doesn't sync anything by default unless specified):
argocd app sync starter-kit-apps
The output looks similar to:
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH ...
2022-03-23T17:39:38+02:00 argoproj.io Application argocd sealed-secrets-controller OutOfSync Missing ...
2022-03-23T17:39:38+02:00 argoproj.io Application argocd velero OutOfSync Missing ...
2022-03-23T17:39:38+02:00 argoproj.io Application argocd ingress-nginx OutOfSync Missing ...
...
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
argoproj.io Application argocd sealed-secrets-controller Synced application.argoproj.io/sealed-secrets-controller created
argoproj.io Application argocd ingress-nginx Synced application.argoproj.io/ingress-nginx created
argoproj.io Application argocd kube-prometheus-stack Synced application.argoproj.io/kube-prometheus-stack created
argoproj.io Application argocd velero Synced application.argoproj.io/velero created
argoproj.io Application argocd cert-manager Synced application.argoproj.io/cert-manager created
After the above command finishes, you should see a new application present in the main dashboard of your Argo CD server. Please open a web browser and navigate to http://localhost:8080. Then select the Applications tab, and click on the starter-kit-apps tile (notice the app of apps pattern by looking at the composition graph):
You can also inspect the new applications via the CLI:
argocd app list
The output looks similar to:
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY ...
ingress-nginx https://kubernetes.default.svc ingress-nginx default OutOfSync Missing Auto-Prune ...
cert-manager https://kubernetes.default.svc cert-manager default OutOfSync Missing Auto-Prune ...
kube-prometheus-stack https://kubernetes.default.svc monitoring default OutOfSync Missing Auto-Prune ...
sealed-secrets-controller https://kubernetes.default.svc sealed-secrets default OutOfSync Missing Auto-Prune ...
starter-kit-apps https://kubernetes.default.svc argocd default Synced Healthy <none> ...
velero https://kubernetes.default.svc velero default OutOfSync Missing Auto-Prune ...
The starter-kit-apps parent application will appear as in sync, but the child apps will be out of sync. Next, you can sync everything either via the web interface or via the CLI:
argocd app sync -l argocd.argoproj.io/instance=starter-kit-apps
The sync operation may take a while to complete (even up to 5-10 minutes), depending on the complexity and number of Kubernetes objects of all applications being deployed.
After a while, please list all applications again:
argocd app list
The output looks similar to (notice that all applications are synced now):
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS ...
ingress-nginx https://kubernetes.default.svc ingress-nginx default Synced Healthy Auto-Prune <none> ...
cert-manager https://kubernetes.default.svc cert-manager default Synced Healthy Auto-Prune <none> ...
kube-prometheus-stack https://kubernetes.default.svc monitoring default Synced Healthy Auto-Prune <none> ...
sealed-secrets-controller https://kubernetes.default.svc sealed-secrets default Synced Healthy Auto-Prune <none> ...
starter-kit-apps https://kubernetes.default.svc argocd default Synced Healthy <none> <none> ...
velero https://kubernetes.default.svc velero default OutOfSync Missing Auto-Prune SyncError ...
The Velero application deployment will fail and is left in the SyncError state on purpose, as an exercise for the reader to get familiar with diagnosing application problems in Argo CD. Please consult the Hints section below to see how to diagnose Argo CD application issues.
Bootstrapping the parent application is a one-time operation. On subsequent Git changes for each application, Argo CD will detect the drift and apply the required changes. Argo CD uses a polling mechanism by default to detect changes in your Git repository. The default refresh interval is set to 3 minutes. Instead of relying on a polling mechanism, you can also leverage the power of Git webhooks. Please visit the official documentation website to learn how to create and configure Argo CD to use Git webhooks.
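For reference, the polling interval is controlled by the timeout.reconciliation key of the argocd-cm ConfigMap. The sketch below raises it to 5 minutes (the 300s value is just an example); note that the repo server and application controller may need a restart to pick up the change:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # Poll Git every 5 minutes instead of the default 3 minutes
  timeout.reconciliation: 300s
```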
Hints: If desired, you can configure the parent application to be synced automatically (and also enable self-healing and automatic pruning) by using the following command (don't forget to replace the <> placeholders accordingly):
argocd app create starter-kit-apps \
--dest-namespace argocd \
--dest-server https://kubernetes.default.svc \
--repo https://github.com/<YOUR_GITHUB_USERNAME>/<YOUR_ARGOCD_GITHUB_REPO_NAME>.git \
--path clusters/dev/helm \
--sync-policy automated \
--auto-prune \
--self-heal
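Since Argo CD applications are themselves Kubernetes custom resources, the same parent application (with the automated sync policy shown above) can also be declared as a manifest and applied with kubectl. This is a sketch; replace the <> placeholders accordingly:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: starter-kit-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<YOUR_GITHUB_USERNAME>/<YOUR_ARGOCD_GITHUB_REPO_NAME>.git
    path: clusters/dev/helm
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      # Mirror the --auto-prune and --self-heal flags of the command above
      prune: true
      selfHeal: true
```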
In case of sync errors, you can inspect the state of any application via the CLI (argocd app get <application_name>):
argocd app get velero
The output looks similar to:
Name:               velero
Project: default
Server: https://kubernetes.default.svc
Namespace: velero
URL: https://argocd.example.com/applications/velero
Repo: https://vmware-tanzu.github.io/helm-charts
Target: 2.27.3
Path:
SyncWindow: Sync Allowed
Sync Policy: Automated (Prune)
Sync Status: OutOfSync from 2.27.3
Health Status: Missing
CONDITION MESSAGE LAST TRANSITION
SyncError Failed sync attempt to 2.27.3: one or more objects failed to apply (dry run) (retried 5 times). 2022-03-24 12:14:21 +0200 EET
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
velero.io VolumeSnapshotLocation velero default Failed SyncFailed PostSync error validating data: ValidationError(VolumeSnapshotLocation.spec): missing required field "provider" in io.velero.v1.VolumeSnapshotLocation.spec
velero.io BackupStorageLocation velero default Failed SyncFailed PostSync error validating data: [ValidationError(BackupStorageLocation.spec.objectStorage): missing required field "bucket" in io.velero.v1.BackupStorageLocation.spec.objectStorage, ValidationError(BackupStorageLocation.spec): missing required field "provider" in io.velero.v1.BackupStorageLocation.spec]
...
Next, you will learn how to use the app of apps pattern and perform the same steps via the Argo CD graphical user interface.
In this section, you will learn how to use the Argo CD web interface to create and make use of the app of apps pattern to deploy all Starter Kit components in your DOKS cluster. The below picture illustrates the main concept:
As the above diagram shows, bootstrapping a new application via the web interface is very similar to the CLI counterpart. The only difference is that you will navigate between different panels/windows and use point-and-click operations. Behind the scenes, Argo CD will create the required application CRDs and apply changes to your Kubernetes cluster.
First, please open a web browser and log in to the Argo CD web console. The default user name is admin, and the default password is obtained via:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
Once logged in, you will be redirected to the applications dashboard page (on a fresh install, the dashboard is empty). Next, click on the Create Application button. A new panel pops up asking for application details:
Please fill in each field appropriately:
Application Name: The new application name (e.g. starter-kit-apps).
Project: The project name this application belongs to (when using Argo CD for the first time, you can use default).
Sync Policy and Sync Options: Configures the sync policy and options (e.g. Manual, Automatic, number of retries, interval between retries, etc.).
Repository URL: Your GitHub repository URL address - e.g. https://github.com/<YOUR_GITHUB_USERNAME>/<YOUR_ARGOCD_GITHUB_REPO_NAME>.git.
Path: The GitHub repository directory path where application manifests are stored (e.g. clusters/dev/helm).
Cluster URL: The target Kubernetes cluster to synchronize with your GitHub repository (e.g. https://kubernetes.default.svc for the local cluster where Argo CD is deployed).
Namespace: The target Kubernetes cluster namespace to use for Argo CD applications (usually argocd).
After filling in all application details, click on the Create button at the top. A new application tile shows up on the dashboard page:
If you click on the application tile, you can observe the app of apps pattern by looking at the composition graph:
If you look at the above picture, you will notice that all applications are marked as OutOfSync. The next step is to trigger a sync operation on the parent application. Then, all child applications will be synced as well. Please go ahead and press the Sync button on the parent application tile. A new panel pops up on the right side (notice that all child apps are selected down below):
Leave the default values, then press the Synchronize button at the top and watch how Argo CD cascades the sync operation to all applications:
The Velero application deployment will fail and is left in the SyncError state on purpose, as an exercise for the reader to get familiar with diagnosing application problems in Argo CD. Please consult the Hints section below to see how to diagnose Argo CD application issues.
If everything goes well, all applications should have a green border, and their status should be Healthy and Synced. The bootstrapping process is a one-time operation. On subsequent Git changes for each application, Argo CD will detect the drift and apply the required changes. Argo CD uses a polling mechanism by default to detect changes in your Git repository. The default refresh interval is set to 3 minutes. Instead of relying on a polling mechanism, you can also leverage the power of Git webhooks. Please visit the official documentation website to learn how to create and configure Argo CD to use Git webhooks.
Hints:
If desired, you can configure the parent application to be synced automatically by setting the SYNC POLICY field value to Automatic. To enable self-healing and automatic pruning, tick the PRUNE RESOURCES and SELF HEAL checkboxes:
In case of any synchronization failures, you can always inspect the Kubernetes events for the application in question. Using the web interface, you can navigate to the affected application tile:
Then, click on the Sync failed message link flagged in red, in the LAST SYNC RESULT section of the application page header. A new panel pops up, showing useful information about why the sync operation failed:
In the next section, you will learn how to manage multiple applications at once using a single CRD - the ApplicationSet.
Application Sets are another powerful feature offered by Argo CD. The ApplicationSet Controller is a sub-project of Argo CD, which adds application automation via templated definitions. This feature helps you avoid repetition in your application manifests (following the DRY principle).
The ApplicationSet controller is installed alongside Argo CD (within the same namespace), and it automatically generates Argo CD Applications based on the contents of a new ApplicationSet Custom Resource (CR).
Note:
Starting with version 2.3.x of Argo CD, you don't need to install the ApplicationSet Controller separately because it's part of the Argo CD main installation. The Starter Kit uses version >= 2.3.1, so you don't need to change anything.
The main idea of an ApplicationSet is based on having a list of values acting as a generator, and a template which gets populated by the input list values. For each item in the list, a new application template is generated in sequence. Basically, you define one ApplicationSet CRD and let it generate as many Argo CD Application CRDs as you want, based on the input values. Thus, instead of creating and dealing with multiple Application manifests, you manage everything via a single manifest - the ApplicationSet.
This concept also simplifies the management of multi-cluster and multi-environment setups by using parameterized application templates. Application Sets include other generators as well, besides List Generators:
A typical ApplicationSet CRD using a List Generator looks like below:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app
spec:
  generators:
    - list:
        elements:
          - cluster: dev
            url: https://kubernetes.dev.svc
          - cluster: qa
            url: https://kubernetes.qa.svc
          - cluster: prod
            url: https://kubernetes.prod.svc
  template:
    metadata:
      name: '{{cluster}}-app'
    spec:
      project: default
      source:
        repoURL: https://github.com/myrepo/my-applicationset.git
        targetRevision: HEAD
        path: clusters/{{cluster}}/my-apps
      destination:
        server: '{{url}}'
        namespace: argocd
Applying the above ApplicationSet to your Kubernetes cluster will render three Argo CD applications. For example, the dev environment application is rendered as shown below:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dev-app
spec:
  project: default
  source:
    repoURL: https://github.com/myrepo/my-applicationset.git
    targetRevision: HEAD
    path: clusters/dev/my-apps
  destination:
    server: https://kubernetes.dev.svc
    namespace: argocd
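To make the substitution concrete, here is a toy shell stand-in for what the List generator does per element. It is only an illustration; the real controller uses a proper template engine, and the render helper name is made up:

```shell
#!/usr/bin/env bash
# Substitute the {{cluster}} and {{url}} placeholders of a template
# string with the values of one generator list element.
render() {
  printf '%s' "$1" | sed -e "s|{{cluster}}|$2|g" -e "s|{{url}}|$3|g"
}

# One element of the list generator: cluster=dev, url=https://kubernetes.dev.svc
render 'clusters/{{cluster}}/my-apps' dev 'https://kubernetes.dev.svc'; echo
# prints: clusters/dev/my-apps
```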
Template engines are very powerful in nature and offer lots of possibilities. Please visit the main ApplicationSet documentation website to learn more about this feature.
Uninstalling (or deleting) applications managed by Argo CD is accomplished by deleting the corresponding manifest from the Git repository source. In the case of applications created using the app of apps pattern, you only need to delete the parent app (either via the CLI or the web interface). All child applications will then be deleted as part of the process.
To delete the starter-kit-apps parent application (including child apps) using the argocd CLI:
argocd app delete starter-kit-apps
If you want to ensure that child apps and all of their resources are deleted when the parent app is deleted, make sure to add the appropriate finalizer to your Application definition:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  ...
Notice the new finalizers field added in the metadata section of the custom resource manifest. When you delete the application, the associated Kubernetes objects get deleted as well.
In this tutorial, you learned the automation basics for a GitOps-based setup using Argo CD. Then, you configured Argo CD applications to perform Helm releases for you automatically and deploy all the Starter Kit components in a GitOps fashion. You also learned how to bootstrap new Argo CD applications by using the app of apps pattern, as well as how to use ApplicationSets to simplify and speed up the creation of parameterized applications.
To estimate the resource usage of the Starter Kit, please follow the Starter Kit Resource Usage chapter.
I am running software built on DO App Platform. I have two private GitHub repos that I am installing on the App Platform manually. The problem is, App Platform does not recognize the changes made to my GitHub packages. When I make a change locally and update the version of the package, then push it to GitHub, I can see the new files on GitHub.com. But when I try to pip install or upgrade the packages on DO App Platform, it does not pull the new files. It updates the version (for example, it increases from 0.2.4 to 0.2.5) but the new files are not there. This happens whether I change the content of existing files (whether it is a .py file or .xlsx) or even if I add new files that weren't there before. I tried installing it with --no-cache-dir as well, but still the same result. The only solution I have found is deleting the private repo entirely, then forcing a redeploy and re-installing it, but this is not ideal.
I want to simply update the package as this is the entire point of having a github package, being able to easily implement updates. It is really strange because I can see the changes I made on Github.com, but app platform does not work properly and does something completely different than what is available on Github.com. It only updates the version of the package and doesn’t recognize the changes made.
Does anyone have any idea how to solve this?
Is there any possibility to do this without using Docker Hub/DO Registry, external GitLab runners, etc.?
GitLab is an open-source software development program primarily used to host Git repositories. It provides features such as version control, issue tracking, code review, and more. GitLab is also flexible when it comes to your preferred hosting method. It can be hosted within your own infrastructure and can even be deployed as an internal repository for your development team or publicly for users, as well as a way for contributors to host their own projects. There are also features such as security and monitoring available with the GitLab Enterprise Edition.
This tutorial will guide you through spinning up a DigitalOcean Droplet with GitLab Enterprise pre-installed using the DigitalOcean GitLab Enterprise Edition 1-Click App. After creating your Droplet, you’ll learn how to log in to your GitLab server, navigate the web interface, and some common commands. Since this 1-Click Droplet is based on a standard Ubuntu 20.04 Droplet, you can read more of our GitLab and Git-relevant tutorials after you finish this guide.
The GitLab Enterprise Edition 1-Click comes pre-installed with the latest GitLab EE version on an Ubuntu 20.04 Droplet. As of this writing, that is GitLab version 15.8.
To get your GitLab Enterprise Edition 1-Click up and running on your browser, you need a domain name. You can use the DNS quickstart guide to learn how to set one up using DigitalOcean DNS.
To create your GitLab Enterprise Edition 1-Click Droplet, first, locate it in our list of Marketplace Applications and select the GitLab Enterprise Edition application. This will take you to the DigitalOcean Control Panel.
To get started creating your Droplet, press the Create GitLab Enterprise Edition Droplet button:
If you are not already logged into your DigitalOcean account, you will need to log in to proceed. If you don’t have an account, you will be prompted to sign up.
Next, you’ll be taken to the Create Droplets page. Here you can customize your server settings before creating your GitLab Enterprise Edition Droplet. Our documentation on How to Create your First Droplet describes all the choices you need to make in detail, but the following sections discuss key settings to consider.
Your image will already be set to the Marketplace tab with GitLab Enterprise Edition Latest on Ubuntu 20.04 selected. If it’s not set, switch to the Marketplace tab and search for GitLab Enterprise Edition in the keyword search box. When properly set, your Control Panel will be similar to the following:
Once the GitLab Enterprise Edition image is properly selected, you can accept the defaults or adjust settings according to your use case. We typically recommend the following changes:
To avoid potential latency, it is recommended that you select a datacenter region closest to your user base. In some regions, we have more than one datacenter. There is no difference between these same region datacenters (e.g., SFO3 and SFO2).
Select a plan that works for you. Keep in mind that you can resize your Droplet depending on your needs. To run a Droplet with GitLab Enterprise Edition, a minimum memory size of 4GB of RAM is required to run the application successfully. Additionally, the recommended minimum for CPU hardware is 4 cores. This can support up to 500 users. To learn more, read the system requirements from the official documentation.
When choosing an authentication method, the SSH Key option is recommended rather than Password for your Droplet. Authentication using SSH Keys is typically more secure. Keep in mind that when opening GitLab Enterprise Edition for the first time in your browser, you will be directed to a login and password screen. You can use the default account username, root, to log in. We will discuss where to find the initial root password information in a later step.
Adding improved metrics monitoring and alerting to your Droplet helps you follow your Droplet resource usage over time. You may also want to consider enabling automated backups. If you prefer, you can come back later to enable backup functionality on Droplets you’ve already created.
Provide your Droplet with an identifying name that you will remember, such as “gitlab-ee-droplet-1” or naming it after the application you will be using it for.
After making all your selections, press Create Droplet button at the bottom of the Control Panel screen. Once the Droplet is created, its IP address will be displayed:
This IP address is important for connecting to your Droplet, as well as for any future configuration you may want to do. When you hover over the IP address, you can copy it to your clipboard.
Droplets created through the 1-Click Marketplace also come with additional resources that you can access by pressing the Get started link:
This toggles a new panel, where you can gain additional information that is specific to your chosen 1-Click. This includes an overview, further steps to get started using your Droplet, and links to relevant tutorials from our Community site. There are also useful links for where to get support and find more resources for GitLab Enterprise Edition. You can also get support by reviewing the official GitLab documentation:
Next, you will access your GitLab Enterprise Edition Droplet via the terminal using the SSH authentication method you set up earlier. Please note that it can take up to 10 minutes for your Droplet to be functional.
Once you’ve spun up your GitLab Enterprise Edition Droplet, you’ll need to connect to your Droplet via SSH. That means you’ll connect to the server from the command line. If you haven’t used a terminal program like SSH or PuTTY before, check out How To Connect To Your Droplet with SSH.
When you’re ready, open a terminal on your computer and log into your Droplet as root via SSH with the following command, substituting the IP address with your Droplet’s IP address:
- ssh root@your_server_ip
After you log in, double-check that the firewall settings are set up to allow HTTP/HTTPS access via ports 80 and 443 by running the following command:
- ufw status
Status: active
To Action From
-- ------ ----
22/tcp LIMIT Anywhere
80/tcp ALLOW Anywhere
443/tcp ALLOW Anywhere
22/tcp (v6) LIMIT Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
This output lists all of the allowed and limited port connections. Since ports 80 and 443 are already listed, you do not need to add this rule. However, if you did set up SSH authentication, you will want to ensure that this rule is added. You can do that with the following command:
- ufw allow OpenSSH
Afterwards, you can check the status again with ufw status to ensure it's been added to the list. Now that you've successfully signed in and verified your firewall settings, in the next step you'll edit GitLab's configuration file.
If you want GitLab Enterprise to redirect to your domain, edit the configuration file with that information and then run a reconfiguration command. To get started, open up the following file with your preferred text editor. This example uses nano:
- nano /etc/gitlab/gitlab.rb
Once you're in the file, search for the external_url line. This will likely appear similar to the following, with http and your IP address information:
…
## GitLab URL
##! URL on which GitLab will be reachable.
##! For more details on configuring external_url see:
##! https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the>
##!
##! Note: During installation/upgrades, the value of the environment variable
##! EXTERNAL_URL will be used to populate/replace this value.
##! On AWS EC2 instances, we also attempt to fetch the public hostname/IP
##! address from AWS. For more details, see:
##! https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retri>
external_url 'http://your_ip_address'
…
Now, update this line by changing http to https and writing your domain name in place of the IP address, so that you'll automatically be redirected to your site and protected by a Let's Encrypt certificate:
…
## GitLab URL
##! URL on which GitLab will be reachable.
##! For more details on configuring external_url see:
##! https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the>
##!
##! Note: During installation/upgrades, the value of the environment variable
##! EXTERNAL_URL will be used to populate/replace this value.
##! On AWS EC2 instances, we also attempt to fetch the public hostname/IP
##! address from AWS. For more details, see:
##! https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retri>
external_url 'https://your_domain'
…
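If you prefer to script this change rather than edit the file by hand, a sed one-liner can rewrite the external_url line. Below is a sketch: set_external_url is a made-up helper, demonstrated against a scratch copy rather than the live /etc/gitlab/gitlab.rb:

```shell
#!/usr/bin/env bash
# Replace the whole external_url line with the https:// + domain form.
set_external_url() {
  file="$1"
  domain="$2"
  sed -i "s|^external_url .*|external_url 'https://${domain}'|" "$file"
}

# Demonstrate against a temporary copy, not the live file:
tmp="$(mktemp)"
echo "external_url 'http://203.0.113.10'" > "$tmp"
set_external_url "$tmp" "your_domain"
cat "$tmp"
# prints: external_url 'https://your_domain'
```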
Next, you need to add your email for Let's Encrypt. This is useful in case there is an issue with your domain, so that Let's Encrypt can contact you. Find the letsencrypt['contact_emails'] line, uncomment it by deleting the hash symbol #, and then add your information:
…
letsencrypt['contact_emails'] = ['sammy@example.com']
…
After you've made these changes, save and close the file. If you used nano, you can do this by pressing CTRL + X, Y, and then ENTER.
Then run the following command to reconfigure GitLab and account for these updates:
- gitlab-ctl reconfigure
This initializes GitLab and uses the information you updated about your server. It may take a few minutes and is an automated process with no prompts. This also configures a Let’s Encrypt certificate for your domain.
Now that you’ve updated your configuration file, next you’ll complete the configuration setup on your browser.
As mentioned earlier, you need to use your GitLab Enterprise root password for your initial login in the browser. To find these credentials, open up the following file:
- sudo nano /etc/gitlab/initial_root_password
Once you've opened this file, locate the Password line to find that information:
# WARNING: This value is valid only in the following conditions
# 1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the firs$
# 2. Password hasn't been changed manually, either via UI or via command line.
#
# If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
Password: your_password
# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.
Save this password information for later. When you go to the login page from your browser, make sure to have the following credentials available:
Username: root
Password: the value from the /etc/gitlab/initial_root_password file
Begin the initial configuration by going to the domain name for your GitLab server in the browser:
https://your_domain
The login page will load, where you can enter your username and the password from the file you just opened:
After inputting your information and pressing Sign in, you’ll be taken to the GitLab dashboard:
Once you’re in, navigate to the icon in the far top right and click it to open the drop-down menu. From there, select Edit profile:
Then select Password from the list on the left panel. Update your password to something secure to replace the GitLab password that was generated for you. When you’re finished, press Save password:
This will return you to the original login screen with a note stating that the password was updated. Enter your username and new password to access your GitLab server again:
If you want to update your username, you can do so by navigating to the Edit profile option again and then selecting Account:
Here, you can update your username in the Change username section. You’re given the username root by default when creating the account, so it’s recommended you change it for security reasons. This does not remove any administrative privileges; it only changes the name. When you’ve made the change, press Update username. You will be asked to confirm this change before it is implemented. Keep in mind the updates to both your password and username next time you log in:
There are many other settings you can adjust for your profile as well, such as updating your avatar photo, current status, name, pronouns, pronunciation, email, and more. You can also add SSH keys to your account, renew your Let’s Encrypt certificates, and restrict or disable public sign-ups. Learn more from our tutorial on How To Install and Configure GitLab on Ubuntu 20.04.
Your GitLab Enterprise Edition 1-Click Droplet is now ready to go. To learn more about the settings and features of GitLab, check out our tutorial on How To Install and Configure GitLab on Ubuntu 20.04. For more general information on Git, GitHub, and open source, check out our Introduction to GitHub and Open-Source Projects series.
gitlab-ci.yml
"Deploy to DigitalOcean":
  image: digitalocean/doctl:latest
  stage: deploy
  needs:
    - job: "Test Docker build"
  script:
    - doctl auth init
    - /app/doctl apps create-deployment XXX
  only:
    - master
Also I added the DIGITALOCEAN_ACCESS_TOKEN
variable with my personal token.
But I got an error:
Successfully extracted cache
Executing "step_script" stage of the job script 00:00
Using docker image sha256:16287ffaf3b998b10c7348803f2372d99b9daadc1f5ded9aab957f6b2b34a7e3 for digitalocean/doctl:latest with digest digitalocean/doctl@sha256:d612c781afeb2720393e32312ef19af545b5c69b933a02bcd83e3e2bda874802 ...
Error: unknown command "sh" for "doctl"
Run 'doctl --help' for usage.
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: exit code 255
How can I trigger DigitalOcean from my CI pipeline? I don’t want to use the deploy_on_push
feature, because it ignores my pipelines and deploys the app even when my CI/CD is red.
GraphQL is a query language for APIs that consists of a schema definition language and a query language. It allows API consumers to fetch only the data they need, which supports flexible querying. GraphQL enables developers to evolve the API while meeting the different needs of multiple clients, for example iOS, Android, and web variants of an app. Moreover, the GraphQL schema adds a degree of type safety to the API while also serving as a form of documentation for your API.
Prisma is an open-source database toolkit with three main tools:
Prisma Client: an auto-generated and type-safe query builder for Node.js and TypeScript.
Prisma Migrate: a declarative data modeling and migration system.
Prisma Studio: a GUI to view and edit data in your database.
Prisma facilitates working with databases for application developers who want to focus on implementing value-adding features instead of spending time on complex database workflows (such as schema migrations or writing complicated SQL queries).
In this tutorial, you will use GraphQL and Prisma in combination as their responsibilities complement each other. GraphQL provides a flexible interface to your data for use in clients, such as frontends and mobile apps—GraphQL isn’t tied to any specific database. This is where Prisma comes in to handle the interaction with the database where your data will be stored.
DigitalOcean’s App Platform provides a seamless way to deploy applications and provision databases in the cloud without worrying about infrastructure. This reduces the operational overhead of running an application in the cloud; especially with the ability to create a managed PostgreSQL database with daily backups and automated failover. App Platform has native Node.js support streamlining deployment.
You’ll build a GraphQL API for a blogging application in JavaScript using Node.js. You will first use Apollo Server to build the GraphQL API backed by in-memory data structures. You will then deploy the API to the DigitalOcean App Platform. Finally you will use Prisma to replace the in-memory storage and persist the data in a PostgreSQL database and deploy the application again.
At the end of the tutorial, you will have a Node.js GraphQL API deployed to DigitalOcean, which handles GraphQL requests sent over HTTP and performs CRUD operations against the PostgreSQL database.
You can find the code for this project in the DigitalOcean Community repository.
Before you begin this guide you’ll need the following:
Basic familiarity with JavaScript, Node.js, GraphQL, and PostgreSQL is helpful, but not strictly required for this tutorial.
In this step, you will set up a Node.js project with npm and install the dependencies apollo-server
and graphql
. This project will be the foundation for the GraphQL API that you’ll build and deploy throughout this tutorial.
First, create a new directory for your project:
- mkdir prisma-graphql
Next, navigate into the directory and initialize an empty npm project:
- cd prisma-graphql
- npm init --yes
This command creates a minimal package.json
file that is used as the configuration file for your npm project.
You will receive the following output:
OutputWrote to /Users/your_username/workspace/prisma-graphql/package.json:
{
"name": "prisma-graphql",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
You’re now ready to install the dependencies for your GraphQL API.
Install the necessary dependencies:
- npm install apollo-server graphql --save
This command installs two packages as dependencies in your project:
apollo-server
is the HTTP library that you use to define how GraphQL requests are resolved and how to fetch data.
graphql
is the library you’ll use to build the GraphQL schema.
You’ve created your project and installed the dependencies. In the next step, you will define the GraphQL schema.
In this step, you will define the GraphQL schema and corresponding resolvers. The schema will define the operations that the API can handle. The resolvers will define the logic for handling those requests using in-memory data structures, which you will replace with database queries in the next step.
First, create a new directory called src
that will contain your source files:
- mkdir src
Then run the following command to create the file for the schema:
- nano src/schema.js
Add the following code to the file:
const { gql } = require('apollo-server')
const typeDefs = gql`
type Post {
content: String
id: ID!
published: Boolean!
title: String!
}
type Query {
feed: [Post!]!
post(id: ID!): Post
}
type Mutation {
createDraft(content: String, title: String!): Post!
publish(id: ID!): Post
}
`
You define the GraphQL schema using the gql
tagged template. A schema is a collection of type definitions (hence typeDefs
) that together define the shape of queries that can be executed against your API. This will convert the GraphQL schema string into the format that Apollo expects.
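As a sketch of how tagged templates work in general, the tag function receives the literal's string parts plus any interpolated values. The toy tag below is illustrative only, not Apollo's gql implementation:

```javascript
// Toy tagged-template function: it joins the string parts and any
// interpolated values back together. gql goes further and parses the
// resulting string into a GraphQL AST, but it is invoked the same way.
const tag = (strings, ...values) =>
  strings.reduce((acc, part, i) => acc + part + (values[i] ?? ''), '')

const typeDefs = tag`
type Post {
  id: ID!
}
`

console.log(typeDefs.includes('type Post')) // true
```

Because gql receives the raw schema string this way, you could also interpolate shared type definitions into the literal if your schema were split across files.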
The schema introduces three types:
Post
defines the type for a post in your blogging app and contains four fields where each field is followed by its type: for example, String
.Query
defines the feed
query which returns multiple posts as denoted by the square brackets and the post
query which accepts a single argument and returns a single Post
.Mutation
defines the createDraft
mutation for creating a draft Post
and the publish
mutation which accepts an id
and returns a Post
.
Every GraphQL API has a query type and may or may not have a mutation type. These types are the same as a regular object type, but they are special because they define the entry point of every GraphQL query.
Next, add the posts
array to the src/schema.js
file, below the typeDefs
variable:
...
const posts = [
{
id: 1,
title: 'Subscribe to GraphQL Weekly for community news ',
content: 'https://graphqlweekly.com/',
published: true,
},
{
id: 2,
title: 'Follow DigitalOcean on Twitter',
content: 'https://twitter.com/digitalocean',
published: true,
},
{
id: 3,
title: 'What is GraphQL?',
content: 'GraphQL is a query language for APIs',
published: false,
},
]
You define the posts
array with three pre-defined posts. The structure of each post
object matches the Post
type you defined in the schema. This array holds the posts that will be served by the API. In a subsequent step, you will replace the array once the database and Prisma Client are introduced.
Next, define the resolvers
object by adding the following code below the posts
array you just defined:
...
const resolvers = {
Query: {
feed: (parent, args) => {
return posts.filter((post) => post.published)
},
post: (parent, args) => {
return posts.find((post) => post.id === Number(args.id))
},
},
Mutation: {
createDraft: (parent, args) => {
posts.push({
id: posts.length + 1,
title: args.title,
content: args.content,
published: false,
})
return posts[posts.length - 1]
},
publish: (parent, args) => {
const postToPublish = posts.find((post) => post.id === Number(args.id))
postToPublish.published = true
return postToPublish
},
},
Post: {
content: (parent) => parent.content,
id: (parent) => parent.id,
published: (parent) => parent.published,
title: (parent) => parent.title,
},
}
module.exports = {
resolvers,
typeDefs,
}
You define the resolvers following the same structure as the GraphQL schema. Every field in the schema’s types has a corresponding resolver function whose responsibility is to return the data for that field in your schema. For example, the Query.feed()
resolver will return the published posts by filtering the posts
array.
Resolver functions receive four arguments:
parent
is the return value of the previous resolver in the resolver chain. For top-level resolvers, the parent is undefined
, because no previous resolver is called. For example, when making a feed
query, the Query.feed()
resolver will be called with parent
’s value undefined
and then the resolvers of Post
will be called where parent
is the object returned from the feed
resolver.
args
carries the parameters for the query. For example, the post
query will receive the id
of the post to be fetched.
context
is an object that gets passed through the resolver chain that each resolver can write to and read from, which allows the resolvers to share information.
info
is an AST representation of the query or mutation. You can read more about the details in this Prisma series on GraphQL Basics.
Since context
and info
are not necessary in these resolvers, only parent
and args
are defined.
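To make the resolver chain concrete, here is a minimal sketch that invokes resolvers by hand. Apollo Server does this wiring for you; the data and names here are illustrative:

```javascript
const posts = [{ id: 1, title: 'Hello', published: true }]

const resolvers = {
  Query: {
    // Top-level resolver: called with parent === undefined.
    feed: (parent, args) => posts.filter((post) => post.published),
  },
  Post: {
    // Field resolver: parent is one of the objects returned by feed.
    title: (parent) => parent.title,
  },
}

// Simulating the resolver chain by hand:
const feedResult = resolvers.Query.feed(undefined, {})
console.log(resolvers.Post.title(feedResult[0])) // Hello
```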
Save and exit the file once you’re done.
Note: When a resolver returns the same field as the resolver’s name, like the four resolvers for Post
, Apollo Server will automatically resolve those. This means you don’t have to explicitly define those resolvers.
- Post: {
- content: (parent) => parent.content,
- id: (parent) => parent.id,
- published: (parent) => parent.published,
- title: (parent) => parent.title,
- },
You export the schema and resolvers so that you can use them in the next step to instantiate the server with Apollo Server.
In this step, you will create the GraphQL server with Apollo Server and bind it to a port so that the server can accept connections.
First, run the following command to create the file for the server:
- nano src/server.js
Add the following code to the file:
const { ApolloServer } = require('apollo-server')
const { resolvers, typeDefs } = require('./schema')
const port = process.env.PORT || 8080
new ApolloServer({ resolvers, typeDefs }).listen({ port }, () =>
console.log(`Server ready at: http://localhost:${port}`),
)
Here, you instantiate the server and pass the schema and resolvers from the previous step.
The port the server will bind to is set from the PORT
environment variable. If not set, it will default to 8080
. The PORT
environment variable will be automatically set by App Platform and will ensure your server can accept connections once deployed.
Save and exit the file.
Your GraphQL API is ready to run. Start the server with the following command:
- node src/server.js
You will receive the following output:
OutputServer ready at: http://localhost:8080
It’s considered good practice to add a start script to your package.json
file so that the entry point to your server is clear. Doing so will allow App Platform to start the server once deployed.
First, stop the server by pressing CTRL+C
. Then, to add a start script, open the package.json
file:
- nano package.json
Add the highlighted text to the "scripts"
object in package.json
:
{
"name": "prisma-graphql",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "node ./src/server.js"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"apollo-server": "^3.11.1",
"graphql": "^16.6.0"
}
}
Save and exit the file.
Now you can start the server with the following command:
- npm start
You will receive the following output:
Output> prisma-graphql@1.0.0 start
> node ./src/server.js
Server ready at: http://localhost:8080
To test the GraphQL API, open the URL from the output, which will lead you to the Apollo GraphQL Studio. Click the Query Your Server button on the home page to interact with the IDE.
The Apollo GraphQL Studio is an IDE where you can test the API by sending queries and mutations.
For example, to test the feed
query, which only returns published posts, enter the following query into the left side of the IDE and run it by pressing the Run or play button:
query {
feed {
id
title
content
published
}
}
The response will display a title of Subscribe to GraphQL Weekly
with its URL and Follow DigitalOcean on Twitter
with its URL.
Click on the +
button on the bar above your previous query to create a new tab. Then, to test the createDraft
mutation, enter the following mutation:
mutation {
createDraft(title: "Deploying a GraphQL API to DigitalOcean") {
id
title
content
published
}
}
After you submit the mutation using the play button, you will receive a response with Deploying a GraphQL API to DigitalOcean
within the title
field as part of the response.
Note: You can choose which fields to return from the mutation by adding or removing fields within the curly braces ({}
) following createDraft
. For example, if you wanted to only return the id
and title
you could send the following mutation:
mutation {
createDraft(title: "Deploying a GraphQL API to DigitalOcean") {
id
title
}
}
You have successfully created and tested the GraphQL server. In the next step, you will create a GitHub repository for the project.
In this step, you will create a GitHub repository for your project and push your changes so that the GraphQL API can be automatically deployed from GitHub to App Platform.
First, stop the development server by pressing CTRL+C
. Then initialize a repository from the prisma-graphql
folder using the following command:
- git init
Next, use the following two commands to commit the code to the repository:
- git add src package-lock.json package.json
- git commit -m 'Initial commit'
Now that the changes have been committed to your local repository, you will create a repository in GitHub and push your changes.
Go to GitHub to create a new repository. For consistency, name the repository prisma-graphql and then click Create repository.
After the repository is created, push the changes with the following commands, which includes renaming the default local branch to main
:
- git remote add origin git@github.com:your_github_username/prisma-graphql.git
- git branch -M main
- git push --set-upstream origin main
You have successfully committed and pushed the changes to GitHub. Next, you will connect the repository to App Platform and deploy the GraphQL API.
In this step, you will connect the GitHub repository you just created to DigitalOcean and then configure App Platform so that the GraphQL API can be automatically deployed when you push changes to GitHub.
First, visit the App Platform page in the DigitalOcean Cloud Console and click on the Create App button.
You will see service provider options with GitHub as the default.
If you have not configured DigitalOcean to your GitHub account, click on the Manage Access button to be redirected to GitHub.
You can select all repositories or specific repositories. Click Install & Authorize, then you will be redirected back to the DigitalOcean App Platform creation.
Choose the repository your_github_username/prisma-graphql
and click Next. Autodeploy is selected by default; you can leave it selected so that your app redeploys automatically when you push changes.
On the Resources page, click the Edit Plan button to choose a suitable plan. Select the Basic plan with the plan size you need (this tutorial will use the $5.00/mo - Basic plan).
Then press the Back button to return to the creation page.
If you press the pen icon next to your project name, you can customize the configuration for the app. The Application Settings page will open:
Ensure that the Run Command is set as npm start
. By default, App Platform will set the HTTP port to 8080
, which is the same port that you’ve configured your GraphQL server to bind to.
When you have finished customizing the configuration, press the Back button to return to the setup page. Then, press the Next button to move to the Environment Variables page.
Your environment variables will not need further configuration at the moment. Click the Next button.
On the Info page, you can adjust App Details and Location. Edit your app information to choose the region you want to deploy your app to. Confirm your app details by pressing the Save button. Then, click the Next button.
You will be able to review all of your selected options on the Review page. Then click Create Resources. You will be redirected to the app page, where you will see the progress of the initial deployment.
Once the build finishes, you will get a notification indicating that your app is deployed.
You can now visit your deployed GraphQL API at the URL below the app’s name in your DigitalOcean Console. It will be linked via the ondigitalocean.app
subdomain. When you open the URL, the Apollo GraphQL Studio will load the same way as it did in Step 3 of this tutorial.
You have successfully connected your repository to App Platform and deployed your GraphQL API. Next you will evolve your app and replace the in-memory data of the GraphQL API with a database.
So far, you have built a GraphQL API using the in-memory posts
array to store data. If your server restarts, all changes to the data will be lost. To ensure that your data is safely persisted, you will replace the posts
array with a PostgreSQL database and use Prisma to access the data.
In this step, you will install the Prisma CLI, create your initial Prisma schema (the main configuration file for your Prisma setup, containing your database schema), set up PostgreSQL locally with Docker, and connect Prisma to it.
Begin by installing the Prisma CLI with the following command:
- npm install --save-dev prisma
The Prisma CLI will help with database workflows such as running database migrations and generating Prisma Client.
Next, you’ll set up your PostgreSQL database using Docker. Create a new Docker Compose file with the following command:
- nano docker-compose.yml
Add the following code to the newly created file:
version: '3.8'
services:
postgres:
image: postgres:14
restart: always
environment:
- POSTGRES_USER=test-user
- POSTGRES_PASSWORD=test-password
volumes:
- postgres:/var/lib/postgresql/data
ports:
- '5432:5432'
volumes:
postgres:
This Docker Compose configuration file is responsible for starting the official PostgreSQL Docker image on your machine. The POSTGRES_USER
and POSTGRES_PASSWORD
environment variables set the credentials for the superuser (a user with admin privileges). You will also use these credentials to connect Prisma to the database. Replace the test-user
and test-password
with your user credentials.
Finally, you define a volume where PostgreSQL will store its data and bind the 5432
port on your machine to the same port in the Docker container.
Save and exit the file.
With this setup in place, you can launch the PostgreSQL database server with the following command:
- docker-compose up -d
It may take a few minutes to load.
You can verify that the database server is running with the following command:
- docker ps
This command will output something similar to:
OutputCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
198f9431bf73   postgres:14   "docker-entrypoint.s…"   45 seconds ago   Up 11 seconds   0.0.0.0:5432->5432/tcp   prisma-graphql_postgres_1
With the PostgreSQL container running, you can now create your Prisma setup. Run the following command from the Prisma CLI:
- npx prisma init
As a best practice, all invocations of the Prisma CLI should be prefixed with npx
to ensure it uses your local installation.
An output like this will print:
Output✔ Your Prisma schema was created at prisma/schema.prisma
You can now open it in your favorite editor.
Next steps:
1. Set the DATABASE_URL in the .env file to point to your existing database. If your database has no tables yet, read https://pris.ly/d/getting-started
2. Set the provider of the datasource block in schema.prisma to match your database: postgresql, mysql, sqlite, sqlserver, mongodb or cockroachdb.
3. Run prisma db pull to turn your database schema into a Prisma schema.
4. Run prisma generate to generate the Prisma Client. You can then start querying your database.
More information in our documentation:
https://pris.ly/d/getting-started
After running the command, the Prisma CLI generates a dotenv file named .env
in the project folder to define your database connection URL, as well as a new nested folder called prisma
that contains the schema.prisma
file. This is the main configuration file for your Prisma project (in which you will include your data model).
To make sure Prisma knows about the location of your database, open the .env
file:
- nano .env
Adjust the DATABASE_URL
environment variable with your user credentials:
DATABASE_URL="postgresql://test-user:test-password@localhost:5432/my-blog?schema=public"
You use the database credentials test-user
and test-password
, which are specified in the Docker Compose file. If you modified the credentials in your Docker Compose file, be sure to update this line to match the credentials in that file. To learn more about the format of the connection URL, visit the Prisma docs.
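For reference, the PostgreSQL connection URL that Prisma expects follows this general shape, where the uppercase parts are placeholders:

```
postgresql://USER:PASSWORD@HOST:PORT/DATABASE?schema=SCHEMA
```

Here, test-user and test-password fill the USER and PASSWORD slots, the database server runs on localhost:5432, and my-blog is the database that Prisma will use within the public schema.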
You have successfully started PostgreSQL and configured Prisma using the Prisma schema. In the next step, you will define your data model for the blog and use Prisma Migrate to create the database schema.
Now you will define your data model in the Prisma schema file you’ve just created. This data model will then be mapped to the database with Prisma Migrate, which will generate and send the SQL statements for creating the tables that correspond to your data model.
Since you’re building a blog, the main entities of the application will be users and posts. In this step, you will define a Post
model with a similar structure to the Post
type in the GraphQL schema. In a later step, you will evolve the app and add a User
model.
Note: The GraphQL API can be seen as an abstraction layer for your database. When building a GraphQL API, it’s common for the GraphQL schema to closely resemble your database schema. However, as an abstraction, the two schemas won’t necessarily have the same structure, thereby allowing you to control which data you want to expose over the API as some data might be considered sensitive or irrelevant for the API layer.
Prisma uses its own data modeling language to define the shape of your application data.
Open your schema.prisma
file from the project’s folder where package.json
is located:
- nano prisma/schema.prisma
Note: You can verify from the terminal in which folder you are with the pwd
command, which will output the current working directory. Additionally, listing the files with the ls
command will help you navigate your file system.
Add the following model definitions to it:
...
model Post {
id Int @default(autoincrement()) @id
title String
content String?
published Boolean @default(false)
}
You define a model called Post
with a number of fields. The model will be mapped to a database table; the fields represent the individual columns.
The id
field has the following field attributes:
@default(autoincrement())
sets an auto-incrementing default value for the column.@id
sets the column as the primary key for the table.
Save and exit the file.
With the model in place, you can now create the corresponding table in the database using Prisma Migrate with the migrate dev
command to create and run the migration files.
In your terminal, run the following command:
- npx prisma migrate dev --name init --skip-generate
This command creates a new migration on your file system and runs it against the database to create the database schema. The --name init
flag specifies the name of the migration (will be used to name the migration folder that’s created on your file system). The --skip-generate
flag skips generating Prisma Client (this will be done in the next step).
This command will output something similar to:
OutputEnvironment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "my-blog", schema "public" at "localhost:5432"
PostgreSQL database my-blog created at localhost:5432
Applying migration `20201201110111_init`
The following migration(s) have been created and applied from new schema changes:
migrations/
└─ 20201201110111_init/
└─ migration.sql
Your database is now in sync with your schema.
Your prisma/migrations
directory is now populated with the SQL migration file. This approach allows you to track changes to the database schema and create the same database schema in production.
Note: If you’ve already used Prisma Migrate with the my-blog
database and there is an inconsistency between the migrations in the prisma/migration
folder and the database schema, you will be prompted to reset the database with the following output:
Output? We need to reset the PostgreSQL database "my-blog" at "localhost:5432". All data will be lost.
Do you want to continue? › (y/N)
You can resolve this by entering y
which will reset the database. Beware that this will cause all data in the database to be lost.
You’ve now created your database schema. In the next step, you will install Prisma Client and use it in your GraphQL resolvers.
Prisma Client is an auto-generated and type-safe Object Relational Mapper (ORM) that you can use to programmatically read and write data in a database from a Node.js application. In this step, you’ll install Prisma Client in your project.
In your terminal, install the Prisma Client npm package:
- npm install @prisma/client
Note: Prisma Client provides rich auto-completion by generating code based on your Prisma schema into the node_modules
folder. To generate the code, you use the npx prisma generate
command. This is typically done after you create and run a new migration. On the first install, however, this is not necessary as it will automatically be generated for you in a postinstall
hook.
After creating the database and GraphQL schema and installing Prisma Client, you will now use Prisma Client in the GraphQL resolvers to read and write data in the database. You’ll do this by replacing the posts
array, which you’ve used so far to hold your data.
Create and open the following file:
- nano src/db.js
Add the following lines to the new file:
const { PrismaClient } = require('@prisma/client')
module.exports = {
prisma: new PrismaClient(),
}
This code imports Prisma Client, creates an instance of it, and exports the instance that you’ll use in your resolvers.
Now save and close the src/db.js
file.
Next, you will import the prisma
instance into src/schema.js
. To do so, open src/schema.js
:
- nano src/schema.js
Add this line to import prisma
from ./db
at the top of the file:
const { prisma } = require('./db')
...
Then remove the posts
array by deleting the lines that are marked with the hyphen symbol (-
):
...
-const posts = [
- {
- id: 1,
- title: 'Subscribe to GraphQL Weekly for community news ',
- content: 'https://graphqlweekly.com/',
- published: true,
- },
- {
- id: 2,
- title: 'Follow DigitalOcean on Twitter',
- content: 'https://twitter.com/digitalocean',
- published: true,
- },
- {
- id: 3,
- title: 'What is GraphQL?',
- content: 'GraphQL is a query language for APIs',
- published: false,
- },
-]
...
You will next update the Query
resolvers to fetch published posts from the database. First, delete the existing lines in the resolvers.Query
, then update the object by adding the highlighted lines:
...
const resolvers = {
Query: {
feed: (parent, args) => {
return prisma.post.findMany({
where: { published: true },
})
},
post: (parent, args) => {
return prisma.post.findUnique({
where: { id: Number(args.id) },
})
},
},
...
Here, you use two Prisma Client queries:
findMany
fetches posts whose published
field is true.
.findUnique
fetches a single post whose id
field equals the id
GraphQL argument.
Per the GraphQL specification, the ID
type is serialized the same way as a String
. Therefore you convert to a Number
because the id
in the Prisma schema is an Int
.
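You can observe this coercion with plain JavaScript; the args object below is a stand-in for what a resolver receives:

```javascript
// GraphQL serializes ID values as strings, so an id of 1 arrives as '1'.
const args = { id: '1' }

console.log(typeof args.id)        // string
console.log(Number(args.id) === 1) // true
```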
Next, you will update the Mutation
resolver to save and update posts in the database. First, delete the code in the resolvers.Mutation
object and the Number(args.id)
lines, then add the highlighted lines:
const resolvers = {
...
Mutation: {
createDraft: (parent, args) => {
return prisma.post.create({
data: {
title: args.title,
content: args.content,
},
})
},
publish: (parent, args) => {
return prisma.post.update({
where: {
id: Number(args.id),
},
data: {
published: true,
},
})
},
},
}
You’re using two Prisma Client queries:
create
to create a Post
record.
update
to update the published field of the Post
record whose id
matches the one in the query argument.
Finally, remove the resolvers.Post
object:
...
-Post: {
- content: (parent) => parent.content,
- id: (parent) => parent.id,
- published: (parent) => parent.published,
- title: (parent) => parent.title,
-},
...
Your schema.js
should now read as follows:
const { gql } = require('apollo-server')
const { prisma } = require('./db')
const typeDefs = gql`
type Post {
content: String
id: ID!
published: Boolean!
title: String!
}
type Query {
feed: [Post!]!
post(id: ID!): Post
}
type Mutation {
createDraft(content: String, title: String!): Post!
publish(id: ID!): Post
}
`
const resolvers = {
Query: {
feed: (parent, args) => {
return prisma.post.findMany({
where: { published: true },
})
},
post: (parent, args) => {
return prisma.post.findUnique({
where: { id: Number(args.id) },
})
},
},
Mutation: {
createDraft: (parent, args) => {
return prisma.post.create({
data: {
title: args.title,
content: args.content,
},
})
},
publish: (parent, args) => {
return prisma.post.update({
where: {
id: Number(args.id),
},
data: {
published: true,
},
})
},
},
}
module.exports = {
resolvers,
typeDefs,
}
Save and close the file.
Now that you’ve updated the resolvers to use Prisma Client, start the server to test the flow of data between the GraphQL API and the database with the following command:
- npm start
Once again, you will receive the following output:
Output
Server ready at: http://localhost:8080
Open the Apollo GraphQL Studio at the address from the output and test the GraphQL API using the same queries from Step 3.
Now you will commit your changes so that the changes can be deployed to App Platform. Stop the Apollo server with CTRL+C.
To avoid committing the node_modules
folder and the .env
file, check the .gitignore
file in your project folder:
- cat .gitignore
Confirm that your .gitignore
file contains these lines:
node_modules
.env
If it doesn’t, update the file to match.
Save and exit the file.
Then run the following two commands to commit the changes:
- git add .
- git commit -m 'Add Prisma'
You will receive an output response like this:
Output
[main 1646d07] Add Prisma
9 files changed, 157 insertions(+), 39 deletions(-)
create mode 100644 .gitignore
create mode 100644 docker-compose.yml
create mode 100644 prisma/migrations/20201201110111_init/migration.sql
create mode 100644 prisma/migrations/migration_lock.toml
create mode 100644 prisma/schema.prisma
create mode 100644 src/db.js
You have updated your GraphQL resolvers to use the Prisma Client to make queries and mutations to your database, then committed all the changes to your remote repository. Next you’ll add a PostgreSQL database to your app in App Platform.
In this step, you will add a PostgreSQL database to your app in App Platform. Then you will use Prisma Migrate to run the migration against it so that the deployed database schema matches your local database.
First, visit the App Platform console and select the prisma-graphql project you created in Step 5.
Next, click the Create button and select Create/Attach Database from the dropdown menu, which will lead you to a page to configure your database.
Choose Dev Database, select a name, and click Create and Attach.
You will be redirected back to the Project view, where there will be a progress bar for creating the database.
After the database has been created, you will run the database migration against the production database on DigitalOcean from your local machine. To run the migration, you will need the connection string of the hosted database.
To get it, click on the db icon in the Components section of the Settings tab.
Under Connection Details, press View and then select Connection String in the dropdown menu. Copy the database URL, which will have the following structure:
postgresql://db:some_password@unique_identifier.db.ondigitalocean.com:25060/db?sslmode=require
Then, run the following command in your terminal, ensuring that you set your_db_connection_string
to the URL you just copied:
- DATABASE_URL="your_db_connection_string" npx prisma migrate deploy
This command will run the migrations against the live database with Prisma Migrate.
If the migration succeeds, you will receive the following output:
Output
PostgreSQL database db created at unique_identifier.db.ondigitalocean.com:25060
Prisma Migrate applied the following migration(s):
migrations/
└─ 20201201110111_init/
└─ migration.sql
You have successfully migrated the production database on DigitalOcean, which now matches the Prisma schema.
Note: If you receive the following error message:
Output
Error: P1001: Can't reach database server at `unique_identifier.db.ondigitalocean.com`:`25060`
Navigate to the database dashboard to confirm that your database has been provisioned. You may need to update or disable the Trusted Sources for the database.
Now you can deploy your app by pushing your Git changes with the following command:
- git push
Note: App Platform will make the DATABASE_URL
environment variable available to your application at run-time. Prisma Client will use that environment variable with the env("DATABASE_URL")
in the datasource
block of your Prisma schema.
This will automatically trigger a build. If you open the App Platform console, you will see a deployment progress bar.
Once the deployment succeeds, you will receive a Deployment went live message.
You’ve now backed up your deployed GraphQL API with a database. Open the Live App, which will lead you to the Apollo GraphQL Studio. Test the GraphQL API using the same queries from Step 3.
In the final step you will evolve the GraphQL API by adding the User
model.
Your GraphQL API for blogging has a single entity named Post
. In this step, you’ll evolve the API by defining a new model in the Prisma schema and adapting the GraphQL schema to make use of the new model. You will introduce a User
model with a one-to-many relation to the Post
model, which will allow you to represent the author of posts and associate multiple posts to each user. Then you will evolve the GraphQL schema to allow the creation of users and association of posts with users through the API.
First, open the Prisma schema:
- nano prisma/schema.prisma
Add the highlighted lines to add the authorId
field to the Post
model and to define the User
model:
...
model Post {
id Int @id @default(autoincrement())
title String
content String?
published Boolean @default(false)
author User? @relation(fields: [authorId], references: [id])
authorId Int?
}
model User {
id Int @id @default(autoincrement())
email String @unique
name String
posts Post[]
}
You’ve added the following items to the Prisma schema:
Two relation fields: author and posts. Relation fields define connections between models at the Prisma level and do not exist in the database. These fields are used to generate the Prisma Client and to access relations with Prisma Client.
The authorId field, which is referenced by the @relation attribute. Prisma will create a foreign key in the database to connect Post and User.
The User model to represent users.
The author field in the Post model is optional, which allows you to create posts that are not associated with a user.
Save and exit the file once you’re done.
Next, create and apply the migration locally with the following command:
- npx prisma migrate dev --name "add-user"
When the migration succeeds, you will receive the following message:
Output
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "my-blog", schema "public" at "localhost:5432"
Applying migration `20201201123056_add_user`
The following migration(s) have been created and applied from new schema changes:
migrations/
└─ 20201201123056_add_user/
└─ migration.sql
Your database is now in sync with your schema.
✔ Generated Prisma Client (4.6.1 | library) to ./node_modules/@prisma/client in 53ms
The command also generates Prisma Client so that you can make use of the new table and fields.
You will now run the migration against the production database on App Platform so that the database schema is the same as your local database. Run the following command in your terminal and set DATABASE_URL
to the connection URL from App Platform:
- DATABASE_URL="your_db_connection_string" npx prisma migrate deploy
You will receive the following output:
Output
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "db", schema "public" at "unique_identifier.db.ondigitalocean.com:25060"
2 migrations found in prisma/migrations
Applying migration `20201201123056_add_user`
The following migration(s) have been applied:
migrations/
└─ 20201201123056_add_user/
└─ migration.sql
All migrations have been successfully applied.
You will now update the GraphQL schema and resolvers to make use of the updated database schema.
Open the src/schema.js
file:
- nano src/schema.js
Update typeDefs
with the highlighted lines as follows:
...
const typeDefs = gql`
type User {
email: String!
id: ID!
name: String
posts: [Post!]!
}
type Post {
content: String
id: ID!
published: Boolean!
title: String!
author: User
}
type Query {
feed: [Post!]!
post(id: ID!): Post
}
type Mutation {
createUser(data: UserCreateInput!): User!
createDraft(authorEmail: String, content: String, title: String!): Post!
publish(id: ID!): Post
}
input UserCreateInput {
email: String!
name: String
posts: [PostCreateWithoutAuthorInput!]
}
input PostCreateWithoutAuthorInput {
content: String
published: Boolean
title: String!
}
`
...
In this updated code, you add the following changes to the GraphQL schema:
The User type, which returns an array of Post.
The author field to the Post type.
The createUser mutation, which expects the UserCreateInput as its input type.
The PostCreateWithoutAuthorInput input type used in the UserCreateInput input for creating posts as part of the createUser mutation.
The authorEmail optional argument to the createDraft mutation.
With the schema updated, you will now update the resolvers to match the schema.
Update the resolvers
object with the highlighted lines as follows:
...
const resolvers = {
Query: {
feed: (parent, args) => {
return prisma.post.findMany({
where: { published: true },
})
},
post: (parent, args) => {
return prisma.post.findUnique({
where: { id: Number(args.id) },
})
},
},
Mutation: {
createDraft: (parent, args) => {
return prisma.post.create({
data: {
title: args.title,
content: args.content,
published: false,
author: args.authorEmail && {
connect: { email: args.authorEmail },
},
},
})
},
publish: (parent, args) => {
return prisma.post.update({
where: { id: Number(args.id) },
data: {
published: true,
},
})
},
createUser: (parent, args) => {
return prisma.user.create({
data: {
email: args.data.email,
name: args.data.name,
posts: {
create: args.data.posts,
},
},
})
},
},
User: {
posts: (parent, args) => {
return prisma.user
.findUnique({
where: { id: parent.id },
})
.posts()
},
},
Post: {
author: (parent, args) => {
return prisma.post
.findUnique({
where: { id: parent.id },
})
.author()
},
},
}
...
The createDraft
mutation resolver now uses the authorEmail
argument (if passed) to create a relation between the created draft and an existing user.
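The args.authorEmail && { ... } expression relies on JavaScript’s short-circuiting && operator: when authorEmail is missing, the whole expression evaluates to undefined and Prisma skips the relation. The following is a small plain-JavaScript sketch of that pattern, with no Prisma involved; the buildDraftData helper is hypothetical, introduced only to mirror the shape of the resolver’s data object:

```javascript
// Build a data object the way the createDraft resolver does.
// When authorEmail is undefined, `author` evaluates to undefined,
// so no relation is attached.
function buildDraftData(args) {
  return {
    title: args.title,
    content: args.content,
    published: false,
    author: args.authorEmail && {
      connect: { email: args.authorEmail },
    },
  }
}

const withAuthor = buildDraftData({
  title: 'Hello',
  authorEmail: 'sammy@example.com',
})
console.log(withAuthor.author) // { connect: { email: 'sammy@example.com' } }

const withoutAuthor = buildDraftData({ title: 'Hello' })
console.log(withoutAuthor.author) // undefined
```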
The new createUser
mutation resolver creates a user and related posts using nested writes.
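As an illustration of such a nested write, a createUser mutation like the following would create a user and one draft post in a single request (the email, name, and title here are made-up values):

```graphql
mutation {
  createUser(
    data: {
      email: "sammy@example.com"
      name: "Sammy"
      posts: [{ title: "My first draft" }]
    }
  ) {
    id
    posts {
      id
      title
    }
  }
}
```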
The User.posts
and Post.author
resolvers define how to resolve the posts
and author
fields when the User
or Post
are queried. These use Prisma’s Fluent API to fetch the relations.
Save and exit the file.
Start the server to test the GraphQL API:
- npm start
Begin by testing the createUser
resolver with the following GraphQL mutation:
mutation {
createUser(data: { email: "natalia@prisma.io", name: "Natalia" }) {
email
id
}
}
This mutation will create a user.
Next, test the createDraft
resolver with the following mutation:
mutation {
createDraft(
authorEmail: "natalia@prisma.io"
title: "Deploying a GraphQL API to App Platform"
) {
id
title
content
published
author {
id
name
}
}
}
You can fetch the author
whenever the return value of a query is Post
. In this example, the Post.author
resolver will be called.
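For example, assuming the draft you just created received ID 1 (the ID here is hypothetical), the post query from the schema can return it together with its author:

```graphql
query {
  post(id: "1") {
    title
    published
    author {
      name
    }
  }
}
```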
Close the server when finished testing.
Then commit your changes and push to deploy the API:
- git add .
- git commit -m "add user model"
- git push
It may take a few minutes for your updates to deploy.
You have successfully evolved your database schema with Prisma Migrate and exposed the new model in your GraphQL API.
In this article, you built a GraphQL API with Prisma and deployed it to DigitalOcean’s App Platform. You defined a GraphQL schema and resolvers with Apollo Server. You then used Prisma Client in your GraphQL resolvers to persist and query data in the PostgreSQL database. As a next step, you can extend the GraphQL API with a query to fetch individual users and a mutation to connect an existing draft to a user.
If you’re interested in exploring the data in the database, check out Prisma Studio. You can also visit the Prisma documentation to learn about different aspects of Prisma and explore some ready-to-run example projects in the prisma-examples
repository.
You can find the code for this project in the DigitalOcean Community repository.
GitLab is an open-source application primarily used to host Git repositories, with additional development-related features like issue tracking. It is designed to be hosted using your own infrastructure, and provides flexibility in deploying as an internal repository store for your development team, a public way to interface with users, or a means for contributors to host their own projects.
The GitLab project enables you to create a GitLab instance on your own hardware with a minimal installation mechanism. In this guide, you will learn how to install and configure GitLab Community Edition on an Ubuntu 18.04 server.
To follow along with this tutorial, you will need:
An Ubuntu 18.04 server with a non-root sudo user and basic firewall. To set this up, follow our Ubuntu 18.04 initial server setup guide.
The published GitLab hardware requirements recommend using a server with a minimum of:
4 cores for your CPU
4GB of RAM for memory
Although you may be able to get by with substituting some swap space for RAM, it is not recommended. The following examples in this guide will use these minimum resources.
A domain name pointed at your server. This tutorial will use your_domain as an example, but be sure to replace this with your actual domain name.
Before installing GitLab, it is important to install the software that it leverages during installation and on an ongoing basis. The required software can be installed from Ubuntu’s default package repositories.
First, refresh the local package index:
- sudo apt update
Then install the dependencies by entering this command:
- sudo apt install ca-certificates curl openssh-server postfix tzdata perl
You will likely have some of this software installed already. For the postfix
installation, select Internet Site when prompted. On the next screen, enter your server’s domain name to configure how the system will send mail.
Now that you have the dependencies installed, you’re ready to install GitLab.
With the dependencies in place, you can install GitLab. This process leverages an installation script to configure your system with the GitLab repositories.
First, move into the /tmp
directory:
- cd /tmp
Then download the installation script:
- curl -LO https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh
Feel free to examine the downloaded script to ensure that you are comfortable with the actions it will take. You can also find a hosted version of the script on the GitLab installation instructions:
- less /tmp/script.deb.sh
Once you are satisfied with the safety of the script, run the installer:
- sudo bash /tmp/script.deb.sh
The script sets up your server to use the GitLab maintained repositories. This lets you manage GitLab with the same package management tools you use for your other system packages. Once this is complete, you can install the actual GitLab application with apt
:
- sudo apt install gitlab-ce
This installs the necessary components on your system and may take some time to complete.
Before you configure GitLab, you need to ensure that your firewall rules are permissive enough to allow web traffic. If you followed the guide linked in the prerequisites, you will already have a ufw
firewall enabled.
View the current status of your active firewall by running:
- sudo ufw status
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
The current rules allow SSH traffic through, but access to other services is restricted. Since GitLab is a web application, you need to allow HTTP access. Because you will be taking advantage of GitLab’s ability to request and enable a free TLS/SSL certificate from Let’s Encrypt, also allow HTTPS access.
The protocol to port mapping for HTTP and HTTPS are available in the /etc/services
file, so you can allow that traffic in by name. If you didn’t already have OpenSSH traffic enabled, you should allow that traffic:
- sudo ufw allow http
- sudo ufw allow https
- sudo ufw allow OpenSSH
You can check the ufw status
again to ensure that you granted access to at least these two services:
- sudo ufw status
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
80/tcp ALLOW Anywhere
443/tcp ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
This output indicates that the GitLab web interface is now accessible once you configure the application.
Before you can use the application, update the configuration file and run a reconfiguration command. First, open GitLab’s configuration file with your preferred text editor. This example uses nano
:
- sudo nano /etc/gitlab/gitlab.rb
Search for the external_url
configuration line. Update it to match your domain and make sure to change http
to https
to automatically redirect users to the site protected by the Let’s Encrypt certificate:
...
## GitLab URL
##! URL on which GitLab will be reachable.
##! For more details on configuring external_url see:
##! https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the-external-url-for-gitlab
##!
##! Note: During installation/upgrades, the value of the environment variable
##! EXTERNAL_URL will be used to populate/replace this value.
##! On AWS EC2 instances, we also attempt to fetch the public hostname/IP
##! address from AWS. For more details, see:
##! https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
external_url 'https://your_domain'
...
Next, find the letsencrypt['contact_emails']
setting. If you’re using nano
, you can enable a search prompt by pressing CTRL+W
. Write letsencrypt['contact_emails']
into the prompt, then press ENTER
. This setting defines a list of email addresses that the Let’s Encrypt project can use to contact you if there are problems with your domain. It’s recommended to uncomment and fill this out to inform yourself of any issues that may occur:
letsencrypt['contact_emails'] = ['sammy@example.com']
Once you’re done making changes, save and close the file. If you’re using nano
, you can do this by pressing CTRL+X
, then Y
, then ENTER
.
Run the following command to reconfigure GitLab:
- sudo gitlab-ctl reconfigure
This will initialize GitLab using the information it can find about your server. This is a completely automated process, so you will not have to answer any prompts. The process will also configure a Let’s Encrypt certificate for your domain.
With GitLab running, you can perform an initial configuration of the application through the web interface.
Visit the domain name of your GitLab server in your web browser:
https://your_domain
On your first visit, you’ll be greeted with a login page:
GitLab generates an initial secure password for you. It is stored in a file that you can access as an administrative sudo user:
- sudo nano /etc/gitlab/initial_root_password
# WARNING: This value is valid only in the following conditions
# 1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time).
# 2. Password hasn't been changed manually, either via UI or via command line.
#
# If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
Password: YOUR_PASSWORD
# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.
Back on the login page, enter the following:
Username: root
Password: [the password found in /etc/gitlab/initial_root_password]
Enter these values into the fields and click the Sign in button. You will be signed in to the application and taken to a landing page that prompts you to begin adding projects:
You can now fine tune your GitLab instance.
One of the first things you should do after logging in is change your password. To make this change, click on the icon in the upper-right corner of the navigation bar and select Edit Profile:
You’ll then enter a User Settings page. On the left navigation bar, select Password to change your GitLab-generated password to a secure password, then click on the Save password button when you’re finished with your updates:
You’ll be taken back to the login screen with a notification that your password has been changed. Enter your new password to log back into your GitLab instance:
GitLab selects some reasonable defaults, but these are not usually appropriate once you start using the software.
To make the necessary modifications, click on the user icon in the upper-right corner of the navigation bar and select Edit Profile.
You can adjust the Name and Email address from “Administrator” and “admin@example.com” to something more accurate. The name you select will be displayed to other users, while the email will be used for default avatar detection, notifications, Git actions through the interface, and more:
Click on the Update Profile settings button at the bottom when you are finished with your updates. You’ll be prompted to enter your password to confirm changes.
A confirmation email will be sent to the address you provided. Follow the instructions in the email to confirm your account so that you can begin using it with GitLab.
Next, select Account in the left navigation bar:
Here, you can enable two-factor authentication and change your username. By default, the first administrative account is given the name root. Since this is a known account name, it is more secure to change this to a different name. You will still have administrative privileges; the only thing that will change is the name. Replace root with your preferred username:
Click on the Update username button to make the change. You’ll be prompted to confirm the change thereafter.
Next time you log into GitLab, remember to use your new username.
You can enable SSH keys with Git to interact with your GitLab projects. To do this, you need to add your SSH public key to your GitLab account.
In the left navigation bar, select SSH Keys:
If you already have an SSH key pair created on your local computer, you can view the public key by typing:
- cat ~/.ssh/id_rsa.pub
Output
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy this text and enter it into the Key text box inside your GitLab instance.
If, instead, you get a different message, you do not yet have an SSH key pair configured on your machine:
Output
cat: /home/sammy/.ssh/id_rsa.pub: No such file or directory
If this is the case, you can create an SSH key pair by entering the following command:
- ssh-keygen
Accept the defaults and optionally provide a password to secure the key locally:
Output
Generating public/private rsa key pair.
Enter file in which to save the key (/home/sammy/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sammy/.ssh/id_rsa.
Your public key has been saved in /home/sammy/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I8v5/M5xOicZRZq/XRcSBNxTQV2BZszjlWaIHi5chc0 sammy@gitlab.docsthat.work
The key's randomart image is:
+---[RSA 2048]----+
| ..%o==B|
| *.E =.|
| . ++= B |
| ooo.o . |
| . S .o . .|
| . + .. . o|
| + .o.o ..|
| o .++o . |
| oo=+ |
+----[SHA256]-----+
Once you have this, you can display your public key as in the previous example by entering this command:
- cat ~/.ssh/id_rsa.pub
Output
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy this block of text from the output and enter it into the Key text box inside your GitLab instance. Give it a descriptive title, and click the Add key button.
Now you’re able to manage your GitLab projects and repositories from your local machine without having to provide your GitLab account credentials.
With your current setup, it is possible for anyone to sign up for an account when you visit your GitLab instance’s landing page. This may be what you want if you are seeking to host a public project. However, many times, more restrictive settings are desirable.
To begin, navigate to the administrative area by clicking on the hamburger menu in the top navigation bar and select Admin from the drop-down:
Select Settings from the left navigation bar:
You will be taken to the global settings for your GitLab instance. Here, you can adjust a number of settings that affect whether new users can sign up and their level of access.
If you wish to disable sign-ups completely, scroll to the Sign-up Restrictions section and press Expand to view the options.
Then deselect the Sign-up enabled check box:
Remember to click on the Save changes button after making your changes.
The sign-up section is now removed from the GitLab landing page.
If you are using GitLab as part of an organization that provides email addresses associated with a domain, you can restrict sign-ups by domain instead of completely disabling them.
In the Sign-up Restrictions section, select the Send confirmation email on sign-up box, which will allow users to log in only after they’ve confirmed their email.
Next, add your domain or domains to the Whitelisted domains for sign-ups box, one domain per line. You can use the asterisk “*” to specify wildcard domains:
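For example, if your organization owned example.com and you wanted to allow sign-ups from that domain and any of its subdomains, the box could contain the following (these domains are placeholders):

```
example.com
*.example.com
```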
When you’re finished, click on the Save changes button.
By default, new users can create up to 10 projects. If you wish to allow new users from the outside for visibility and participation, but want to restrict their access to creating new projects, you can do so in the Account and Limit Settings section.
Inside, you can change the Default projects limit to 0 to prevent new users from creating any projects:
New users can still be added to projects manually and have access to internal or public projects created by other users.
After your updates, remember to click on the Save changes button.
New users will now be able to create accounts, but unable to create projects.
By default, GitLab has a scheduled task set up to renew Let’s Encrypt certificates after midnight every fourth day, with the exact minute based on your external_url
. You can modify these settings in the /etc/gitlab/gitlab.rb
file.
For example, if you wanted to renew every 7th day at 12:30, you can configure it to do so. First, navigate to the configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Then, find the following lines in the file, remove the leading # characters to uncomment them, and update them as follows:
...
################################################################################
# Let's Encrypt integration
################################################################################
# letsencrypt['enable'] = nil
letsencrypt['contact_emails'] = ['sammy@example.com'] # This should be an array of email addresses to add as contacts
# letsencrypt['group'] = 'root'
# letsencrypt['key_size'] = 2048
# letsencrypt['owner'] = 'root'
# letsencrypt['wwwroot'] = '/var/opt/gitlab/nginx/www'
# See http://docs.gitlab.com/omnibus/settings/ssl.html#automatic-renewal for more on these settings
letsencrypt['auto_renew'] = true
letsencrypt['auto_renew_hour'] = "12"
letsencrypt['auto_renew_minute'] = "30"
letsencrypt['auto_renew_day_of_month'] = "*/7"
...
You can also disable auto-renewal by setting letsencrypt['auto_renew'] to false:
...
letsencrypt['auto_renew'] = false
...
With auto-renewals in place, you don’t need to worry about service interruptions.
You now have a working GitLab instance hosted on your own server. You can begin to import or create new projects and configure the appropriate level of access for a team. GitLab is regularly adding features and making updates to their platform, so be sure to check out the project’s home page to stay up-to-date on any improvements or important notices.
GitHub is a cloud-hosted Git management tool. Git is distributed version control, meaning the entire repository and history lives wherever you put it. People tend to use GitHub in their business or development workflow as a managed hosting solution for backups of their repositories. GitHub takes this even further by letting you connect with coworkers, friends, organizations, and more.
In this tutorial, you will learn how to take an existing project you are working on and push it, so it also exists on GitHub.
To initialize the repo and push it to GitHub, you’ll need:
A GitHub account.
Git installed on your local machine.
Sign in to GitHub and create a new empty repo. You can choose whether or not to initialize it with a README; it doesn’t matter much, because you’re going to override everything in this remote repository anyway.
Warning: Through the rest of this tutorial, we’ll assume your GitHub username is sammy
and the repo you created is named my-new-project
. It is important that you replace these placeholders with your actual username and repo name.
From your terminal, run the following commands after navigating to the folder you would like to add.
Make sure you are in the root directory of the project you want to push to GitHub and run:
Note: If you already have an initialized Git repository, you can skip this command.
- git init
This step creates a hidden .git
directory in your project folder, which the git
software recognizes and uses to store all the metadata and version history for the project.
- git add -A
The git add
command is used to tell git which files to include in a commit, and the -A
(or --all
) argument means “include all”.
- git commit -m 'Added my project'
The git commit
command creates a new commit with all files that have been “added”. The -m
(or --message
) sets the message that will be included alongside the commit, used for future reference to understand the commit. In this case, the message is: 'Added my project'
.
- git remote add origin git@github.com:sammy/my-new-project.git
Note: Remember, you will need to replace the highlighted parts of the username and repo name with your own username and repo name.
In git, a “remote” refers to a remote version of the same repository, which is typically on a server somewhere (in this case, GitHub). “origin” is the default name git gives to a remote server (you can have multiple remotes) so git remote add origin
is instructing git to add the URL of the default remote server for this repo.
- git push -u -f origin main
The -u
(or --set-upstream
) flag sets the remote origin
as the upstream reference. This allows you to later perform git push
and git pull
commands without having to specify an origin
since we always want GitHub in this case.
The -f
(or --force
) flag stands for force. This will automatically overwrite everything in the remote directory. We’re using it here to overwrite the default README that GitHub automatically initialized.
Note: If you did not include the default README when creating the project on GitHub, the -f
flag isn’t really necessary.
As a recap, here is the full sequence of commands:
To recap, here is the full sequence of commands:
- git init
- git add -A
- git commit -m 'Added my project'
- git remote add origin git@github.com:sammy/my-new-project.git
- git push -u -f origin main
Now that you have your GitHub repo, deploying it and making it live is as easy as one click using DigitalOcean App Platform.
Now you are all set to track your code changes remotely in GitHub! As a next step, here’s a complete guide to how to use git.
Once you start collaborating with others on the project, you’ll want to know how to create a pull request.
Version control systems are increasingly indispensable in modern software development, as versioning allows you to keep track of your software at the source level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
One of the most popular version control systems currently available is Git. Many projects’ files are maintained in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate software development, project sharing, and collaboration.
In this guide, you will install and configure Git on an Ubuntu 18.04 server. This guide will cover how to install the software two different ways: via the built-in package manager, and via source. Each of these approaches comes with its own benefits depending on your specific needs.
In order to complete this tutorial, you should have a non-root user with sudo
privileges on an Ubuntu 18.04 server. To learn how to achieve this setup, follow our Initial Server Setup Guide.
With your server and user set up, you are ready to begin.
Ubuntu’s default repositories provide you with a fast method to install Git. Note that the version you install via these repositories may be older than the newest version currently available. If you need the latest release, consider moving to the next section of this tutorial to learn how to install and compile Git from source.
First, use the apt package management tools to update your local package index:
- sudo apt update
With the update complete, you can download and install Git:
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command:
- git --version
Output
git version 2.17.1
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and will give you some control over the options you include if you wish to customize.
Verify the version of Git currently installed:
- git --version
If Git is installed, you’ll receive output similar to the following:
Output
git version 2.17.1
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so you can update your local package index:
- sudo apt update
Then install the packages:
- sudo apt install libz-dev libssl-dev libcurl4-gnutls-dev libexpat1-dev gettext cmake gcc
After you have installed the necessary dependencies, move into the tmp
directory. This is where you will download your Git tarball:
- cd /tmp
From the Git project website, you can navigate to the tarball list available at https://mirrors.edge.kernel.org/pub/software/scm/git/ and download the version of your choosing. At the time of writing, the most recent version is 2.37.1. Use curl
to download your chosen version and write the output to a file named git.tar.gz
:
- curl -o git.tar.gz https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.37.1.tar.gz
Unpack the compressed tarball file:
- tar -zxf git.tar.gz
Next, move into the new Git directory:
- cd git-*
Now, you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Now, replace the shell process so that the version of Git you just installed will be used:
- exec bash
With this complete, you can be sure that your install was successful by checking the version:
- git --version
Output
git version 2.37.1
With Git successfully installed, you can now complete your setup.
After you are satisfied with your Git version, you should configure Git so that the commit messages you generate contain your correct information and support you as you build your software project.
This can be achieved by using the git config
command. Specifically, you need to provide your name and email address because Git embeds this information into each commit you make. Add this information by typing:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
Display all of the configuration items that have been set by typing:
- git config --list
Output
user.name=Your Name
user.email=youremail@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with your preferred text editor. The following example uses nano
:
- nano ~/.gitconfig
[user]
  name = Your Name
  email = youremail@domain.com
Press CTRL + X
, then Y
then ENTER
to exit the nano
text editor.
There are many other options that you can set, but these are the two essential ones needed. If you skip this step, you’ll likely see warnings when you commit to Git. This makes more work for you because you will then have to amend the commits you have made with the corrected information.
Git is a great way to keep track of changes, revert to previous stages, or branch to create different versions of files and directories. With this tutorial, you’ve learned how to install Git on your system and how to set up the essential Git configurations.
To learn more about how to use Git, check out these articles and series:
Gitea is a source code repository based on the version control system Git. While there are several self-hosted solutions available, such as GitLab and Gogs, Gitea has the benefit of being lightweight, meaning it can run on a relatively small server.
However, having a small server, especially in the realm of VPSes, often means being limited on space. Thankfully, many hosting providers also offer additional storage in the form of external volumes, block storage, or networked file storage (NFS). This gives users the option to save money on smaller VPS hosts for their applications without sacrificing storage.
With Gitea and the ability to decide where your source code is stored, you can ensure that your projects and files have room to expand. In this tutorial, you will mount an external storage volume to a mount point and ensure that Gitea is reading the appropriate information from that volume. By the end, you will have a Gitea installation that stores repositories and other important information on a separate storage volume.
Before you get started, you will need the following:
Note that if you installed Gitea using a different method than outlined in these prerequisites, the names and locations for certain files and directories on your system may differ from what this guide mentions in examples. However, the concepts outlined in this tutorial should be applicable to any Gitea installation.
A volume can take many different forms. It could be an NFS volume, which is storage available on the network provided via a file share. Another possibility is that it takes the form of block storage via a service such as DigitalOcean’s Volumes. In both cases, storage is mounted on a system using the mount
command.
Volumes such as these will be visible as device files stored within /dev
. These files are how the kernel communicates with the storage devices themselves; the files aren’t actually used for storage. In order to be able to store files on the storage device, you will need to mount it using the mount
command.
First, you will need to create a mount point — that is, a folder which will be associated with the device, such that data stored within it winds up stored on that device. Mount points for storage devices such as this typically live in the /mnt
directory.
Create a mount point named gitea
as you would create a normal directory using the mkdir
command:
- sudo mkdir /mnt/gitea
From here, you can mount the device to that directory in order to use it to access that storage space. Use the mount
command to mount the device:
- sudo mount -t ext4 -o defaults,noatime /dev/disk/by-id/your_disk_id /mnt/gitea
The -t ext4
option specifies the file system type, ext4 in this case. Depending on your volume’s file system type, it may instead be something like xfs
or nfs
; to check which type your volume uses, run the mount
command with no options:
- mount
This will provide you with a line of output for every mounted file system. Since you just mounted yours, it will likely be the last on the list:
Output
. . .
/dev/sda on /mnt/gitea type ext4 (rw,noatime,discard)
This shows that the file system type is ext4
.
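As a side note beyond the original steps, the findmnt utility (part of util-linux on Ubuntu) can report the type of a single mount point without scanning the whole table:

```shell
# Print only the file system type of one mount point.
# Substitute /mnt/gitea (or any other mount point) for / below:
findmnt -n -o FSTYPE /
```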
This command mounts the device specified by its ID to /mnt/gitea
. The -o
option specifies the options used when mounting. In this case, you are using the default options which allow for mounting a read/write file system, and the noatime
option specifies that the kernel shouldn’t update the last access time for files and directories on the device in order to be more efficient.
Now that you’ve mounted your device, it will stay mounted as long as the system is up and running. However, as soon as the system restarts, it will no longer be mounted (though the data will remain on the volume), so you will need to tell the system to mount the volume as soon as it starts using the /etc/fstab
(‘file systems table’) file. This file lists the available file systems and their mount points, one per line, with fields separated by whitespace.
Using echo
and tee
, add a new line to the end of /etc/fstab
:
- echo '/dev/disk/by-id/your_disk_id /mnt/gitea ext4 defaults,nofail,noatime 0 0' | sudo tee -a /etc/fstab
This command appends the line for /dev/disk/by-id/your_disk_id
to the fstab
file and prints it to your screen. The -a
flag tells tee
to append to the file rather than overwrite it. As with the previous mount
command, it mounts the device onto the mount point using the defaults
, nofail
, and noatime
options.
Once your changes have been made to /etc/fstab
, the kernel will mount your volume on boot.
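You don’t have to reboot to confirm the entry works. As a sketch (not part of the original steps), newer versions of util-linux can lint the file, and mount -a mounts everything listed in /etc/fstab that isn’t already mounted:

```shell
# Check /etc/fstab for syntax and usability problems (util-linux 2.30+):
sudo findmnt --verify
# Mount every fstab entry that is not currently mounted, as boot would:
sudo mount -a
# Confirm the volume is attached at the expected mount point:
findmnt /mnt/gitea
```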
Note: Storage devices on Linux are very flexible and come in all different types, from a networked file system (NFS) to a plain old hard drive. To learn more about block storage and devices in Linux, you can read up more about storage concepts in our Introduction to Storage Terminology and Concepts in Linux.
Gitea maintains all of its repositories in a central location. This includes repositories from all users and organizations. Unless configured otherwise, all information is kept in a single directory. This directory is named data
in default installations. For the purposes of this tutorial, we will be using a version of Gitea running on Docker as in the tutorial linked above.
First, let’s get a sense of what this data directory contains. You can do this by moving to the data directory and running the ls
command. Using the -l
format will tell us more information about the files:
- cd gitea
- ls -l
This will provide a listing like the following:
Output
total 20
drwxr-xr-x 5 root root 4096 Jun 23 22:34 ./
drwxrwxr-x 3 sammy sammy 4096 Jun 26 22:35 ../
drwxr-xr-x 5 git git 4096 Jun 23 22:42 git/
drwxr-xr-x 12 git git 4096 Jun 26 22:35 gitea/
drwx------ 2 root root 4096 Jun 23 22:34 ssh/
Let’s break down the output of this command. It lists one file or directory per line. In this case, it lists five directories. The entry for .
is a special entry that just means the current directory, and ..
stands for the directory one level up. This output shows that the current directory is owned by the root user, which is the case in this instance because Docker runs as a privileged user, and the directory one level up is owned by sammy.
The git
directory is important to us because it contains all of the repositories that we might want to store on a separate volume. List the contents of the directory:
- ls -l git
This will provide the long listing of the directory:
Output
total 24
drwxr-xr-x 5 git git 4096 Jun 23 22:42 ./
drwxr-xr-x 6 root root 4096 Jun 27 14:21 ../
-rw-r--r-- 1 git git 190 Jun 23 22:42 .gitconfig
drwxr-xr-x 2 root root 4096 Jun 23 22:34 .ssh/
drwxr-xr-x 2 git git 4096 Jun 23 22:42 lfs/
drwxr-xr-x 5 git git 4096 Jun 30 20:03 repositories/
Within it are two directories of note: the repositories
directory that contains the git repositories managed by Gitea sorted by user/organization, and the lfs
directory containing data for Git’s Large File Storage functionality. The gitea
directory contains information that Gitea uses in the background, including archives of old repositories, as well as the database holding user accounts and repository metadata used by the web service. The ssh
directory contains various SSH keypairs that Gitea uses.
Given that all of the information stored in this directory is important, you will want to include the entire directory’s contents on your attached volume.
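Before moving anything, it is worth confirming that the volume has room for the data. The sketch below uses a temporary directory as a stand-in for the real data directory; in practice you would point du and df at your actual Gitea path:

```shell
set -e
data=$(mktemp -d)           # stand-in for your Gitea data directory
dd if=/dev/zero of="$data/repositories.demo" bs=1024 count=64 status=none
du -sh "$data"              # total size of the data you would be moving
df -h "$data"               # free space on the file system holding it
```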
There are two paths to follow from this point, depending on whether or not you are working with a new installation of Gitea and completing this tutorial during the installation process. In the first path, you will be able to specify locations before finishing the installation, and in the second, you will learn how to move an existing installation.
If you are starting with a brand new installation of Gitea, you can specify where all of your information is stored during the configuration process. For example, if you are setting Gitea up using Docker Compose, you can map the volumes to your attached volume.
Open up the docker-compose.yml
file with your preferred text editor. The following example uses nano
:
- nano docker-compose.yml
Once you have the file open, search for the volumes
entry in the compose file and modify the mapping on the left side of the :
to point to appropriate locations on your block storage volume for the Gitea data directory:
...
volumes:
  - /mnt/gitea:/data
  - /home/git/.ssh/:/data/git/.ssh
  - /etc/timezone:/etc/timezone:ro
  - /etc/localtime:/etc/localtime:ro
...
When you are finished setting the information, save and close the file. If you are using nano
, you can do this by pressing CTRL + X
, Y
, and then ENTER
.
Warning: SSH servers look for the .ssh
directory in the home directory of the Git user (git, in this case). This directory contains all of the SSH keys that Gitea will use, so it is not advised to move the mount for this Docker volume. In order to have this location backed up on your volume, it would be best to use another solution such as a cron
job to back up the directory. To learn more, check out this tutorial on using cron
to manage scheduled tasks.
When you run docker-compose
and Gitea installs, it will use your block storage volume to store its data.
If you are not using Docker volumes to manage the locations of your data — for example, if you are installing Gitea on your server via the binary releases per these instructions from Gitea — then you will need to modify the locations within the configuration file (usually /etc/gitea/app.ini
) when you are setting all of the values and before you perform the final installation steps in the browser. For instance, you might set them as follows:
...
# If you are using SQLite for your database, you will need to change the PATH
# variable in this section
[database]
...
PATH = /mnt/gitea/gitea.db
[server]
...
LFS_CONTENT_PATH = /mnt/gitea/lfs
[repository]
ROOT = /mnt/gitea/gitea-repositories
...
Note: Ensure that your git user has write access to this location. You can read up on Linux permissions here.
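One way to grant that access, assuming the same git user and the example paths from the configuration above (a sketch, not a required step; adjust both to your layout):

```shell
sudo mkdir -p /mnt/gitea/lfs /mnt/gitea/gitea-repositories
sudo chown -R git:git /mnt/gitea
# u+rwX grants the owner read/write, plus execute only on directories:
sudo chmod -R u+rwX /mnt/gitea
```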
If you already have an instance of Gitea installed and running, you will still be able to store your data on a separate volume, but it will require some care in ensuring that all of your data remains both safe and accessible to Gitea.
Note: As with any operation involving your data, it is important to ensure that you have an up-to-date backup of everything. In this case, this means a backup of all of the files in your data directory (the SSH files, the gitea-repositories
and lfs
directories, the database, and so on) to a safe location.
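A minimal way to take such a backup is with tar. The sketch below works on a stand-in directory so it can run anywhere; for a real backup you would substitute your actual data directory and a destination off the server:

```shell
set -e
data=$(mktemp -d)                     # stand-in for your Gitea data directory
echo "demo" > "$data/gitea.db"        # placeholder database file
dest=$(mktemp -d)
backup="$dest/gitea-backup-$(date +%F).tar.gz"
tar -czf "$backup" -C "$data" .       # -C archives paths relative to $data
tar -tzf "$backup"                    # list the archive contents to verify
```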
There are two options for moving your data to a new volume. The first way is to copy your Gitea data to a secondary location, then turn the original location into a mount point for your volume. When you copy your data back into that location, you will be copying it onto that volume, and no changes will be required within Gitea itself; it will simply continue as it did before. If, however, you do not want to mount the entire device to that destination (for example, if your Gitea data will not be the only thing on that volume), then a second option is to move all of your Gitea data to a new location on that volume and instruct Gitea itself to use that location.
No matter which option you choose, first, stop the Gitea web service.
If you installed Gitea via Docker Compose, use docker-compose
to stop the service. While inside the same directory containing the docker-compose.yml
file, run:
- docker-compose down
If you have installed it as a systemd service, use the systemctl
command:
- sudo systemctl stop gitea
Once Gitea has been stopped, change into the data directory and move its entire contents to the mount point made in Step 1:
- sudo mv * /mnt/gitea
Ensure that all files have been moved by listing the current directory’s contents:
- ls -la
This will only return the current and parent directory entries:
Output
total 8
drwxrwxr-x 2 sammy sammy 4096 Jun 27 13:56 ./
drwxr-xr-x 7 sammy sammy 4096 Jun 27 13:56 ../
Once all of the data has been moved, change to the new data directory:
- cd /mnt/gitea
Using ls
, ensure that everything looks correct:
- ls -l
This will show the contents of the directory:
Output
total 36
drwxr-xr-x 6 root root 4096 Jun 27 14:21 ./
drwxr-xr-x 3 root root 4096 Jun 27 14:21 ../
drwxr-xr-x 5 git git 4096 Jun 23 22:42 git/
drwxr-xr-x 13 git git 4096 Jul 11 08:25 gitea/
drwx------ 2 root root 16384 Jun 27 03:46 lost+found/
drwx------ 2 root root 4096 Jun 23 22:34 ssh/
As before, it should contain the ssh
, git
, and gitea
directories. If you are using SQLite as a database to manage Gitea, it will also contain a file named gitea.db
in the gitea
directory.
When you’re sure that all data has been moved, it’s time to mount the volume to the data directory.
First, move to the parent directory of the data directory you were in previously. In this example using an installation of Gitea using Docker Compose as described in the tutorial linked in the prerequisites, this is the directory which contains your docker-compose.yml
file.
- cd ~/gitea/
As before, use the mount
command, but this time, use the directory you just emptied as the destination:
- sudo mount -o defaults,noatime /dev/disk/by-id/your_disk_id gitea
Now, when you list the contents of that directory, all of your files should be in place:
- ls -la gitea
This command will output the expected information. Note that, depending on your volume’s file system type, you may find an additional directory named lost+found
; this is normal, as ext file systems create it automatically to hold any files recovered by fsck:
Output
total 36
drwxr-xr-x 6 root root 4096 Jun 27 13:58 ./
drwxrwxr-x 3 sammy sammy 4096 Jun 27 02:23 ../
drwxr-xr-x 5 git git 4096 Jun 23 22:42 git/
drwxr-xr-x 12 git git 4096 Jun 27 00:00 gitea/
drwx------ 2 root root 16384 Jun 27 03:46 lost+found/
drwx------ 2 root root 4096 Jun 23 22:34 ssh/
As mentioned, if you would like Gitea to use a directory within the block storage volume, there is an additional step you need to complete before bringing Gitea back up. For example, say that you want to use a folder named scm
on your volume mounted on /mnt/gitea
. After moving all of your Gitea data to /mnt/gitea/scm
, you will need to replace your old (now empty) data directory with a symbolic link to the new location. Remove the empty directory first so the link can take its place, then create the link with the ln
command:
- sudo rmdir gitea
- sudo ln -s /mnt/gitea/scm gitea
At this point, you can restart Gitea. If you are using Gitea as a systemd service, run:
- sudo systemctl restart gitea
If you are running Gitea as a Docker container using Docker Compose, run:
- docker-compose up -d
Now that everything is up and running, visit your Gitea instance in the browser and ensure that everything works as expected. You should be able to create new objects in Gitea such as repositories, issues, and so on. If you set Gitea up with an SSH shim, you should also be able to check out and push to repositories using git clone
and git push
.
In this tutorial, you moved all of your Gitea data to a block storage volume. Volumes such as these are very flexible and provide many benefits, such as allowing you to store all of your data on larger disks, RAID volumes, or networked file systems, or using block storage such as DigitalOcean Volumes to reduce storage expenses. They also allow you to snapshot entire disks for backup so that you can restore their contents in the event of a catastrophic failure.
The App walkthroughs all show connecting the GitHub repo to DO Apps. So DO is downloading the repo.
But the app spec is not being updated from the copy in the repository. It seems the only way to update the app spec is to use a command-line tool or edit it in the web UI.
Why can’t DO just update the app spec when it retrieves the GitHub repo?
Now I have to set up a completely different CI pipeline for my code to trigger the update of the App Spec, and when that completes, then (somehow) trigger DO to pull the repo and redeploy my app.
Why so complicated?
Jenkins is an open source automation server intended to automate repetitive technical tasks involved in the continuous integration and delivery of software. With a robust ecosystem of plugins and broad support, Jenkins can handle a diverse set of workloads to build, test, and deploy applications.
In previous guides, we installed Jenkins on an Ubuntu 22.04 server and configured Jenkins with SSL using an Nginx reverse proxy. In this guide, we will demonstrate how to set up Jenkins to automatically test an application when changes are pushed to a repository.
For this tutorial, we will be integrating Jenkins with GitHub so that Jenkins is notified when new code is pushed to the repository. When Jenkins is notified, it will checkout the code and then test it within Docker containers to isolate the test environment from the Jenkins host machine. We will be using an example Node.js application to show how to define the CI/CD process for a project.
To follow along with this guide, you will need an Ubuntu 22.04 server with at least 1G of RAM configured with a secure Jenkins installation. To properly secure the web interface, you will need to assign a domain name to the Jenkins server. Follow these guides to learn how to set up Jenkins in the expected format:
To best control our testing environment, we will run our application’s tests within Docker containers. After Jenkins is up and running, install Docker on the server by following steps one and two of this guide:
When you have completed the above guides, you can continue on with this article.
After following the prerequisites, both Jenkins and Docker are installed on your server. However, by default, the Linux user responsible for running the Jenkins process cannot access Docker.
To fix this, we need to add the jenkins
user to the docker
group using the usermod
command:
- sudo usermod -aG docker jenkins
You can list the members of the docker
group to confirm that the jenkins
user has been added successfully:
- grep docker /etc/group
Output
docker:x:999:sammy,jenkins
In order for Jenkins to use its new group membership, you need to restart the process:
- sudo systemctl restart jenkins
If you installed Jenkins with the default plugins, you may need to check to ensure that the docker
and docker-pipeline
plugins are also enabled. To do so, click Manage Jenkins from the sidebar, and then Manage Plugins from the next menu. Click on the Available tab of the plugin menu to search for new plugins, and type docker
into the search bar. If both Docker Pipeline
and Docker plugin
are returned as options, and they are unselected, select both, and when prompted, allow Jenkins to restart with the new plugins enabled.
This should take approximately a minute and the page will refresh afterward.
In order for Jenkins to watch your GitHub projects, you will need to create a Personal Access Token in your GitHub account.
Begin by visiting GitHub and signing into your account if you haven’t already done so. Afterwards, click on your user icon in the upper-right hand corner and select Settings from the drop down menu:
On the page that follows, locate the Developer settings section of the left-hand menu and click Personal access tokens:
Click the Generate new token button on the next page:
You will be taken to a page where you can define the scope for your new token.
In the Token description box, add a description that will allow you to recognize it later:
In the Select scopes section, check the repo:status, repo:public_repo and admin:org_hook boxes. These will allow Jenkins to update commit statuses and to create webhooks for the project. If you are using a private repository, you will need to select the general repo permission instead of the repo subitems:
When you are finished, click Generate token at the bottom.
You will be redirected back to the Personal access tokens index page, where your new token will be displayed:
Copy the token now so that we can reference it later. As the message indicates, there is no way to retrieve the token once you leave this page.
Note: As mentioned in the screenshot above, for security reasons, there is no way to redisplay the token once you leave this page. If you lose your token, delete the current token from your GitHub account and then create a new one.
Now that you have a personal access token for your GitHub account, we can configure Jenkins to watch your project’s repository.
Now that we have a token, we need to add it to our Jenkins server so it can automatically set up webhooks. Log into your Jenkins web interface using the administrative account you configured during installation.
Click on your username in the top-right corner to access your user settings, and from there, click Credentials in the left-hand menu:
On the next page, click the arrow next to (global) within the Jenkins scope. In the box that appears, click Add credentials:
You will be taken to a form to add new credentials.
Under the Kind drop down menu, select Secret text. In the Secret field, paste your GitHub personal access token. Fill out the Description field so that you will be able to identify this entry at a later date. You can leave the Scope as Global and the ID field blank:
Click the OK button when you are finished.
You will now be able to reference these credentials from other parts of Jenkins to aid in configuration.
Back in the main Jenkins dashboard, click Manage Jenkins in the left hand menu:
In the list of links on the following page, click Configure System:
Scroll through the options on the next page until you find the GitHub section. Click the Add GitHub Server button and then select GitHub Server:
The section will expand to prompt for some additional information. In the Credentials drop down menu, select your GitHub personal access token that you added in the last section:
Click the Test connection button. Jenkins will make a test API call to your account and verify connectivity:
When you are finished, click the Save button to implement your changes.
To demonstrate how to use Jenkins to test an application, we will be using a “hello world” program created with Hapi.js. Because we are setting up Jenkins to react to pushes to the repository, you need to have your own copy of the demonstration code.
Visit the project repository and click the Fork button in the upper-right corner to make a copy of the repository in your account:
A copy of the repository will be added to your account.
The repository contains a package.json
file that defines the runtime and development dependencies, as well as how to run the included test suite. The dependencies can be installed by running npm install
and the tests can be run using npm test
.
We’ve added a Jenkinsfile
to the repo as well. Jenkins reads this file to determine the actions to run against the repository to build, test, or deploy. It is written using the declarative version of the Jenkins Pipeline DSL.
The Jenkinsfile
included in the hello-hapi
repository looks like this:
#!/usr/bin/env groovy

pipeline {
    agent {
        docker {
            image 'node'
            args '-u root'
        }
    }

    stages {
        stage('Build') {
            steps {
                echo 'Building...'
                sh 'npm install'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
                sh 'npm test'
            }
        }
    }
}
The pipeline
contains the entire definition that Jenkins will evaluate. Inside, we have an agent
section that specifies where the actions in the pipeline will execute. To isolate our environments from the host system, we will be testing in Docker containers, specified by the docker
agent.
Since Hapi.js is a framework for Node.js, we will be using the node
Docker image as our base. We specify the root
user within the container so that the user can simultaneously write to both the attached volume containing the checked out code, and to the volume the script writes its output to.
Next, the file defines two stages, i.e., logical divisions of work. We’ve named the first one “Build” and the second “Test”. The build step prints a diagnostic message and then runs npm install
to obtain the required dependencies. The test step prints another message and then runs the tests as defined in the package.json
file.
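You can approximate these two stages from your own shell before involving Jenkins at all. This sketch assumes Docker is installed and that your fork of the repository is checked out in the current directory:

```shell
# Run the Build and Test stages the way the Jenkins docker agent would:
# a disposable 'node' container, running as root, with the checkout mounted.
docker run --rm -u root -v "$PWD":/app -w /app node \
  sh -c 'npm install && npm test'
```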
Now that you have a repository with a valid Jenkinsfile
, we can set up Jenkins to watch this repository and run the file when changes are introduced.
Next, we can set up Jenkins to use the GitHub personal access token to watch our repository.
Back in the main Jenkins dashboard, click New Item in the left hand menu:
Enter a name for your new pipeline in the Enter an item name field. Afterwards, select Pipeline as the item type:
Click the OK button at the bottom to move on.
On the next screen, check the GitHub project box. In the Project url field that appears, enter your project’s GitHub repository URL.
Note: Make sure to point to your fork of the Hello Hapi application so that Jenkins will have permission to configure webhooks.
Next, in the Build Triggers section, check the GitHub hook trigger for GITScm polling box:
In the Pipeline section, we need to tell Jenkins to run the pipeline defined in the Jenkinsfile
in our repository. Change the Definition type to Pipeline script from SCM.
In the new section that appears, choose Git in the SCM menu. In the Repository URL field that appears, enter the URL to your fork of the repository again:
Note: Again, make sure to point to your fork of the Hello Hapi application.
Note: Our example references a Jenkinsfile
available within a public repository. If your project is not publicly accessible, you will need to use the add credentials button to add additional access to the repository. You can add a personal access token as we did with the hooks configuration earlier.
When you are finished, click the Save button at the bottom of the page.
Jenkins does not automatically configure webhooks when you define the pipeline for the repository in the interface. In order to trigger Jenkins to set up the appropriate hooks, we need to perform a manual build the first time.
In your pipeline’s main page, click Build Now in the left hand menu:
A new build will be scheduled. In the Build History box in the lower left corner, a new build should appear in a moment. Additionally, a Stage View will begin to be drawn in the main area of the interface. This will track the progress of your testing run as the different stages are completed:
In the Build History box, click on the number associated with the build to go to the build detail page. From here, you can click the Console Output button in the left hand menu to see details of the steps that were run:
Click the Back to Project item in the left hand menu when you are finished in order to return to the main pipeline view.
Now that we’ve built the project once, we can have Jenkins create the webhooks for our project. Click Configure in the left hand menu of the pipeline:
No changes are necessary on this screen; just click the Save button at the bottom. Now that Jenkins has information about the project from the initial build process, it will register a webhook with our GitHub project when you save the page.
You can verify this by going to your GitHub repository and clicking the Settings button. On the next page, click Webhooks from the side menu. You should see your Jenkins server webhook in the main interface:
If for any reason Jenkins failed to register the hook (for example, due to upstream API changes or outages between Jenkins and GitHub), you can quickly add one yourself by clicking Add webhook and ensuring that the Payload URL is set to https://my-jenkins-server:8080/github-webhook
and the Content type is set to application/json
, then clicking Add webhook again at the bottom of the prompt.
Now, when you push new changes to your repository, Jenkins will be notified. It will then pull the new code and retest it using the same procedure.
To test this, from your repository page on GitHub, you can click the Create new file button to the left of the green Clone or download button:
On the next page, choose a filename and some dummy contents:
Click the Commit new file button at the bottom when you are finished.
If you return to your Jenkins interface, you will see a new build automatically started:
You can kick off additional builds by making commits to a local copy of the repository and pushing it back up to GitHub.
In this guide, we configured Jenkins to watch a GitHub project and automatically test any new changes that are committed. Jenkins pulls code from the repository and then runs the build and testing procedures from within isolated Docker containers. The resulting code can be deployed or stored by adding additional instructions to the same Jenkinsfile
.
To learn more about Jenkins pipelines, refer to the Jenkins documentation.
When working on software development, it’s important to be able to manage the source code in an efficient and traceable way. Source code management (SCM) systems are an excellent way to provide an efficient and flexible process for working on projects of any size with any number of developers. Many different pieces of SCM software have existed through the years, from CVS to Subversion, Perforce to Mercurial, but the current industry leader is Git, which has seen major growth with the popularity of sites such as GitHub and GitLab.
However, with free accounts on these services geared towards public, open-source repositories, the ability to work on private or proprietary software incurs a cost to the developer. Additionally, one’s access to the repository is beholden to an external organization, and many would prefer to control their own software from start to finish.
To that end, several self-hosted solutions such as Gogs, Gitea, and GitLab have been developed over the last several years. This tutorial focuses on setting up one of the more popular solutions, Gitea, to allow you to host private repositories and manage your own projects throughout their entire life-cycle. Gitea is small, self-contained, and lightweight, making it a quick process to deploy without breaking the bank on hardware requirements. You will be using a Docker installation of Gitea, which makes it straightforward to keep the software up to date.
Before beginning this tutorial, you should have the following:
A server running Ubuntu 20.04, along with a non-root user with sudo privileges, as described in the initial server setup for Ubuntu 20.04.
Docker installed on your server.
Docker Compose installed on your server.
A registered domain name pointed at your server, which will appear as your_domain in examples throughout.
Gitea, like many source code repositories, uses SSH for accessing remote repositories. This allows users to control access to their code by managing their SSH keys within Gitea itself. In order for users to be able to access the host via SSH, however, you will need to create a git user on the host machine. This step is completed first so that you can access the user’s user and group ID.
First, create the user on the host who will be accepting these connections:
- sudo adduser --system --shell /bin/bash --gecos 'Git Version Control' --group --disabled-password --home /home/git git
In this command, you create a system user that uses bash as its shell, but does not have a login password. This allows you to use sudo
to run commands as that user but prevents logging in as it. You also set the user’s home directory to /home/git
.
This command will output some information about the user it just created:
OutputAdding system user `git' (UID 112) ...
Adding new group `git' (GID 119) ...
Adding new user `git' (UID 112) with group `git' ...
Creating home directory `/home/git' …
Make note of the UID and GID values provided here (in this case, a UID of 112
and a GID of 119
), as they will be used in a future step.
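If you need to look these values up again later, the id command prints them for any user. The sketch below uses root (which exists on every system) so it runs anywhere; substitute git on your Gitea host:

```shell
# Print a user's numeric UID and GID in the USER_UID/USER_GID form the
# compose file in the next step expects. "root" is used here only so the
# sketch runs on any machine; substitute "git" on your Gitea host.
user=root
echo "USER_UID=$(id -u "$user")"
echo "USER_GID=$(id -g "$user")"
```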
Gitea has an image available in the global Docker repository, meaning that, using Docker Compose, you can install and run that image as a service with little extra work. The image itself runs the Gitea web and SSH services, allowing Git access both from the browser and the command line.
In order to spin up the Gitea container, you will be using Docker Compose, a declarative tool for setting up an environment.
To begin with, create a directory to host your service and enter it:
- mkdir ~/gitea
- cd ~/gitea
Once there, create a file called docker-compose.yml
using your preferred text editor. The following example uses nano
. This file will contain the descriptions of the containers that will run as part of your Gitea installation:
- nano docker-compose.yml
Add the following into this new file:
version: "3"

networks:
  gitea:
    external: false

services:
  server:
    image: gitea/gitea:1.16.5
    container_name: gitea
    environment:
      - USER_UID=UID_from_step_1
      - USER_GID=GID_from_step_1
    restart: always
    networks:
      - gitea
    volumes:
      - ./gitea:/data
      - /home/git/.ssh/:/data/git/.ssh
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "127.0.0.1:3000:3000"
      - "127.0.0.1:2222:22"
Let’s walk through what this file does:

version: "3": this tells Docker Compose which version of the configuration file format this is.
networks: this section declares the networking setup of our collection of containers. In this case, a gitea network is created, but it is not exposed externally.
services: this section declares the containers that will run. Here there is a single service, server, with the following settings:
image: gitea/gitea:1.16.5: this specifies that we will be using Gitea version 1.16.5; however, you can change the value after the colon to specify other versions, whether a specific release, a major version such as :1, or a tag such as :latest or :dev.
environment: this section specifies environment variables that will be available to the image during installation and running. In this case, we are specifying a user and group ID for the environment, using the UID and GID provided in the output of the adduser command in Step 1.
restart: always: this line instructs Docker to always restart the container if it goes down, whether because the container itself stops or the host machine reboots; in effect, Gitea will start on boot.
networks: this specifies that the Gitea service will have access to, and be accessible on, the network named above.
./gitea:/data and /home/git/.ssh/:/data/git/.ssh: these are the locations where Gitea will store its repositories and related data. Currently, this is mapped to the folder named gitea in the current directory. Docker will create this folder when the container starts if it doesn’t exist. The .ssh folder will be described further in Step 6.
/etc/timezone and /etc/localtime: these two files contain information about the timezone and time on the host machine. By mapping these directly into the container as read-only files (specified with the final :ro part of the definitions), the container will have the same information as the host.
ports: Gitea listens for connections on two ports. It listens for HTTP connections on port 3000, where it serves the web interface for the source code repository, and it listens for SSH connections on port 22. In this case, you are keeping port 3000 for HTTP connections by mapping it to the same number, and you are mapping port 22 on Gitea’s container to port 2222 on the host to avoid clashing with the host’s own SSH service. In Step 6, you will set up an SSH shim to direct traffic to Gitea when requested.

Note: This is a minimal example of a Docker Compose file for Gitea. There are several other options that one can include, such as using MySQL or PostgreSQL as the backing database or a named volume for storage. This minimal setup uses SQLite as the backing database and a volume using the directory named gitea for storage. You can read more about these options in Gitea’s documentation.
Save and close the file. If you used nano
to edit the file, you can do so by pressing CTRL + X
, Y
, and then ENTER
.
With this file in place you can then bring the containers up using Docker Compose:
- docker-compose up
This command will pull down the images, start the Gitea container, and will return output like this:
Output[+] Running 9/9
⠿ server Pulled 8.2s
⠿ e1096b72685a Pull complete 1.4s
⠿ ac9df86bb932 Pull complete 3.3s
⠿ 6d34ed99b58a Pull complete 3.4s
⠿ a8913d040fab Pull complete 3.6s
⠿ a5d3a72a2366 Pull complete 5.3s
⠿ 1f0dcaae29cc Pull complete 5.6s
⠿ f284bcea5adb Pull complete 7.3s
⠿ 0f09c34c97e3 Pull complete 7.5s
[+] Running 2/2
⠿ Network gitea_gitea Created 0.2s
⠿ Container gitea Created 0.2s
Attaching to gitea
gitea | Generating /data/ssh/ssh_host_ed25519_key...
gitea | Generating /data/ssh/ssh_host_rsa_key...
gitea | Generating /data/ssh/ssh_host_dsa_key...
gitea | Generating /data/ssh/ssh_host_ecdsa_key...
gitea | Server listening on :: port 22.
gitea | Server listening on 0.0.0.0 port 22.
gitea | 2022/03/31 17:26:21 cmd/web.go:102:runWeb() [I] Starting Gitea on PID: 14
gitea | 2022/03/31 17:26:21 ...s/install/setting.go:21:PreloadSettings() [I] AppPath: /usr/local/bin/gitea
gitea | 2022/03/31 17:26:21 ...s/install/setting.go:22:PreloadSettings() [I] AppWorkPath: /app/gitea
gitea | 2022/03/31 17:26:21 ...s/install/setting.go:23:PreloadSettings() [I] Custom path: /data/gitea
gitea | 2022/03/31 17:26:21 ...s/install/setting.go:24:PreloadSettings() [I] Log path: /data/gitea/log
gitea | 2022/03/31 17:26:21 ...s/install/setting.go:25:PreloadSettings() [I] Configuration file: /data/gitea/conf/app.ini
gitea | 2022/03/31 17:26:21 ...s/install/setting.go:26:PreloadSettings() [I] Prepare to run install page
gitea | 2022/03/31 17:26:21 ...s/install/setting.go:29:PreloadSettings() [I] SQLite3 is supported
gitea | 2022/03/31 17:26:21 cmd/web.go:208:listen() [I] Listen: http://0.0.0.0:3000
gitea | 2022/03/31 17:26:21 cmd/web.go:212:listen() [I] AppURL(ROOT_URL): http://localhost:3000/
This will leave the container running in the foreground, however, and it will stop as soon as you exit the process with Ctrl + C
or if your connection drops. To have the container run in the background as a separate process, you can append the -d
flag to the Compose command:
- docker-compose up -d
You will be notified when the container starts and then returned to your shell.
Running a web service such as Gitea behind a reverse proxy is common practice, as modern server software such as Apache or Nginx can more easily handle multiple services on one machine, balance load across multiple servers, and handle SSL. Additionally, this will allow you to set up a domain name pointing to your Gitea instance running on standard HTTP(S) ports.
For the purposes of this tutorial, we’ll use Nginx. First, update the package lists on your host machine:
- sudo apt update
Next, install Nginx using apt
:
- sudo apt install nginx
Now, as you are using the firewall ufw
, you will need to allow HTTP and HTTPS access through it:
- sudo ufw allow "Nginx Full"
Once this is installed, you should be able to access your server in your browser by visiting http://your_domain
. This will lead you to a very plain page welcoming you to Nginx.
At this point, you’ll need to create a reverse proxy entry to direct incoming traffic through Nginx to the Gitea instance running in Docker. Create a new file in the Nginx sites-available
directory using your preferred text editor. The following example uses nano
:
- sudo nano /etc/nginx/sites-available/gitea
In this file, set up a new server block with requests to /
proxied to your Gitea instance:
server {
    # Listen for requests on your domain/IP address.
    server_name your_domain;

    root /var/www/html;

    location / {
        # Proxy all requests to Gitea running on port 3000
        proxy_pass http://localhost:3000;

        # Pass on information about the requests to the proxied service using headers
        proxy_set_header HOST $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Once you are finished editing the file, save and close it.
Note: For more information on understanding what’s going on within these directives, see the Understanding Nginx HTTP Proxying, Load Balancing, Buffering, and Caching tutorial.
Nginx determines what sites it will actually serve based on whether or not those files are present in its sites-enabled
directory. This is managed via symbolic links pointing to the files in the sites-available
directory. You will need to create one of those symbolic links for Nginx to start serving Gitea:
- sudo ln -s /etc/nginx/sites-available/gitea /etc/nginx/sites-enabled/gitea
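If the sites-available/sites-enabled pattern is unfamiliar, this sketch reproduces it in a scratch directory created by mktemp (the real directories live under /etc/nginx and require sudo to modify):

```shell
# Reproduce Nginx's sites-available/sites-enabled pattern in a throwaway
# directory; the real directories are under /etc/nginx.
etc="$(mktemp -d)"
mkdir "$etc/sites-available" "$etc/sites-enabled"
printf 'server { }\n' > "$etc/sites-available/gitea"

# Enabling a site is just creating a symbolic link to the available file.
ln -s "$etc/sites-available/gitea" "$etc/sites-enabled/gitea"

# The enabled entry is only a pointer; reading it yields the same content.
cat "$etc/sites-enabled/gitea"   # prints: server { }
```

Because the enabled entry is only a link, disabling a site later is as simple as removing the link while the original file stays in sites-available.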
Before you restart Nginx to make your changes live, you should have Nginx itself check that those changes are valid by testing its config.
- sudo nginx -t
If everything’s okay, this command will return output like the following:
Outputnginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
If there are any issues, it will tell you what and where they are.
When you’re ready to move ahead with this change, restart the Nginx system service:
- sudo systemctl restart nginx
Now, when you visit http://your_domain
in your browser, you should find yourself on the initial setup page for Gitea ready for you to fill out.
Thanks to Certbot and the Let’s Encrypt free certificate authority, adding TLS encryption to your Gitea installation will take only two commands.
First, install Certbot and its Nginx plugin:
- sudo apt install certbot python3-certbot-nginx
Next, run certbot
in --nginx
mode, and specify the same domain that you used in the Nginx server_name
configuration directive:
- sudo certbot --nginx -d your_domain_here
You’ll be prompted to agree to the Let’s Encrypt terms of service, and to enter an email address.
Afterwards, you’ll be asked if you want to redirect all HTTP traffic to HTTPS. It’s up to you, but this is generally recommended and safe to do.
After that, Let’s Encrypt will confirm your request and Certbot will download your certificate:
OutputCongratulations! You have successfully enabled https://your_domain
You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=your_domain
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/your_domain/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/your_domain/privkey.pem
Your cert will expire on 2022-05-09. To obtain a new or tweaked
version of this certificate in the future, simply run certbot again
with the "certonly" option. To non-interactively renew *all* of
your certificates, run "certbot renew"
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
Certbot will automatically reload Nginx with the new configuration and certificates. Reload your site in your browser and it should switch you over to HTTPS automatically if you chose the redirect option.
Your site is now secure and it’s safe to continue with the web-based setup steps.
You can find more information on securing domains with Let’s Encrypt in the How to Secure Nginx with Let’s Encrypt on Ubuntu 20.04 tutorial.
Now you can move on to configuring Gitea itself and creating the first admin user. Visit your Gitea instance by opening https://your_domain
in a browser. On the initial Gitea configuration screen, there will be several options for the service:
Some of these such as the site title are up to your particular use case, though for the purposes of this tutorial, you’ll need to change the following:
Gitea Base URL: change this to include the protocol and your domain, https://your_domain.
When you save your configuration changes, you will be directed to the Gitea login page.
Note: Once the configuration has been saved, the Gitea service will restart. As this may take a few seconds, you may encounter an Nginx error stating 502 Bad Gateway. If you do encounter this error, wait a few seconds and refresh the page.
As you do not yet have a user, you will need to create one first. Click the Need an account? Register now link below the login form to register a new user. As the first user on the system, this user will be created as an administrator. If you set up email settings on the configuration screen, you may need to verify your account first.
Once you are logged in as that user, clicking on your user icon in the upper right-hand corner of the page and then clicking Site Administration from the drop-down menu will take you to a page where you will be able to run maintenance jobs, manage user accounts and organizations, and further configure Gitea.
In order to test Gitea, both on the web interface and using Git itself, create a test repository. You can always delete this repository later.
Click on the + sign in the upper right corner of the page, and then click on + New Repository from the drop-down menu. Here, you will be presented with a screen allowing you to name and customize your repository with information such as its description, settings such as whether or not it’s private, and any default contents such as a README
or .gitignore
file.
Once you hit Create Repository, you will have a new repository to play around with.
The final step of the process is to prepare the host machine with an SSH shim. Because Gitea is running in a Docker container, it cannot accept SSH connections on the default port of 22
, as this will clash with the host. In the docker-compose.yml
file you created above, Docker was instructed to map port 2222 on the host to port 22
on the container, so that SSH connections intended for Gitea are accepted on port 2222
. Additionally, the SSH authorized_keys
file will not be accessible to someone SSHing into the host by default.
In order to take this into account, you will need to create an SSH shim, which will pass SSH connections made to the git user on the host through to the container. In the compose file, you also specified that the user in the container will have the user and group ID you noted in Step 1, and on the Gitea configuration screen, you told the service to use the user named git.
Next, you will need to create an SSH key for the user. This will only be used in a step below and not shared with anyone outside the host.
- sudo -u git ssh-keygen -t rsa -b 4096 -C "Gitea Host Key"
This command uses sudo
to create an SSH key as the user you created above. In this instance, the key will be a 4096-bit RSA key. You will be asked a series of questions such as what password you would like for the key and what to name the key file. Hit ENTER
for each of them, leaving them blank to accept the default.
Warning: If you set a password on the key, you will not be able to use the shim.
You will need to ensure that the user within the Gitea container will accept this key. You can do this by adding it to the .ssh/authorized_keys
file:
- sudo -u git cat /home/git/.ssh/id_rsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys
- sudo -u git chmod 600 /home/git/.ssh/authorized_keys
These commands work because the directory /home/git/.ssh
on the host is mounted as a volume in the container, meaning that its contents are shared between them. When the host receives a git connection over SSH, it will use the same authorized_keys
file as the container.
The final step for the shim is to create a stub gitea
command on the host. This is what allows git commands to work over SSH: when an SSH connection is made, a default command will be run. This gitea
command on the host is what will proxy the SSH connection to the container.
For this script, use cat
to write to the file /usr/local/bin/gitea
:
- cat <<"EOF" | sudo tee /usr/local/bin/gitea
- #!/bin/sh
- ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
- EOF
The command in this script SSHes to the Gitea Docker container, passing on the contents of the original command used by git
.
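If the $0 and $@ expansions are unfamiliar, the following stand-alone sketch (no SSH involved; the script is a throwaway file created by mktemp) shows how a shell script forwards its own arguments, which is the mechanism the shim relies on:

```shell
# Stand-alone illustration of the argument forwarding used by the shim.
# The script file here is a throwaway created by mktemp, not the real shim.
shim="$(mktemp)"
cat > "$shim" <<"EOF"
#!/bin/sh
# $@ expands to all the arguments the script was invoked with.
echo "forwarded: $@"
EOF
chmod +x "$shim"

# Invoke it the way sshd would invoke the real shim with a git command:
"$shim" git-upload-pack 'user/repo.git'   # prints: forwarded: git-upload-pack user/repo.git
```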
Finally, make sure that the script is executable:
- sudo chmod +x /usr/local/bin/gitea
You can test out pulling from and pushing to Git repositories on your Gitea instance by adding your SSH key to your Gitea user.
You will need the contents of your SSH public key. This usually lives in a file named something like ~/.ssh/id_rsa.pub
, depending on which algorithm you used when creating your key:
- cat ~/.ssh/id_rsa.pub
Note: If you need to create an SSH key for the first time, you can learn how to do so with this How to Set Up SSH Keys on Ubuntu 20.04 tutorial.
Copy the output of this command.
In Gitea, click on your user icon in the upper right-hand corner and select Settings. On the settings page, there will be a series of tabs on the top. Click on SSH/GPG Keys, then the Add Key button beside Manage SSH Keys. Paste your key into the large text area in the form and then click the Add Key button beneath it.
Now, navigate to the test repository you created in Step 3 and copy the SSH URL provided. On your local machine, clone the repository:
- git clone git@your_domain:username/test
This will use SSH to clone the repository. If you have a password set on your SSH key, you will be asked to provide that.
Move into that directory and create a new file:
- cd test
- touch just_testing
Next, add it to your staged changes:
- git add just_testing
Finally, commit that file:
- git commit -am "Just testing pushing over SSH!"
Now, you should be able to push your changes to the remote repository:
- git push origin master
When you refresh the page in your browser, your new file will appear in the repository.
You’ve set up a Gitea service using Docker in order to self-host your source-code repositories. From here, you will be able to work on both public and private repositories, using familiar workflows such as pull-request code reviews and projects organized by organization. Gitea also works well with various continuous integration and deployment (CI/CD) tools such as Drone, Jenkins, and GoCD. Additionally, using Docker volumes such as this allows you to extend your storage to fit Git LFS (large file storage) content on network or block storage.
Software version control systems like Git enable you to keep track of your software at the source level. With versioning tools, you can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
As one of the most popular version control systems currently available, Git is a common choice among open source and other collaborative software projects. Many projects’ files are maintained in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate software development projects sharing and collaboration.
In this tutorial, we’ll install and configure Git on a Debian 10 server. We will cover how to install the software in two different ways, each with its own benefits depending on your specific needs.
To complete this tutorial, you should have a non-root user with sudo
privileges and a firewall enabled on a Debian 10 server. Learn how to set this up by following our Debian 10 initial server setup guide.
With your server and user set up, you are ready to begin. Jump to Installing Git with Default Packages (via the apt package manager) or Installing Git from Source to begin.
Debian’s default repositories provide you with a fast method to install Git. Note that the version you install via these repositories may not be the newest version currently available. If you need the latest release, consider moving to the next section of this tutorial to learn how to install and compile Git from source.
First, use the APT package management tools to update your local package index:
- sudo apt update
With the update complete, you can download and install Git:
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command:
- git --version
Outputgit version 2.20.1
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and give you control over certain options you include if you wish to customize them.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so start by updating your local package index:
- sudo apt update
Next, install the packages:
- sudo apt install make libssl-dev libghc-zlib-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
After you have installed the necessary dependencies, go ahead and get the version of Git you would like to install by visiting the Git project’s mirror on GitHub, available via the following URL:
https://github.com/git/git
From here, be sure that you are on the master
branch. Click on the Tags link and select your desired Git version. Avoid release candidate versions (marked as rc) unless you have a specific reason to download one, as they may be unstable.
Next, on the right side of the page, click on the Code button, then right-click on Download ZIP and copy the link address that ends in .zip
.
Back on your Debian 10 server, change into the tmp
directory to download temporary files:
- cd /tmp
From there, you can use the wget
command to download the copied zip file link. We’ll specify a new name for the file as git.zip
:
- wget https://github.com/git/git/archive/refs/tags/v2.35.1.zip -O git.zip
Next, unzip the file that you downloaded:
- unzip git.zip
Then change into the following directory:
- cd git-*
Now you can make the package with the following command:
- make prefix=/usr/local all
After, install the package by running the following:
- sudo make prefix=/usr/local install
To ensure that the install was successful, you can run git --version
and you should receive relevant output that specifies the current version installed for Git.
Now that you have Git installed, if you want to upgrade to a later version, you can clone the repository, and then build and install. To find the URL to use for the clone operation, navigate to the branch or tag that you want on the project’s GitHub page and then copy the clone URL on the right side:
At the time of writing, the relevant URL is the following:
https://github.com/git/git.git
First change to your home directory:
- cd ~
Then use git clone
on the URL you recently copied:
- git clone https://github.com/git/git.git
This will create a new directory within your current directory where you can rebuild the package and reinstall the newer version, as you did previously. This will overwrite your older version with the new version. Here are the commands again for making and installing the package:
- cd git
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
With this complete, you can be sure that your version of Git is up-to-date.
Now that you have Git installed, you should configure it so that the generated commit messages will contain your correct information.
This can be achieved by using the git config
command. Specifically, you need to provide your name and email address because Git embeds this information into each commit you do.
First, add your name:
- git config --global user.name "Sammy"
Then add your email address:
- git config --global user.email "sammy@domain.com"
You can view all of the configuration items that have been set by running git config with the --list option
:
- git config --list
Outputuser.name=Sammy
user.email=sammy@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit with a text editor. Here we’ll use nano
as an example to edit the Git configuration file:
- nano ~/.gitconfig
[user]
  name = Sammy
  email = sammy@domain.com
There are many other options that you can set, but these are the two essential ones. If you skip this step, you’ll likely receive warnings when you commit to Git. This creates more work for you, because you will then have to amend the commits you have made with the corrected information.
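If you want to experiment with these commands without touching your real configuration, you can point HOME at a scratch directory so the settings land in a throwaway .gitconfig:

```shell
# Try the identity configuration in a scratch environment. Overriding HOME
# keeps this throwaway identity out of your real ~/.gitconfig.
export HOME="$(mktemp -d)"

git config --global user.name "Sammy"
git config --global user.email "sammy@domain.com"

# Reading a single key back confirms where the value was stored.
git config --global user.name    # prints: Sammy
```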
You should now have Git installed and ready to use on your system.
To learn more about how to use Git, check out these articles and series:
Additionally, you can learn more by reviewing our series on An Introduction to Open Source for more information about using Git as part of open-source projects.
Version control systems like Git are essential to modern software development best practices. Versioning allows you to keep track of your software at the source level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Many software projects’ files are maintained in Git repositories, and platforms like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
After installing Git, you’ll need to spend some time getting familiar with the core commands for maintaining a project repository. This tutorial will take you through the first steps of creating and pushing a Git repository on the command line.
If you are converting an existing project into a Git repository, you can proceed to step 2. Otherwise, you can begin by creating a new working directory:
- mkdir testing
Next, move into that working directory:
- cd testing
Once inside that directory, you will need to create a sample file to demonstrate Git’s functionality. You can create an empty file with the touch
command:
- touch file
Once all your project files are in your workspace, you will need to start tracking your files with Git. The next step explains that process.
You can initialize a Git repository in an existing directory by using the git init
command.
- git init
OutputInitialized empty Git repository in /home/sammy/testing/.git/
Next, you’ll need to use the git add
command in order to allow your existing files to be tracked by Git. For the most part, Git will never track new files automatically, so git add
is a necessary step when adding new content to a repository that Git has not previously tracked.
- git add .
You now have an actively tracked Git repository. From now on, each of the steps in this tutorial will be consistent with a regular workflow for updating and committing to an existing Git repository.
Each time you commit changes to a Git repository, you’ll need to provide a commit message. Commit messages summarize the changes that you’ve made. Commit messages can never be empty, but they can be any length – some people prefer long, descriptive commit messages, although platforms like GitHub make shorter commit messages easier to read.
If you are importing an existing project to Git for the first time, it’s typical to just use a message like “Initial Commit”. You can create a commit with the git commit
command:
- git commit -m "Initial Commit" -a
Output[master (root-commit) 1b830f8] Initial Commit
0 files changed
create mode 100644 file
There are two important flags in the above command. The first is -m, which signifies that your commit message (in this case “Initial Commit”) is going to follow. The second is -a, which signifies that your commit should include all modified files that Git is already tracking. This is not Git’s default behavior, but when working with Git in the future, you may find that you include all updated files in your commits most of the time.
In order to commit a single file or a few files, you could have used:
- git commit -m "Initial Commit" file1 file2
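To see the effect of the -a flag in practice, here is a small sketch: after modifying a file that Git already tracks, committing with -a stages and records the change in one step, with no separate git add needed.

```shell
# Append a change to an already-tracked file:
echo "a change" >> file

# Commit with -a: the modification to the tracked file is staged and
# committed together, without running `git add` first:
git commit -a -m "Update file"
```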
In the next step, you’ll push this commit to a remote repository.
Up until this point, you have worked exclusively in your own environment. You can, in fact, still benefit from using Git this way, by using advanced command line functionality in order to track and revert your own changes. However, in order to make use of its popular collaboration features on platforms like GitHub, you’ll need to push changes to a remote server.
The first step to being able to push code to a remote server is providing the URL where the repository lives and giving it a local name. To configure a remote repository and to see a list of all remotes (you can have more than one), use the git remote
command:
- git remote add origin ssh://git@git.domain.tld/repository.git
- git remote -v
Outputorigin ssh://git@git.domain.tld/repository.git (fetch)
origin ssh://git@git.domain.tld/repository.git (push)
The first command adds a remote, called “origin”, and sets the URL to ssh://git@git.domain.tld/repository.git.
You can name your remote whatever you’d like. origin
is a common convention for where your authoritative, upstream copy of the code will live. The URL needs to point to an actual remote repository. For example, if you wanted to push code to GitHub, you would need to use the repository URL that they provide.
Once you have a remote configured, you are able to push your code. You can push code to a remote server by typing the following:
- git push origin main
Note: Prior to 2021, the first branch created in a Git repository was named master
by default. There has since been a push to change the default branch name to main
in order to use more neutral terminology. Although many Git hosting providers such as GitHub have made this change, your local copy of Git may still default to master
. If you receive an error message about a nonexistent branch called main
, try pushing master
instead for now.
OutputCounting objects: 4, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 266 bytes, done.
Total 3 (delta 1), reused 1 (delta 0)
To ssh://git@git.domain.tld/repository.git
0e78fdf..e6a8ddc main -> main
In the future, when you have more commits to push, you can type git push on its own, provided your branch has an upstream configured (for example, by passing the -u flag on your first push); Git will then reuse the recorded remote and branch names automatically.
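A minimal sketch of setting an upstream with -u (short for --set-upstream) on the first push, so that later pushes need no arguments; the remote and branch names are the examples used above, so substitute your own:

```shell
# On the first push, record the upstream for the current branch:
git push -u origin main

# Subsequent pushes of this branch can then omit both names; Git uses
# the recorded upstream:
git push
```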
In this tutorial, you created and pushed a starting Git repository. After committing and pushing your code to a repository such as GitHub, you can opt to spend more time collaborating in the web interface, but it will always be important to be able to work from a local machine on the command line. Maintaining or contributing to projects with multiple committers will involve more complex Git commands, but what you’ve covered in this tutorial is enough to work on personal projects.
Next, you may want to learn about using Git branches, or how to make a pull request on GitHub. You can also refer to the Git reference guide.
This article is deprecated and no longer maintained.
We now provide Git setup instructions for each platform individually.
This article may still be useful as a reference, but may not follow best practices. We strongly recommend using a more recent article.
Open-source projects that are hosted in public repositories benefit from contributions made by the broader developer community, and are typically managed through Git.
A distributed version control system, Git helps both individuals and teams contribute to and maintain open-source software projects. Free to download and use, Git is an example of an open-source project itself.
This tutorial will discuss the benefits of contributing to open-source projects, and go over installing and setting up Git so that you can contribute to software projects.
Open-source software is software that is freely available to use, redistribute, and modify.
Projects that follow the open-source development model encourage a transparent process that is advanced through distributed peer review. Open-source projects can be updated quickly and as needed, and offer reliable and flexible software that is not built on locked proprietary systems.
Contributing to open-source projects helps ensure that they are as good as they can be and representative of the broad base of technology end-users. When end-users contribute to open-source projects through code or documentation, their diverse perspectives provide added value to the project, the project’s end-users, and the larger developer community.
The best way to begin to contribute to open-source projects is to start by contributing to software that you already use. As a user of a particular tool, you best understand what functionalities would be most valuable to the project. Make sure you read any available documentation about the software first. In fact, many open-source projects will have a CONTRIBUTING.md
file in the root directory, which you should read carefully before you contribute. You may also want to get a sense of the interactions between other developers in the community if there are forums about the project available.
Finally, if you’re starting out with contributing to open-source software, it is a good idea to start with something small — each contribution is valuable. You may want to start with fixing typos, adding comments, or writing clearer documentation.
One of the most popular version control systems for software is Git. Git was created in 2005 by Linus Torvalds, the creator of the Linux kernel, and was originally used for the development of the Linux kernel itself. Junio Hamano is the project’s current maintainer.
Many projects maintain their files in a Git repository, and sites like GitHub, GitLab, and Bitbucket have streamlined the process of sharing and contributing to code. Every working directory in Git is a full-fledged repository with complete history and tracking independent of network access or a central server.
Version control has become an indispensable tool in modern software development because these systems allow you to keep track of software at the source level. You and other members of a development team can track changes, revert to previous stages, and branch off from the base code to create alternative versions of files and directories.
Git is so useful for open-source projects because it facilitates the contributions of many developers. Each contributor can branch off from the main or master branch of the code base repository to isolate their own changes, and can then make a pull request to have these changes integrated into the main project.
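That branch-and-merge flow can be sketched on the command line as follows; the repository URL and branch name here are placeholders for illustration, not part of any real project:

```shell
# Clone your copy (fork) of the project; the URL is a placeholder:
git clone https://github.com/your-username/project.git
cd project

# Create a topic branch to isolate your changes from the main branch:
git checkout -b fix-typo

# Edit files, then stage and commit your change:
git add .
git commit -m "Fix typo in documentation"

# Push the branch to your fork; from there you can open a pull request
# against the upstream project:
git push -u origin fix-typo
```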
To use Git to contribute to open-source projects, let’s check if Git is installed, and if it’s not, let’s go through how to install it on your local machine.
First, you will want to check if you have Git command line tools installed on your computer. If you have been making repositories of your own code, then you likely have Git installed on your local machine. Some operating systems also come with Git installed, so it is worth checking before you install.
You can check whether Git is installed and what version you are using by opening up a terminal window in Linux or Mac, or a command prompt window in Windows, and typing the following command:
- git --version
However, if Git is not installed, you will receive an error similar to the following:
-bash: git: command not found
'git' is not recognized as an internal or external command, operable program, or batch file.
In this case, you should install Git on your machine. Let’s go through installation for several of the major operating systems.
By far the easiest way to get Git installed and ready to use is through your Linux distribution’s default repositories. Let’s go through how to install Git on your local Linux machine using this method.
You can use the APT package management tools to update your local package index. Afterwards, you can download and install the program:
- sudo apt update
- sudo apt install git
While this is the fastest method of installing Git, the packaged version may be older than the newest release. If you need the latest release, consider compiling Git from source by using this guide.
From here, you can continue on to the section on Setting Up Git.
We’ll be using yum
, CentOS’s native package manager, to search for and install the latest Git package available in CentOS’s repositories.
Let’s first update the system’s installed packages by running this command:
- sudo yum -y update
The -y
flag automatically answers yes to any confirmation prompts, so the update can proceed without pausing to ask.
Now, we can go ahead and install Git:
- sudo yum install git
While this is the fastest method of installing Git, the packaged version may be older than the newest release. If you need the latest release, consider compiling Git from source by following Option 2 from this guide.
From here, you can continue on to the section on Setting Up Git.
Git packages for Fedora are available through both yum
and dnf
. Introduced in Fedora 18, DNF, or Dandified Yum, has been the default package manager for Fedora since Fedora 22.
From your terminal window, update dnf and install Git:
- sudo dnf update
- sudo dnf install git
If you have an older version of Fedora, you can use the yum
command instead. Let’s first update yum
, then install Git:
- sudo yum update
- sudo yum install git
From here, you can continue on to the section on Setting Up Git.
On a local Macintosh computer, if you type a Git command into your Terminal window (as in git --version
above), you’ll be prompted to install Git if it is not already on your system. When you receive this prompt, you should agree to have Git installed and follow the instructions and respond to the prompts in your Terminal window.
You can install the most recent version of Git onto your Mac through the binary installer. An OS X Git installer is maintained and available for download through the Git website, and the download will start automatically when you visit the installer page.
Once Git is fully installed, you can continue on to the section on Setting Up Git.
For Windows, the official build is available for you to download through the Git website, and the download will start automatically when you visit the download page.
There is also an open-source project called Git for Windows, which is separate from the official Git website. This tool provides both command line and graphical user interface tools for using Git effectively on your Windows machine. For more information about this project and to inspect and download the code, visit the Git for Windows project site.
Once Git is fully installed, you can continue on to the section on Setting Up Git.
Now that you have Git installed, you need to do a few things so that the commit messages that will be generated for you will contain your correct information.
The easiest way of doing this is through the git config
command. Specifically, we need to provide our name and email address because Git embeds this information into each commit we make. We can go ahead and add this information by typing:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
We can review all of the configuration items that have been set by typing:
- git config --list
user.name=Your Name
user.email=youremail@domain.com
As you may notice, this has a slightly different format. The information is stored in your Git configuration file, which you can optionally edit by hand with a text editor, like nano for example:
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Once you’re done editing your file, you can exit nano by typing the control and x
keys, and when prompted to save the file press y
.
There are many other options that you can set, but these are the two essential ones needed to prevent warnings in the future.
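As an example of those other options, you can set your preferred text editor for commit messages and, on Git 2.28 or later, the default branch name for new repositories; the values below are just illustrations:

```shell
# Use nano when Git needs to open an editor (for commit messages,
# interactive rebases, and so on):
git config --global core.editor "nano"

# Name the initial branch "main" in newly created repositories
# (available in Git 2.28 and later):
git config --global init.defaultBranch main
```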
With Git installed and set up on your local machine, you are now ready to use Git for version control of your own software projects as well as contribute to open-source projects that are open to the public.
Adding your own contributions to open-source software is a great way to become more involved in the broader developer community, and help to ensure that software made for the public is of high quality and fully representative of the end-users.
When you maintain an open-source software repository, you’re taking on a leadership role. Whether you’re the founder of a project who released it to the public for use and contributions, or you’re working on a team and are maintaining one specific aspect of the project, you are going to be providing an important service to the larger developer community.
While open-source contributions through pull requests from the developer community are crucial for ensuring that software is as useful as it can be for end users, maintainers have a real impact on shaping the overall project. Repository maintainers are extremely involved in the open-source projects they manage, from day-to-day organization and development, to interfacing with the public and providing prompt and effective feedback to contributors.
This guide will take you through some tips for maintaining public repositories of open-source software. Being a leader of an open-source project comes with both technical and non-technical responsibilities to help foster a user-base and community around your project. Taking on the role of a maintainer is an opportunity to learn from others, get experience with project management, and watch your project grow and change as your users become invested contributors.
Documentation that is thorough, well-organized, and serves the intended communities of your project will help expand your user base. Over time, your user base will become the contributors to your open-source project.
Since you’ll be thinking through the code you are creating anyway, and may even be jotting down notes, it can be worthwhile to incorporate documentation as part of your development process while it is fresh in your mind. You may even want to consider writing the documentation before the code, following the philosophy of a documentation-driven development approach that documents features first and develops those features after writing out what they will do.
Along with your code, there are a few files of documentation that you’ll want to keep in your top-level directory:
A README.md
file that provides a summary of the project and your goals.
A CONTRIBUTING.md
file with contribution instructions.
Documentation can come in many forms and can target different audiences. As part of your documentation, and depending on the scope of your work, you may decide to do one or more of the following:
Your project may be better suited to certain kinds of documentation than others, but providing more than one approach to the software will help your user base better understand how to interact with your work.
When writing documentation, or recording voice for a video, it is important to be as clear as possible. It is best to make no assumptions about the technical ability of your audience. You’ll also want to approach your documentation from the top down — that is, explain what your software does in a general way (e.g., automate server tasks, build a website, animate sprites for game development), before going into details.
Though English has become a universal language in the technology sphere, you’ll still want to consider who your expected users are and how to reach them. English may be the best choice to have access to a broad user base, but you’ll want to keep in mind that many people are approaching your documentation as non-native English speakers, so work to favor accessible language that will not confuse your readers or viewers.
Try to write documentation as though you are writing to a collaborator who needs to be brought up to speed on the current project; after all, you’ll want to encourage potential contributors to make pull requests to the project.
Issues are typically a way to keep track of or report bugs, or to request new features to be added to the code base. Open-source repository hosting services like GitHub, GitLab, and Bitbucket will provide you with an interface for yourself and others to keep track of issues within your repository. When releasing open-source code to the public, you should expect to have issues opened by the community of users. Organizing and prioritizing issues will give you a good road map of upcoming work on your project.
Because any user can file an issue, not all issues will be reporting bugs or be feature requests; you may receive questions via the issue tracker tool, or you may receive requests for smaller enhancements to the user interface, for example. It is best to organize these issues as much as possible and to be communicative to the users who are creating these issues.
Issues should represent concrete tasks that need to be done on the source code, and you will need to prioritize them accordingly. You and your team will have an understanding of the amount of time and energy you or contributors can devote to filed issues, and together you can work collaboratively to make decisions and create an actionable plan. When you know you won’t be able to get to a particular issue within a quick timeframe, you can still comment on the issue to let the user know that you have read the issue and that you’ll get to it when you can, and if you are able to you can provide an expected timeline for when you can review the issue again.
For issues that are feature requests or enhancements, you can ask the person who filed the issue whether they are able to contribute code themselves. You can direct them to the CONTRIBUTING.md
file and any other relevant documentation.
Since questions often do not represent concrete tasks, commenting on the issue to courteously direct the user to relevant documentation can be a good option to keep your interactions professional and kind. If documentation for this question does not exist, now is a great time to add the relevant documentation, and express your thanks to the user for identifying this oversight. If you are getting a lot of questions via issues, you may consider creating a FAQ section of your documentation, or a wiki or forum for others to participate in question-answering.
Whenever a user reports an issue, try to be as kind and gracious as possible. Issues are indicators that users like your software and want to make it better!
Working to organize issues as best you can will keep your project up to date and relevant to its community of users. Remove issues that are outside of the scope of your project or become stale, and prioritize the others so that you are able to make continuous progress.
You can improve the efficiency and quality of your project by automating maintenance tasks and testing. Automated maintenance and testing can continuously check the accuracy of your code and provide a more formal process for approving contributor submissions. This helps free up time so you can focus on the most important aspects of your project. The best news is that there are plenty of tools that have already been developed and may fulfill the needs of your project.
Set up automatic testing for incoming contributions by requiring status checks. Make sure to include information about how testing works for your project in a CONTRIBUTING.md
file.
Check out the tools that have been developed to automate maintenance tasks. Several possibilities include automating your releases and code review, or closing issues if an author doesn’t respond when information is requested.
Keep in mind that less is more. Be intentional about the processes and tasks you choose to automate in ways that can optimize efficiency, production, and quality for your project, yourself, and contributors.
The more you welcome contributors to your project and reward their efforts, the more likely you’ll be to encourage more contributions. To get people started, you’ll want to include a CONTRIBUTING.md
file in the top-level of your repository, and a pointer to that file in your README.md
file.
A good file on contributing will outline how to get started working on the project as a developer. You may want to offer a step-by-step guide, or provide a checklist for developers to follow, explaining how to successfully get their code merged into the project through a pull request.
In addition to documentation on how to contribute to the project, don’t forget to keep the code consistent and readable throughout. Code that is easy to understand through comments and clear and consistent usage will go a long way to making contributors feel like they can jump in on the project.
Finally, maintain a list of contributors or authors. You can invite contributors to add themselves to the list no matter what their contribution (even fixing typos is valuable, and can lead to more contributions in the future). This provides a way to recognize contributors for their work on the project in a public-facing way that they can point to, while also making others aware of how well contributors are treated.
By empowering users through documentation, being responsive to issues, and encouraging them to participate, you are already well on your way to building out the community around your open-source project. Users that you keep happy and who you treat as collaborators will in turn promote your software.
Additionally, you can work to promote your project through various avenues:
You’ll want to tailor your promotion to the scope of your project and the number of active team members and contributors you have working with you.
As your community grows, you can provide more spaces for contributors, users, and maintainers to interact. Some options you may consider include:
Consider your core user base and the scope of your project — including the number of people who are maintaining the project and the resources you have available — before rolling out these potential spaces, and seek feedback from your community about what works for them.
Above all, it is important to be kind and show some love in all of your interactions with your community. Being a gracious maintainer can be difficult, but it will pay off for your project down the line.
Repository maintainers are incredibly important within the larger open-source community. Though it requires significant investment and hard work, it is often a rewarding experience that allows you to grow as a developer and a contributor. Being an approachable and kind maintainer can go a long way to advance the development of a project that you care about.
Having an automated deployment process is a requirement for a scalable and resilient application, and GitOps, or Git-based DevOps, has rapidly become a popular method of organizing CI/CD with a Git repository as a “single source of truth.” Tools like CircleCI integrate with your GitHub repository, allowing you to test and deploy your code automatically every time you make a change to your repository. When this kind of CI/CD is combined with the flexibility of Kubernetes infrastructure, you can build an application that scales easily with changing demand.
In this article you will use CircleCI to deploy a sample application to a DigitalOcean Kubernetes (DOKS) cluster. After reading this tutorial, you’ll be able to apply these same techniques to deploy other CI/CD tools that are buildable as Docker images.
To follow this tutorial, you’ll need to have:
A DigitalOcean account, which you can set up by following the Sign up for a DigitalOcean Account documentation.
Docker installed on your workstation, and knowledge of how to build, remove, and run Docker images. You can install Docker on Ubuntu 20.04 by following the tutorial on How To Install and Use Docker on Ubuntu 20.04.
Knowledge of how Kubernetes works and how to create deployments and services on it. It’s highly recommended to read the Introduction to Kubernetes article.
The kubectl
command line interface tool installed on the computer from which you will control your cluster.
An account on Docker Hub to be used to store your sample application image.
A GitHub account and knowledge of Git basics. You can follow the tutorial series Introduction to Git: Installation, Usage, and Branches and How To Create a Pull Request on GitHub to build this knowledge.
For this tutorial, you will use Kubernetes version 1.22.7
and kubectl
version 1.23.5
.
Note: You can skip this section if you already have a running DigitalOcean Kubernetes cluster.
In this first step, you will create the DigitalOcean Kubernetes (DOKS) cluster from which you will deploy your sample application. The kubectl
commands executed from your local machine will change or retrieve information directly from the Kubernetes cluster.
Go to the Kubernetes page on your DigitalOcean account.
Click Create a Kubernetes cluster, or click the green Create button at the top right of the page and select Kubernetes from the dropdown menu.
The next page is where you are going to specify the details of your cluster. On Select a Kubernetes version pick version 1.22.7-do.0. If this one is not available, choose the latest recommended version.
For Choose a datacenter region, choose the region closest to you. This tutorial will use San Francisco.
You then have the option to build your Node pool(s). On Kubernetes, a node is a worker machine, which contains the services necessary to run pods. On DigitalOcean, each node is a Droplet. Your node pool will consist of a single Basic node. Select the 1GB/1vCPU configuration and set the number of nodes to 1.
You can add extra tags if you want; this can be useful if you plan to use the DigitalOcean API or just to better organize your node pools.
On Choose a name, for this tutorial, use kubernetes-deployment-tutorial
. This will make it easier to follow along with the next sections. Finally, click the green Create Cluster button to create your cluster. Wait until the cluster creation has completed.
After cluster creation, there will be instructions to connect to your cluster. Follow the instructions on the Automated (recommended) tab or download the kubeconfig file under the Manual tab. This is the file you will be using to authenticate the kubectl
commands you are going to run against your cluster. Download it to your kubectl
machine.
The default way to use that file is to always pass the --kubeconfig
flag and the path to it on all commands you run with kubectl
. For example, if you downloaded the config file to Desktop
, you would run the kubectl get pods
command like this:
- kubectl --kubeconfig ~/Desktop/kubernetes-deployment-tutorial-kubeconfig.yaml get pods
This would yield the following output:
OutputNo resources found.
This means you accessed your cluster. The No resources found.
message is correct, since you don’t have any pods on your cluster.
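As an alternative to passing the flag each time, kubectl also honors the KUBECONFIG environment variable, so you can export the path once per shell session; the path below assumes the Desktop location used in the example above:

```shell
# Point kubectl at the downloaded config for this shell session:
export KUBECONFIG=~/Desktop/kubernetes-deployment-tutorial-kubeconfig.yaml

# Every kubectl command run in this session now uses that file:
kubectl get pods
```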
If you are not maintaining any other Kubernetes clusters you can copy the kubeconfig file to a folder on your home directory called .kube
. Create that directory in case it does not exist:
- mkdir -p ~/.kube
Then copy the config file into the newly created .kube
directory and rename it config
:
- cp current_kubernetes-deployment-tutorial-kubeconfig.yaml_file_path ~/.kube/config
The config file should now have the path ~/.kube/config
. This is the file that kubectl
reads by default when running any command, so there is no need to pass --kubeconfig
anymore. Run the following:
- kubectl get pods
You will receive the following output:
OutputNo resources found in default namespace.
Now access the cluster with the following:
- kubectl get nodes
You will receive the list of nodes on your cluster. The output will be similar to this:
OutputNAME STATUS ROLES AGE VERSION
pool-upkissrv3-uzm8z Ready <none> 12m v1.22.7
In this tutorial you are going to use the default
namespace for all kubectl
commands and manifest files, which are files that define the workload and operating parameters of work in Kubernetes. Namespaces are like virtual clusters inside your single physical cluster. You can change to any other namespace you want; just make sure to always pass it using the --namespace
flag to kubectl
, and/or specifying it on the Kubernetes manifests metadata field. They are a great way to organize the deployments of your team and their running environments; read more about them in the official Kubernetes overview on Namespaces.
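As a brief sketch, a namespace can itself be created from a manifest like the following, and then targeted per command with the --namespace flag; the name staging here is purely illustrative:

```yaml
# Illustrative Namespace manifest; apply it with, for example,
# `kubectl apply -f namespace.yml`, then target it with
# `kubectl get pods --namespace staging`:
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```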
By finishing this step you are now able to run kubectl
against your cluster. In the next step, you will create the local Git repository you are going to use to house your sample application.
You are now going to structure your sample deployment in a local Git repository. You will also create some Kubernetes manifests that will be global to all deployments you are going to do on your cluster.
Note: This tutorial has been tested on Ubuntu 20.04, and the individual commands are styled to match this OS. However, most of the commands here can be applied to other Linux distributions with little to no change needed, and commands like kubectl
are platform-agnostic.
First, create a new Git repository locally that you will push to GitHub later on. Create an empty folder called do-sample-app
in your home directory and cd
into it:
- mkdir ~/do-sample-app
- cd ~/do-sample-app
Now create a new Git repository in this folder with the following command:
- git init .
Inside this repository, create an empty folder called kube
:
- mkdir ~/do-sample-app/kube/
This will be the location where you are going to store the Kubernetes resources manifests related to the sample application that you will deploy to your cluster.
Now, create another folder called kube-general
, but this time outside of the Git repository you just created. Make it inside your home directory:
- mkdir ~/kube-general/
This folder is outside of your Git repository because it will be used to store manifests that are not specific to a single deployment on your cluster, but common to multiple ones. This will allow you to reuse these general manifests for different deployments.
With your folders created and the Git repository of your sample application in place, it’s time to arrange the authentication and authorization of your DOKS cluster.
It’s generally not recommended to use the default admin user to authenticate from other services into your Kubernetes cluster. If your keys on the external provider were compromised, your whole cluster would be compromised as well.
Instead you are going to use a single Service Account with a specific Role, which is all part of the RBAC Kubernetes authorization model.
This authorization model is based on Roles and Resources. You start by creating a Service Account, which is basically a user on your cluster, then you create a Role, in which you specify what resources it has access to on your cluster. Finally, you create a Role Binding, which is used to make the connection between the Role and the Service Account previously created, granting the Service Account access to all resources the Role has access to.
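As a sketch of how the three pieces fit together, a Role granting read-only access to pods and a Role Binding attaching it to a Service Account named cicd might look like the following; the names and rules here are illustrative assumptions, not the exact manifests this tutorial builds:

```yaml
# Illustrative Role: allows reading pods in the default namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Illustrative RoleBinding: grants the cicd Service Account every
# permission listed in the pod-reader Role.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: cicd
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```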
The first Kubernetes resource you are going to create is the Service Account for your CI/CD user, which this tutorial will name cicd
.
Create the file cicd-service-account.yml
inside the ~/kube-general
folder, and open it with your favorite text editor:
- nano ~/kube-general/cicd-service-account.yml
Write the following content on it:
apiVersion: v1
kind: ServiceAccount
metadata:
name: cicd
namespace: default
This is a YAML file; all Kubernetes resources are represented using one. In this case you are saying this resource is from Kubernetes API version v1
(internally kubectl
creates resources by calling Kubernetes HTTP APIs), and it is a ServiceAccount
.
The metadata
field is used to add more information about this resource. In this case, you are giving this ServiceAccount
the name cicd
, and creating it on the default
namespace.
You can now create this Service Account on your cluster by running kubectl apply
, like the following:
- kubectl apply -f ~/kube-general/
You will receive output similar to the following:
Outputserviceaccount/cicd created
To make sure your Service Account is working, try to log in to your cluster using it. To do that, you first need to obtain its access token and store it in an environment variable. Every Service Account has an access token, which Kubernetes stores as a Secret.
You can retrieve this secret using the following command:
- TOKEN=$(kubectl get secret $(kubectl get secret | grep cicd-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode)
Some explanation on what this command is doing:
$(kubectl get secret | grep cicd-token | awk '{print $1}')
This retrieves the name of the Secret related to your cicd Service Account. kubectl get secret returns the list of Secrets in the default namespace, grep filters for the line related to your cicd Service Account, and awk prints the first field of that line, which is the Secret's name.
kubectl get secret preceding-command -o jsonpath='{.data.token}' | base64 --decode
This will retrieve only the secret for your Service Account token. You then access the token field using jsonpath
, and pass the result to base64 --decode
. This is necessary because the token is stored as a Base64 string. The token itself is a JSON Web Token.
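Since the token is a JSON Web Token, you can inspect its payload by splitting it on the dots and Base64-decoding the middle segment. The following is a minimal sketch using a made-up sample token (not a real credential); a real Service Account token's payload contains information about the account:

```shell
# Made-up sample token, for illustration only. JWTs are three
# dot-separated Base64URL segments: header.payload.signature.
TOKEN_SAMPLE='eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJjaWNkIn0.c2lnbmF0dXJl'

# Take the middle (payload) segment
PAYLOAD=$(printf '%s' "$TOKEN_SAMPLE" | cut -d. -f2)

# JWT segments drop trailing padding; restore it so base64 can decode
case $(( ${#PAYLOAD} % 4 )) in
  2) PAYLOAD="${PAYLOAD}==" ;;
  3) PAYLOAD="${PAYLOAD}=" ;;
esac

printf '%s' "$PAYLOAD" | base64 --decode
```

Running this prints `{"sub":"cicd"}`, the decoded payload of the sample token.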
You can now try to retrieve your pods with the cicd
Service Account. Run the following command, replacing server-from-kubeconfig-file
with the server URL that can be found after server:
in ~/.kube/config
(the config file you downloaded for the cluster). This command will give a specific error that you will learn about later in this tutorial:
- kubectl --insecure-skip-tls-verify --kubeconfig="/dev/null" --server=server-from-kubeconfig-file --token=$TOKEN get pods
--insecure-skip-tls-verify
skips the step of verifying the certificate of the server, since you are just testing and do not need to verify this. --kubeconfig="/dev/null"
is to make sure kubectl
does not read your config file and credentials but instead uses the token provided.
The output should be similar to this:
OutputError from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:cicd" cannot list resource "pods" in API group "" in the namespace "default"
This is an error, but it shows that the token worked. The error you received is about your Service Account not having the necessary authorization to list the resource pods
, but you were able to access the server itself. If your token had not worked, the error would have been the following one:
Outputerror: You must be logged in to the server (Unauthorized)
Now that the authentication was a success, the next step is to fix the authorization error for the Service Account. You will do this by creating a role with the necessary permissions and binding it to your Service Account.
Kubernetes has two ways to define roles: using a Role
or a ClusterRole
resource. A Role
applies to a single namespace, while a ClusterRole
is valid for the whole cluster.
Since you are working within a single namespace in this tutorial, you will use a Role
.
Create the file ~/kube-general/cicd-role.yml
and open it with your favorite text editor:
- nano ~/kube-general/cicd-role.yml
The basic idea is to grant access to do everything related to most Kubernetes resources in the default
namespace. Your Role
would look like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cicd
namespace: default
rules:
- apiGroups: ["", "apps", "batch", "extensions"]
resources: ["deployments", "services", "replicasets", "pods", "jobs", "cronjobs"]
verbs: ["*"]
This YAML has some similarities with the one you created previously, but here you are saying this resource is a Role
, and it’s from the Kubernetes API rbac.authorization.k8s.io/v1
. You are naming your role cicd
, and creating it on the same namespace you created your ServiceAccount
, the default
one.
Then you have the rules
field, which is a list of resources this role has access to. In Kubernetes, resources are defined based on the API group they belong to, the resource kind itself, and the actions you can perform on them, which are represented by verbs. These verbs are similar to HTTP methods.
In this case you are saying that your Role
is allowed to do everything, *
, on the following resources: deployments
, services
, replicasets
, pods
, jobs
, and cronjobs
. This also applies to those resources belonging to the following API groups: ""
(empty string), apps
, batch
, and extensions
. The empty string means the root API group. If you use apiVersion: v1
when creating a resource it means this resource is part of this API group.
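For comparison, restricting a Role is just a matter of narrowing the lists in rules. A hypothetical read-only variant of the rules section, able to inspect pods but not change anything, could look like this (using the standard Kubernetes read verbs):

```yaml
# Hypothetical read-only variant: only the read verbs,
# and only on pods in the core ("") API group.
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```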
A Role
by itself does nothing; you must also create a RoleBinding
, which binds a Role
to something, in this case, a ServiceAccount
.
Create the file ~/kube-general/cicd-role-binding.yml
and open it:
- nano ~/kube-general/cicd-role-binding.yml
Add the following lines to the file:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cicd
namespace: default
subjects:
- kind: ServiceAccount
name: cicd
namespace: default
roleRef:
kind: Role
name: cicd
apiGroup: rbac.authorization.k8s.io
Your RoleBinding
has some specific fields that have not yet been covered in this tutorial. roleRef
is the Role
you want to bind to something; in this case it is the cicd
role you created earlier. subjects
is the list of resources you are binding your role to; in this case it’s a single ServiceAccount
called cicd
.
Note: If you had used a ClusterRole
, you would have to create a ClusterRoleBinding
instead of a RoleBinding
. The file would be almost the same. The only difference would be that it would have no namespace
field inside the metadata
.
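For illustration, a cluster-wide equivalent of this binding could look like the following sketch (it assumes a ClusterRole named cicd exists). Note that the subject keeps its namespace field, because the Service Account itself is still a namespaced resource; only the binding's own metadata loses it:

```yaml
# Hypothetical cluster-wide equivalent of the RoleBinding above
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cicd
subjects:
- kind: ServiceAccount
  name: cicd
  namespace: default
roleRef:
  kind: ClusterRole
  name: cicd
  apiGroup: rbac.authorization.k8s.io
```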
With those files created you will be able to use kubectl apply
again. Create those new resources on your Kubernetes cluster by running the following command:
- kubectl apply -f ~/kube-general/
You will receive output similar to the following:
Outputrolebinding.rbac.authorization.k8s.io/cicd created
role.rbac.authorization.k8s.io/cicd created
serviceaccount/cicd unchanged
Now, try the command you ran previously:
- kubectl --insecure-skip-tls-verify --kubeconfig="/dev/null" --server=server-from-kubeconfig-file --token=$TOKEN get pods
Since you have no pods, this will yield the following output:
OutputNo resources found in default namespace.
In this step, you gave the Service Account you are going to use on CircleCI the necessary authorization to do meaningful actions on your cluster like listing, creating, and updating resources. Now it’s time to create your sample application.
Note: All commands and files created from now on will start from the folder ~/do-sample-app
you created earlier. This is because you are now creating files specific to the sample application that you are going to deploy to your cluster.
The Kubernetes Deployment
you are going to create will use the Nginx image as a base, and your application will be a simple static HTML page. This is a great start because it allows you to test if your deployment works by serving HTML directly from Nginx. As you will see later on, you can redirect all traffic coming to a local address:port
to your deployment on your cluster to test if it’s working.
Inside the repository you set up earlier, create a new Dockerfile
file and open it with your text editor of choice:
- nano ~/do-sample-app/Dockerfile
Write the following on it:
FROM nginx:1.21
COPY index.html /usr/share/nginx/html/index.html
This tells Docker to build the application container from the nginx
base image, copying index.html into the directory Nginx serves by default.
Now create a new index.html
file and open it:
- nano ~/do-sample-app/index.html
Write the following HTML content:
<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
Kubernetes Sample Application
</body>
This HTML will display a simple message that will let you know if your application is working.
You can test if the image is correct by building and then running it.
First, build the image with the following command, replacing dockerhub-username
with your own Docker Hub username. The username must be included in the tag so that the image can later be pushed to your Docker Hub account:
- docker build ~/do-sample-app/ -t dockerhub-username/do-kubernetes-sample-app
Now run the image. Use the following command, which starts your image and forwards any local traffic on port 8080
to the port 80
inside the image, the port Nginx listens to by default:
- docker run --rm -it -p 8080:80 dockerhub-username/do-kubernetes-sample-app
The command prompt will stop being interactive while the command is running. Instead you will see the Nginx access logs. If you open localhost:8080
on any browser it should show an HTML page with the content of ~/do-sample-app/index.html
. In case you don’t have a browser available, you can open a new terminal window and use the following curl
command to fetch the HTML from the webpage:
- curl localhost:8080
You will receive the following output:
Output<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
Kubernetes Sample Application
</body>
Stop the container (CTRL
+ C
on the terminal where it’s running), and submit this image to your Docker Hub account. To do this, first log in to Docker Hub:
- docker login
Fill in the required information about your Docker Hub account, then push the image with the following command (don’t forget to replace the dockerhub-username
with your own):
- docker push dockerhub-username/do-kubernetes-sample-app
You have now pushed your sample application image to your Docker Hub account. In the next step, you will create a Deployment on your DOKS cluster from this image.
With your Docker image created and working, you will now create a manifest telling Kubernetes how to create a Deployment from it on your cluster.
Create the YAML deployment file ~/do-sample-app/kube/do-sample-deployment.yml
and open it with your text editor:
- nano ~/do-sample-app/kube/do-sample-deployment.yml
Write the following content on the file, making sure to replace dockerhub-username
with your Docker Hub username:
apiVersion: apps/v1
kind: Deployment
metadata:
name: do-kubernetes-sample-app
namespace: default
labels:
app: do-kubernetes-sample-app
spec:
replicas: 1
selector:
matchLabels:
app: do-kubernetes-sample-app
template:
metadata:
labels:
app: do-kubernetes-sample-app
spec:
containers:
- name: do-kubernetes-sample-app
image: dockerhub-username/do-kubernetes-sample-app:latest
ports:
- containerPort: 80
name: http
Kubernetes deployments are from the API group apps
, so the apiVersion
of your manifest is set to apps/v1
. On metadata
you added a new field you have not used previously, called metadata.labels
. This is useful to organize your deployments. The field spec
represents the behavior specification of your deployment. A deployment is responsible for managing one or more pods; in this case the spec.replicas
field is set to 1, so the deployment will create and manage a single pod.
To manage pods, your deployment must know which pods it’s responsible for. The spec.selector
field is the one that gives it that information. In this case the deployment will be responsible for all pods with tags app=do-kubernetes-sample-app
. The spec.template
field contains the details of the Pod
this deployment will create. Inside the template you also have a spec.template.metadata
field. The labels
inside this field must match the ones used on spec.selector
. spec.template.spec
is the specification of the pod itself. In this case it contains a single container, called do-kubernetes-sample-app
. The image of that container is the image you built previously and pushed to Docker Hub.
This YAML file also tells Kubernetes that this container exposes the port 80
, and gives this port the name http
.
To access the port exposed by your Deployment
, create a Service. Make a file named ~/do-sample-app/kube/do-sample-service.yml
and open it with your favorite editor:
- nano ~/do-sample-app/kube/do-sample-service.yml
Next, add the following lines to the file:
apiVersion: v1
kind: Service
metadata:
name: do-kubernetes-sample-app
namespace: default
labels:
app: do-kubernetes-sample-app
spec:
type: ClusterIP
ports:
- port: 80
targetPort: http
name: http
selector:
app: do-kubernetes-sample-app
This file gives your Service
the same labels used on your deployment. This is not required, but it helps to organize your applications on Kubernetes.
The service resource also has a spec
field. The spec.type
field is responsible for the behavior of the service. In this case it’s a ClusterIP
, which means the service is exposed on a cluster-internal IP, and is only reachable from within your cluster. This is the default spec.type
for services. spec.selector
is the label selector criteria that should be used when picking the pods to be exposed by this service. Since your pod has the tag app: do-kubernetes-sample-app
, you used it here. spec.ports
are the ports exposed by the pod’s containers that you want to expose from this service. Your pod has a single container which exposes port 80
, named http
, so you are using it here as targetPort
. The service exposes that port on port 80
too, with the same name, but you could have used a different port/name combination than the one from the container.
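As a hypothetical variation, the ports section could expose the container's http port on service port 8080 under a different name, with nothing else in the Service needing to change:

```yaml
# Hypothetical variation: same container port (targetPort),
# but a different service-side port and name.
ports:
  - port: 8080
    targetPort: http
    name: web
```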
With your Service
and Deployment
manifest files created, you can now create those resources on your Kubernetes cluster using kubectl
:
- kubectl apply -f ~/do-sample-app/kube/
You will receive the following output:
Outputdeployment.apps/do-kubernetes-sample-app created
service/do-kubernetes-sample-app created
Test if this is working by forwarding one port on your machine to the port that the service is exposing inside your Kubernetes cluster. You can do that using kubectl port-forward
:
- kubectl port-forward $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') 8080:80
The subshell command $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}')
retrieves the name of the pod matching the app
tag you used. Alternatively, you could have retrieved the name from the list of pods by running kubectl get pods
.
After you run port-forward
, the shell will stop being interactive, and will instead output the requests redirected to your cluster:
OutputForwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Opening localhost:8080
on any browser should render the same page you saw when you ran the container locally, but it’s now coming from your Kubernetes cluster. As before, you can also use curl
in a new terminal window to check if it’s working:
- curl localhost:8080
You will receive the following output:
Output<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
Kubernetes Sample Application
</body>
Next, it’s time to push all the files you created to your GitHub repository. To do this you must first create a repository on GitHub called digital-ocean-kubernetes-deploy
.
In order to keep this repository simple for demonstration purposes, do not initialize the new repository with a README
, license
, or .gitignore
file when asked on the GitHub UI. You can add these files later on.
With the repository created, point your local repository to the one on GitHub. To do this, press CTRL
+ C
to stop kubectl port-forward
and get the command line back, then run the following commands to add a new remote called origin
:
- cd ~/do-sample-app/
- git remote add origin https://github.com/your-github-account-username/digital-ocean-kubernetes-deploy.git
There should be no output from the preceding command.
Next, commit all the files you created up to now to the GitHub repository. First, add the files:
- git add --all
Next, commit the files to your repository, with a commit message in quotation marks:
- git commit -m "initial commit"
This will yield output similar to the following:
Output[master (root-commit) db321ad] initial commit
4 files changed, 47 insertions(+)
create mode 100644 Dockerfile
create mode 100644 index.html
create mode 100644 kube/do-sample-deployment.yml
create mode 100644 kube/do-sample-service.yml
Finally, push the files to GitHub:
- git push -u origin master
You will be prompted for your username and password. Once you have entered this, you will see output like this:
OutputCounting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (7/7), 907 bytes | 0 bytes/s, done.
Total 7 (delta 0), reused 0 (delta 0)
To github.com:your-github-account-username/digital-ocean-kubernetes-deploy.git
* [new branch] master -> master
Branch master set up to track remote branch master from origin.
If you go to your GitHub repository page you will now see all the files there. With your project up on GitHub, you can now set up CircleCI as your CI/CD tool.
For this tutorial, you will use CircleCI to automate deployments of your application whenever the code is updated, so you will need to log in to CircleCI using your GitHub account and set up your repository.
First, go to their homepage https://circleci.com
, and press Sign Up.
You are using GitHub, so click the green Sign Up with GitHub button.
CircleCI will redirect to an authorization page on GitHub. CircleCI needs some permissions on your account to be able to start building your projects. This allows CircleCI to obtain your email, deploy keys and permission to create hooks on your repositories, and add SSH keys to your account. If you need more information on what CircleCI is going to do with your data, check their documentation about GitHub integration.
After authorizing CircleCI you will be redirected to the Projects page. Here you can set up your GitHub repository in CircleCI. Select Set Up Project in the entry for your digital-ocean-kubernetes-deploy
repo. Then select the Faster: Commit a starter CI pipeline to a new branch option. This will create a new circleci-project-setup branch for your project.
Next, specify some environment variables in the CircleCI settings. You can find the settings of the project by selecting the Project Settings button in the top right section of the page then selecting Environment Variables. Press Add Environment Variable to create new environment variables.
First, add two environment variables called DOCKERHUB_USERNAME
and DOCKERHUB_PASS
, which will be needed later on to push the image to Docker Hub. Set the values to your Docker Hub username and password, respectively.
Then add three more: KUBERNETES_TOKEN
, KUBERNETES_SERVER
, and KUBERNETES_CLUSTER_CERTIFICATE
.
The value of KUBERNETES_TOKEN
will be the value of the local environment variable you used earlier to authenticate on your Kubernetes cluster using your Service Account user. If you have closed the terminal, you can always run the following command to retrieve it again:
- kubectl get secret $(kubectl get secret | grep cicd-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode
KUBERNETES_SERVER
will be the string you passed as the --server
flag to kubectl
when you logged in with your cicd
Service Account. You can find this after server:
in the ~/.kube/config
file, or in the file kubernetes-deployment-tutorial-kubeconfig.yaml
downloaded from the DigitalOcean dashboard when you made the initial setup of your Kubernetes cluster.
KUBERNETES_CLUSTER_CERTIFICATE
should also be available on your ~/.kube/config
file. It’s the certificate-authority-data
field on the clusters
item related to your cluster. It should be a long string; make sure to copy all of it.
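If you prefer to extract the value on the command line rather than copying it by hand, a grep/awk one-liner works. The sketch below runs against a tiny fake kubeconfig with a placeholder value so you can see what gets extracted; point the same pipeline at your real ~/.kube/config instead:

```shell
# Fake kubeconfig fragment with a placeholder value, for demonstration only
cat > /tmp/fake-kubeconfig <<'EOF'
clusters:
- cluster:
    certificate-authority-data: TE9ORy1CQVNFNjQtU1RSSU5H
    server: https://example.com
EOF

# Print the second field of the matching line: the Base64 certificate string
grep 'certificate-authority-data' /tmp/fake-kubeconfig | awk '{print $2}'
```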
Those environment variables must be defined here because most of them contain sensitive information, and it is not secure to place them directly on the CircleCI YAML config file.
With CircleCI listening for changes on your repository and the environment variables configured, it’s time to create the configuration file.
Make a directory called .circleci
inside your sample application repository:
- mkdir ~/do-sample-app/.circleci/
Inside this directory, create a file named config.yml
and open it with your favorite editor:
- nano ~/do-sample-app/.circleci/config.yml
Add the following content to the file, making sure to replace dockerhub-username
with your Docker Hub username:
version: 2.1
jobs:
build:
docker:
- image: circleci/buildpack-deps:bullseye
environment:
IMAGE_NAME: dockerhub-username/do-kubernetes-sample-app
working_directory: ~/app
steps:
- checkout
- setup_remote_docker
- run:
name: Build Docker image
command: |
docker build -t $IMAGE_NAME:latest .
- run:
name: Push Docker Image
command: |
echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
docker push $IMAGE_NAME:latest
workflows:
version: 2
build-deploy-master:
jobs:
- build:
filters:
branches:
only: master
This sets up the workflow build-deploy-master
, which right now has a single job called build
. This job will run for every commit to the master
branch.
The build
job is using the image circleci/buildpack-deps:bullseye
to run its steps, which is an image from CircleCI based on the official buildpack-deps
Docker image, but with some extra tools installed, like the Docker binaries themselves.
The build job has four steps:
checkout
retrieves the code from GitHub.setup_remote_docker
sets up a remote, isolated environment for each build. This is required before you use any docker
command inside a job step. Because the steps themselves run inside a Docker image, setup_remote_docker
allocates a separate machine on which to run the Docker commands.
step builds the image, as you did previously in your local environment. For that you are using the environment variable you declared in environment:
, IMAGE_NAME
.run
step pushes the image to Docker Hub, using the environment variables you configured in the project settings to authenticate.
There are other registries besides Docker Hub that you can use to store container images and collaborate on them with others. For instance, DigitalOcean has its own container registry to which you can push container images.
To push an image to the DigitalOcean Container Registry, you could replace the second run
step with one like the following:
. . .
- run: |
docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD registry.digitalocean.com/your_registry
docker push registry.digitalocean.com/your_registry/do-kubernetes-sample-app
. . .
This assumes you’ve securely set $DOCKER_USERNAME
and $DOCKER_PASSWORD
as environment variables previously in the file, as this will avoid exposing them in the CircleCI job output. Also, note that both of these values should be set to the same DigitalOcean API token.
To learn more about the DigitalOcean container registry, check out our product documentation.
Commit the new file to your repository and push the changes upstream:
- cd ~/do-sample-app/
- git add .circleci/
- git commit -m "add CircleCI config"
- git push
This will trigger a new build on CircleCI. The CircleCI workflow is going to correctly build and push your image to Docker Hub.
Now that you have created and tested your CircleCI workflow, you can set your DOKS cluster to retrieve the up-to-date image from Docker Hub and deploy it automatically when changes are made.
Now that your application image is being built and sent to Docker Hub every time you push changes to the master
branch on GitHub, it’s time to update your deployment on your Kubernetes cluster so that it retrieves the new image and uses it as a base for deployment.
To do that, first fix one issue with your deployment: it’s currently depending on an image with the latest
tag. This tag does not tell you which version of the image you are using. You cannot reliably pin your deployment to that tag because it is overwritten every time you push a new image to Docker Hub, and by using it you lose the reproducibility of containerized applications.
You can read more about that on Vladislav Supalov’s article about why depending on the latest
tag is an anti-pattern.
To correct this, you first must make some changes to your Push Docker Image
build step in the ~/do-sample-app/.circleci/config.yml
file. Open up the file:
- nano ~/do-sample-app/.circleci/config.yml
Then add the highlighted lines to your Push Docker Image
step:
...
- run:
name: Push Docker Image
command: |
echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
docker tag $IMAGE_NAME:latest $IMAGE_NAME:$CIRCLE_SHA1
docker push $IMAGE_NAME:latest
docker push $IMAGE_NAME:$CIRCLE_SHA1
...
CircleCI has some special environment variables set by default. One of them is CIRCLE_SHA1
, which contains the hash of the commit it’s building. The changes you made to ~/do-sample-app/.circleci/config.yml
will use this environment variable to tag your image with the commit it was built from, always tagging the most recent build with the latest tag. That way, you always have specific images available, without overwriting them when you push something new to your repository.
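To make the effect concrete, here is a small sketch with a made-up commit hash, showing the two image references each build now pushes: the moving latest tag and the immutable commit-specific one:

```shell
# Made-up values for illustration; CircleCI sets CIRCLE_SHA1 automatically
IMAGE_NAME=dockerhub-username/do-kubernetes-sample-app
CIRCLE_SHA1=9f8e7d6c5b4a39281706f5e4d3c2b1a098765432

echo "$IMAGE_NAME:latest"
echo "$IMAGE_NAME:$CIRCLE_SHA1"
```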
Save and exit the file.
Next, change your deployment manifest file to use that commit-specific tag. This would be a small change if inside ~/do-sample-app/kube/do-sample-deployment.yml
you could set your image as dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1
, but kubectl
doesn’t do variable substitution inside the manifests when you use kubectl apply
. To account for this, you can use envsubst
. envsubst
is a CLI tool that is part of the GNU gettext project. It allows you to pass some text to it, and if it finds any variable inside the text that has a matching environment variable, it replaces the variable with the respective value. The resulting text is then returned as output.
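To see the substitution behavior in isolation, here is a minimal sketch. It approximates envsubst with sed so it runs even where gettext is not installed; with gettext available, envsubst < /tmp/manifest.yml would produce the same result for this input:

```shell
# Export a variable, write a manifest containing the literal token
# $COMMIT_SHA1, then substitute the token with the variable's value
# (which is what envsubst would do).
export COMMIT_SHA1=abc1234
printf 'image: dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1\n' > /tmp/manifest.yml
sed "s/\$COMMIT_SHA1/$COMMIT_SHA1/" /tmp/manifest.yml
```

This prints `image: dockerhub-username/do-kubernetes-sample-app:abc1234`.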
To use this, you will create a bash script that will be responsible for your deployment. Make a new folder called scripts
inside ~/do-sample-app/
:
- mkdir ~/do-sample-app/scripts/
Inside that folder create a new bash script called ci-deploy.sh
and open it with your favorite text editor:
- nano ~/do-sample-app/scripts/ci-deploy.sh
Inside it write the following bash script:
#!/bin/bash
# exit script when any command ran here returns with non-zero exit code
set -e
COMMIT_SHA1=$CIRCLE_SHA1
# Export it so it's available for envsubst
export COMMIT_SHA1=$COMMIT_SHA1
# Since the only way for envsubst to work on files is using input/output redirection,
# it's not possible to do in-place substitution, so you will save the output to another file
# and overwrite the original with that one.
envsubst <./kube/do-sample-deployment.yml >./kube/do-sample-deployment.yml.out
mv ./kube/do-sample-deployment.yml.out ./kube/do-sample-deployment.yml
echo "$KUBERNETES_CLUSTER_CERTIFICATE" | base64 --decode > cert.crt
./kubectl \
--kubeconfig=/dev/null \
--server=$KUBERNETES_SERVER \
--certificate-authority=cert.crt \
--token=$KUBERNETES_TOKEN \
apply -f ./kube/
Let’s go through this script, using the comments in the file. First, there is the following:
set -e
This line makes sure any failed command stops the execution of the bash script. That way if one command fails, the next ones are not executed.
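You can see this behavior in a one-liner; the echo after the failing false command never runs:

```shell
# With set -e the shell exits at the first failing command, so only
# "before" is printed; `|| true` keeps the outer shell's exit status clean.
bash -c 'set -e; echo before; false; echo after' || true
```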
COMMIT_SHA1=$CIRCLE_SHA1
export COMMIT_SHA1=$COMMIT_SHA1
These lines export the CircleCI $CIRCLE_SHA1
environment variable with a new name. If you had just declared the variable without exporting it using export
, it would not be visible for the envsubst
command.
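A quick demonstration of the difference, using a throwaway variable name:

```shell
# A plain assignment stays local to the current shell; child processes
# (like envsubst, or the bash -c calls below) cannot see it until it
# is exported.
unset MY_VAR
MY_VAR=abc1234
bash -c 'echo "${MY_VAR:-unset}"'   # prints "unset"

export MY_VAR
bash -c 'echo "${MY_VAR:-unset}"'   # prints "abc1234"
```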
envsubst <./kube/do-sample-deployment.yml >./kube/do-sample-deployment.yml.out
mv ./kube/do-sample-deployment.yml.out ./kube/do-sample-deployment.yml
envsubst
cannot do in-place substitution. That is, it cannot read the content of a file, replace the variables with their respective values, and write the output back to the same file. Therefore, you will redirect the output to another file and then overwrite the original file with the new one.
echo "$KUBERNETES_CLUSTER_CERTIFICATE" | base64 --decode > cert.crt
The environment variable $KUBERNETES_CLUSTER_CERTIFICATE
you created earlier on CircleCI’s project settings is in reality a Base64 encoded string. To use it with kubectl
you must decode its contents and save it to a file. In this case you are saving it to a file named cert.crt
inside the current working directory.
./kubectl \
--kubeconfig=/dev/null \
--server=$KUBERNETES_SERVER \
--certificate-authority=cert.crt \
--token=$KUBERNETES_TOKEN \
apply -f ./kube/
Finally, you are running kubectl
. The command has similar arguments to the one you ran when you were testing your Service Account. You are calling apply -f ./kube/
, since on CircleCI the current working directory is the root folder of your project. ./kube/
here is your ~/do-sample-app/kube
folder.
Save the file and make sure it’s executable:
- chmod +x ~/do-sample-app/scripts/ci-deploy.sh
Now, edit ~/do-sample-app/kube/do-sample-deployment.yml
:
- nano ~/do-sample-app/kube/do-sample-deployment.yml
Change the tag of the container image value to look like the following one:
...
containers:
- name: do-kubernetes-sample-app
image: dockerhub-username/do-kubernetes-sample-app:$COMMIT_SHA1
ports:
- containerPort: 80
name: http
Save and close the file. You must now add some new steps to your CI configuration file to update the deployment on Kubernetes.
Open ~/do-sample-app/.circleci/config.yml
in your favorite text editor:
- nano ~/do-sample-app/.circleci/config.yml
Write the following new job, right below the build
job you created previously:
...
command: |
echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
docker tag $IMAGE_NAME:latest $IMAGE_NAME:$CIRCLE_SHA1
docker push $IMAGE_NAME:latest
docker push $IMAGE_NAME:$CIRCLE_SHA1
deploy:
docker:
- image: circleci/buildpack-deps:bullseye
working_directory: ~/app
steps:
- checkout
- run:
name: Install envsubst
command: |
sudo apt-get update && sudo apt-get -y install gettext-base
- run:
name: Install kubectl
command: |
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod u+x ./kubectl
- run:
name: Deploy Code
command: ./scripts/ci-deploy.sh
...
The first two steps of your new deploy
job are installing some dependencies, first envsubst
and then kubectl
. The Deploy Code
step is responsible for running your deploy script.
Now you will add this job to the build-deploy-master
workflow you created previously. Inside the build-deploy-master
workflow configuration, write the following new entry right after the existing entry for the build
job:
...
workflows:
version: 2
build-deploy-master:
jobs:
- build:
filters:
branches:
only: master
- deploy:
requires:
- build
filters:
branches:
only: master
This adds the deploy
job to the build-deploy-master
workflow. The deploy
job will only run for commits to master
, and only after the build
job is completed.
The contents of ~/do-sample-app/.circleci/config.yml
will now be like this:
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/buildpack-deps:bullseye
    environment:
      IMAGE_NAME: dockerhub-username/do-kubernetes-sample-app
    working_directory: ~/app
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Build Docker image
          command: |
            docker build -t $IMAGE_NAME:latest .
      - run:
          name: Push Docker Image
          command: |
            echo "$DOCKERHUB_PASS" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
            docker tag $IMAGE_NAME:latest $IMAGE_NAME:$CIRCLE_SHA1
            docker push $IMAGE_NAME:latest
            docker push $IMAGE_NAME:$CIRCLE_SHA1
  deploy:
    docker:
      - image: circleci/buildpack-deps:bullseye
    working_directory: ~/app
    steps:
      - checkout
      - run:
          name: Install envsubst
          command: |
            sudo apt-get update && sudo apt-get -y install gettext-base
      - run:
          name: Install kubectl
          command: |
            curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
            chmod u+x ./kubectl
      - run:
          name: Deploy Code
          command: ./scripts/ci-deploy.sh
workflows:
  version: 2
  build-deploy-master:
    jobs:
      - build:
          filters:
            branches:
              only: master
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
You can now save and exit the file.
To make sure the changes are really going to be reflected on your Kubernetes deployment, edit your index.html. Change the HTML to something else, like:
<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
Automatic Deployment is Working!
</body>
Once you have saved the above change, commit all the modified files to the repository, and push the changes upstream:
- cd ~/do-sample-app/
- git add --all
- git commit -m "add deploy script and add new steps to circleci config"
- git push
You will see the new build running on CircleCI, and successfully deploying the changes to your Kubernetes cluster.
Wait for the build to finish, then run the same command you ran previously:
- kubectl port-forward $(kubectl get pod --selector="app=do-kubernetes-sample-app" --output jsonpath='{.items[0].metadata.name}') 8080:80
Make sure everything is working by opening localhost:8080 in your browser or by making a curl request to it. It should show the updated HTML:
- curl localhost:8080
You will receive the following output:
Output
<!DOCTYPE html>
<title>DigitalOcean</title>
<body>
Automatic Deployment is Working!
</body>
This means that you have successfully set up automated deployment with CircleCI.
This was a basic tutorial on how to do deployments to DigitalOcean Kubernetes using CircleCI. From here, you can improve your pipeline in many ways. The first thing you can do is create a single build job for multiple deployments, each one deploying to different Kubernetes clusters or different namespaces. This can be useful when you have different Git branches for development/staging/production environments, ensuring that the deployments are always separated.
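As a sketch of that idea, the workflows section could look something like the following. The branch names and the deploy-staging/deploy-production job names are illustrative assumptions; they are not defined anywhere in this tutorial:

```yaml
# Hypothetical workflow: one build job feeding two deploy jobs, each
# restricted to its own branch. deploy-staging and deploy-production
# would be jobs you define yourself, each pointing kubectl at a
# different cluster or namespace.
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy-staging:
          requires:
            - build
          filters:
            branches:
              only: develop
      - deploy-production:
          requires:
            - build
          filters:
            branches:
              only: master
```

Because each deploy job carries its own branch filter, a push to develop could only ever reach the staging cluster, keeping the environments separated.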
You could also build your own image to be used on CircleCI, instead of using buildpack-deps. This image could be based on it, but could already have the kubectl and envsubst dependencies installed.
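For example, such an image could be described with a Dockerfile like this minimal sketch. The kubectl installation mirrors the deploy job above; the base image tag and the choice of the latest stable kubectl are assumptions you would pin down for a real pipeline:

```dockerfile
# Hypothetical custom CI image: buildpack-deps plus the two tools the
# deploy job otherwise installs at runtime.
FROM buildpack-deps:bullseye

# envsubst ships in the gettext-base package
RUN apt-get update \
    && apt-get install -y gettext-base \
    && rm -rf /var/lib/apt/lists/*

# Install kubectl (latest stable; pin a version for reproducible builds)
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl \
    && chmod +x ./kubectl \
    && mv ./kubectl /usr/local/bin/kubectl
```

Using an image like this would remove the Install envsubst and Install kubectl steps from every run of the deploy job, shortening your builds.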
If you would like to learn more about CI/CD on Kubernetes, check out the tutorials for our CI/CD on Kubernetes Webinar Series, or for more information about apps on Kubernetes, see Modernizing Applications for Kubernetes.
Git is an open-source distributed version control system that makes collaborative software projects more manageable. Many projects maintain their files in a Git repository, and platforms like GitHub have made sharing and contributing to code accessible, valuable, and effective.
Open-source projects that are hosted in public repositories benefit from contributions made by the broader developer community through pull requests, which request that a project accept changes you have made to its code repository.
This tutorial will guide you through making a pull request to a Git repository through the command line so that you can contribute to open-source software projects.
You should have Git installed on your local machine. You can check if Git is installed on your computer and go through the installation process for your operating system by following this guide.
You’ll also need to have or create a GitHub account. You can do so through the GitHub website, github.com, and can either log in or create your account.
As of November 2020, GitHub removed password-based authentication. For this reason, you will need to create a personal access token or add your SSH public key information in order to access GitHub repositories through the command line.
Finally, you should identify an open-source software project to contribute to. You can become more familiar with open-source projects by reading through this introduction.
A repository, or repo for short, is essentially the main folder of a project. The repository contains all the relevant project files, including documentation, and also stores the revision history for each file. On GitHub, repositories can have multiple collaborators and can either be public or private.
In order to work on an open-source project, you will first need to make your own copy of the repository. To do this, you should fork the repository and then clone it so that you have a local working copy.
You can fork a repository on GitHub by navigating with your browser to the GitHub URL of the open-source project you would like to contribute to.
GitHub repository URLs reference both the username associated with the owner of the repository and the repository name. For example, DigitalOcean Community (username: do-community) is the owner of the cloud_haiku project repository, so the GitHub URL for that project is:
https://github.com/do-community/cloud_haiku
In the above example, do-community is the username and cloud_haiku is the repository name.
Once you have identified the project you would like to contribute to, you can navigate to the URL, which will be formatted like so:
https://github.com/username/repository
Or, you can search for the project using the GitHub search bar.
When you’re on the main page for the repository, a Fork button will be displayed on the upper right-hand side of the page, underneath your user icon:
Click on the Fork button to start the forking process. Within your browser window, you’ll receive a notification that the repository you’re forking is being processed:
Once the process is done, your browser will go to a screen similar to the previous repository screen, except that at the top you will see your username before the repository name, and in the URL it will also say your username before the repository name.
So, in the example above, instead of do-community / cloud_haiku at the top of the page, you’ll see your-username / cloud_haiku, and the new URL will read similar to this:
https://github.com/your-username/cloud_haiku
With the repository forked, you’re ready to clone it so that you have a local working copy of the code base.
To make your own local copy of the repository you would like to contribute to, let’s first open up a terminal window.
We’ll use the git clone command along with the URL that points to your fork of the repository.
This URL will be similar to the URL above, except now it will end with .git. In the cloud_haiku example above, the URL will read like this, with your actual username replacing your-username:
https://github.com/your-username/cloud_haiku.git
You can alternatively copy the URL by using the green “⤓ Code” button from your repository page that you forked from the original repository page. Once you click the button, you’ll be able to copy the URL by clicking the clipboard button next to the URL:
Once we have the URL, we’re ready to clone the repository. To do this, we’ll combine the git clone command with the repository URL from the command line in a terminal window:
git clone https://github.com/your-username/repository.git
Now that we have a local copy of the code, we can move on to creating a new branch on which to work with the code.
Whenever you work on a collaborative project, you and other programmers contributing to the repository will have different ideas for new features or fixes at once. Some of these new features will not take significant time to implement, but some of them will be ongoing. Because of this, it is important to branch the repository so that you are able to manage the workflow, isolate your code, and control what features make it back to the main branch of the project repository.
The primary branch of a project repository is usually called the main branch. A recommended practice is to consider anything on the main branch as being deployable for others to use at any time.
Note: In June 2020, GitHub updated its terminology to refer to default source code branches as the main branch, instead of the master branch. If your default branch still appears as master, you can update it to main by changing the default branch settings.
When creating a branch based on the existing project, it is very important that you create your new branch off of the main branch. You should also make sure that your branch name is a descriptive one. Rather than calling it my-branch, you should go with something like frontend-hook-migration or fix-documentation-typos instead.
To create the branch from our terminal window, let’s change our directory so that we are working in the directory of the repository. Be sure to use the actual name of the repository (such as cloud_haiku) to change into that directory.
cd repository
Now, we’ll create our new branch with the git branch command. Make sure you name it descriptively so that others working on the project understand what you are working on.
git branch new-branch
Now that our new branch is created, we can switch to it to make sure that we are working on that branch by using the git checkout command:
git checkout new-branch
Once you enter the git checkout command, you will receive the following output:
Output
Switched to branch 'new-branch'
Alternatively, you can condense the above two commands, creating and switching to a new branch, with the following command and its -b flag:
git checkout -b new-branch
If you want to switch back to main, you’ll use the checkout command with the name of the main branch:
git checkout main
The checkout command will allow you to switch between multiple branches, so you can potentially work on multiple features at once.
At this point, you can now modify existing files or add new files to the project on your own branch.
To demonstrate making a pull request, let’s use the example cloud_haiku repo and create a new file in our local copy. Use your preferred text editor to create a new file so that we can add a new haiku poem as explained in the contributing guidelines. For example, we can use nano and call our example file filename.md. You’ll need to give your file an original name with the .md extension for Markdown.
nano filename.md
Next, we’ll add some text to the new file, following the contributing guidelines. We’ll need to use the Jekyll format and add a haiku with line breaks. The following file is an example file, as you’ll need to contribute an original haiku.
---
layout: haiku
title: Octopus Cloud
author: Sammy
---
Distributed cloud <br>
Like the octopuses' minds <br>
Across the network <br>
Once you’ve included your text, save and close the file. If you used nano, do so by pressing CTRL + X, then Y, and then ENTER.
Once you have modified an existing file or added a new file to the project of your choice, you can stage it to your local repository with the git add command. In our example of filename.md, we will type the following command:
git add filename.md
We passed the name of the file we created to this command to stage it to our local repository. This ensures your file is ready to be added.
If you are looking to add all the files you have modified in a particular directory, you can stage them all with the following command:
git add .
Here, the full stop or period will add all relevant files.
If you are looking to recursively add all changes, including those in subdirectories, you can type:
git add -A
Or, alternatively, you can type git add --all, which is equivalent.
With our file staged, we’ll want to record the changes that we made to the repository with the git commit command.
The commit message is an important aspect of your code contribution; it helps the maintainers and other contributors to fully understand the change you have made, why you made it, and how significant it is. Additionally, commit messages provide a historical record of the changes for the project at large, helping future contributors along the way.
If we have a very short message, we can record it with the -m flag and the message in quotes. In our example of adding a haiku, our git commit may be similar to the following:
git commit -m "Added a new haiku in filename.md file"
Unless it is a minor or expected change, we may want to include a lengthier commit message so that our collaborators are fully up to speed with our contribution. To record this larger message, we will run the git commit command, which will open the default text editor:
git commit
When running this command, you may notice that you are in the vim editor, which you can quit by typing :q. If you would like to configure your default text editor, you can do so with the git config command, and set nano as the default editor, for example:
git config --global core.editor "nano"
Or vim:
git config --global core.editor "vim"
After running the git commit command, depending on the default text editor you’re using, your terminal window should display a document ready for you to edit that will be similar to this:
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch new-branch
# Your branch is up-to-date with 'origin/new-branch'.
#
# Changes to be committed:
#       new file:   filename.md
#
Underneath the introductory comments, you should add the commit message to the text file.
To write a useful commit message, you should include a summary on the first line that is around 50 characters long. Under this, and broken up into digestible sections, you should include a description that states the reason you made this change, how the code works, and additional information that will contextualize and clarify it for others to review the work when merging it. Try to be as helpful and proactive as possible to ensure that those maintaining the project are able to fully understand your contribution.
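For instance, a commit message for the haiku example following these guidelines might read like this (the wording is illustrative only):

```
Add "Octopus Cloud" haiku in filename.md

Adds a new haiku following the Jekyll front matter format described
in the contributing guidelines. The poem uses explicit <br> tags so
that the line breaks render correctly on the site.
```

The first line stays within the 50-character guideline, and the body explains what changed and why, giving maintainers the context they need to review the contribution.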
Once you have saved and exited the commit message text file, you can verify what Git will be committing with the following command:
git status
Depending on the changes that you have made, you will receive output that resembles this:
Output
On branch new-branch
nothing to commit, working tree clean
At this point you can use the git push command to push the changes to the current branch of your forked repository:
git push --set-upstream origin new-branch
The command will provide you with output to let you know of the progress, and it will be similar to the following:
Output
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 336 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://github.com/your-username/repository.git
a1f29a6..79c0e80 new-branch -> new-branch
Branch new-branch set up to track remote branch new-branch from origin.
You can now navigate to the forked repository on your GitHub webpage and toggle to the branch you pushed to see the changes you have made in-browser.
At this point, it is possible to make a pull request to the original repository, but if you have not already done so, you’ll want to make sure that your local repository is up-to-date with the upstream repository.
While you are working on a project alongside other contributors, it is important for you to keep your local repository up-to-date with the project as you don’t want to make a pull request for code that will automatically cause conflicts (though in collaborative code projects, conflicts are bound to occur). To keep your local copy of the code base updated, you’ll need to sync changes.
We’ll first go over configuring a remote for the fork, then syncing the fork.
Remote repositories make it possible for you to collaborate with others on a Git project. Each remote repository is a version of the project that is hosted on the internet or a network you have access to. Each remote repository should be accessible to you as either read-only or read-write, depending on your user privileges.
In order to be able to sync changes you make in a fork with the original repository you’re working with, you need to configure a remote that references the upstream repository. You should set up the remote to the upstream repository only once.
Let’s first check which remote servers you have configured. The git remote command will list whatever remote repository you have already specified, so if you cloned your repository as we did above, you’ll at least receive output regarding the origin repository, which is the default name given by Git for the cloned directory.
From the directory of the repository in our terminal window, let’s use the git remote command along with the -v flag to display the URLs that Git has stored along with the relevant remote shortnames (as in “origin”):
git remote -v
Since we cloned a repository, our output should be similar to this:
Output
origin https://github.com/your-username/forked-repository.git (fetch)
origin https://github.com/your-username/forked-repository.git (push)
If you have previously set up more than one remote, the git remote -v command will provide a list of all of them.
Next, we’ll specify a new remote upstream repository for us to sync with the fork. This will be the original repository that we forked from. We’ll do this with the git remote add command.
git remote add upstream https://github.com/original-owner-username/original-repository.git
For our cloud_haiku example, this command would be the following:
git remote add upstream https://github.com/do-community/cloud_haiku.git
In this example, upstream is the shortname we have supplied for the remote repository, since in terms of Git, “upstream” refers to the repository that we cloned from. If we want to add a remote pointer to the repository of a collaborator, we may want to provide that collaborator’s username or a shortened nickname for the shortname.
We can verify that our remote pointer to the upstream repository was properly added by using the git remote -v command again from the repository directory:
git remote -v
Output
origin https://github.com/your-username/forked-repository.git (fetch)
origin https://github.com/your-username/forked-repository.git (push)
upstream https://github.com/original-owner-username/original-repository.git (fetch)
upstream https://github.com/original-owner-username/original-repository.git (push)
Now you can refer to upstream on the command line instead of writing out the entire URL, and you are ready to sync your fork with the original repository.
Once we have configured a remote that references the upstream and original repository on GitHub, we are ready to sync our fork of the repository to keep it up-to-date.
To sync our fork, from the directory of our local repository in a terminal window, we’ll use the git fetch command to fetch the branches along with their respective commits from the upstream repository. Since we used the shortname “upstream” to refer to the upstream repository, we’ll pass that to the command:
git fetch upstream
Depending on how many changes have been made since we forked the repository, your output may be different, and may include a few lines on counting, compressing, and unpacking objects. Your output will end similarly to the following lines, but may vary depending on how many branches are part of the project:
Output
From https://github.com/original-owner-username/original-repository
* [new branch] main -> upstream/main
Now, commits to the main branch will be stored in a local branch called upstream/main.
Let’s switch to the local main branch of our repository:
git checkout main
Output
Switched to branch 'main'
We’ll now merge any changes that were made in the original repository’s main branch, which we access through our local upstream/main branch, with our local main branch:
git merge upstream/main
The output here will vary, but it will begin with Updating if changes have been made, or Already up-to-date if no changes have been made since you forked the repository.
Your fork’s main branch is now in sync with the upstream repository, and any local changes you made were not lost.
Depending on your own workflow and the amount of time you spend on making changes, you can sync your fork with the upstream code of the original repository as many times as it makes sense for you. But you should certainly sync your fork right before making a pull request to make sure you don’t contribute conflicting code automatically.
At this point, you are ready to make a pull request to the original repository.
You should navigate to your forked repository, and press the New pull request button on the left-hand side of the page.
You can modify the branch on the next screen. On either side you can select the appropriate repository from the drop-down menu and the appropriate branch.
Once you have chosen, for example, the main branch of the original repository on the left-hand side, and the new-branch of your forked repository of the right-hand side, you should receive a screen that states your branches can be merged if there is no competing code:
You should add a title and a comment to the appropriate fields and then press the Create pull request button.
At this point, the maintainers of the original repository will decide whether or not to accept your pull request. They may ask for you to edit or revise your code prior to accepting the pull request through submitting a code review.
At this point, you have successfully sent a pull request to an open-source software repository. Following this, you should make sure to update and rebase your code while you are waiting to have it reviewed. Project maintainers may ask for you to rework your code, so you should be prepared to do so.
Contributing to open-source projects — and becoming an active open-source developer — can be a rewarding experience. Making regular contributions to software you frequently use allows you to make sure that that software is as valuable to other end users as it can be.
If you’re interested in learning more about Git and collaborating on open-source software, you can read our tutorial series entitled An Introduction to Open Source. If you’re already familiar with Git, and would like a cheat sheet, you can refer to “How To Use Git: A Reference Guide.”
Version control has become a central requirement for modern software development. It allows projects to safely track changes and enable reversions, integrity checking, and collaboration among other benefits.
Through the use of a “hooks” system, git allows developers and administrators to extend functionality by specifying scripts that git will call based on different events and actions.
In this guide, you will explore the idea of git hooks and demonstrate how to implement code that can assist you in automating tasks in your own unique environment. You will be using an Ubuntu 20.04 server in this guide, but any system that can run git should work in a similar way.
Before you get started, you must have git installed on your server. If you are following along on Ubuntu 20.04, you can check out our guide on how to install git on Ubuntu 20.04.
You should be familiar with how to use git in a general sense. If you need an introduction, the series that the installation is a part of, called Introduction to Git: Installation, Usage, and Branches, is a good place to start.
Note: If you already feel comfortable with git and git hook concepts, and want to dive into practical examples, you can skip ahead to “Setting Up a Repository”.
When you are finished with the above requirements, continue on.
Git hooks are a rather simple concept that was implemented to address a need. When developing software on a shared project, maintaining style guide standards, or deploying software, there are often repetitive tasks that you will want to do each time an action is taken.
Git hooks are event-based. When you run certain git commands, the software will check the hooks directory within the git repository to see if there is an associated script to run.
Some scripts run prior to an action taking place, which can be used to ensure code compliance to standards, for sanity checking, or to set up an environment. Other scripts run after an event in order to deploy code, re-establish correct permissions (something git cannot track very well), and so forth.
Using these abilities, it is possible to enforce policies, ensure consistency, control your environment, and even handle deployment tasks.
The book Pro Git by Scott Chacon divides the different types of hooks into categories, broadly separating client-side hooks (run by local operations such as committing and merging) from server-side hooks (run on the remote repository in response to pushed commits).
These categorizations are helpful for getting a general idea of the events that you can optionally set up a hook for. But to actually understand how these items work, it is best to experiment and to find out what solutions you are trying to implement.
Certain hooks also take parameters. This means that when git calls the script for the hook, it will pass in some relevant data that the script can then use to complete tasks. In full, the hooks that are available are:
Hook Name | Invoked By | Description | Parameters (Number and Description)
---|---|---|---
applypatch-msg | git am | Can edit the commit message file and is often used to verify or actively format a patch’s message to a project’s standards. A non-zero exit status aborts the commit. | (1) name of the file containing the proposed commit message
pre-applypatch | git am | This is actually called after the patch is applied, but before the changes are committed. Exiting with a non-zero status will leave the changes in an uncommitted state. Can be used to check the state of the tree before actually committing the changes. | (none)
post-applypatch | git am | This hook is run after the patch is applied and committed. Because of this, it cannot abort the process, and is mainly used for creating notifications. | (none)
pre-commit | git commit | This hook is called before obtaining the proposed commit message. Exiting with anything other than zero will abort the commit. It is used to check the commit itself (rather than the message). | (none)
prepare-commit-msg | git commit | Called after receiving the default commit message, just prior to firing up the commit message editor. A non-zero exit aborts the commit. This is used to edit the message in a way that cannot be suppressed. | (1 to 3) Name of the file with the commit message, the source of the commit message (message, template, merge, squash, or commit), and the commit SHA-1 (when operating on an existing commit).
commit-msg | git commit | Can be used to adjust the message after it has been edited in order to ensure conformity to a standard or to reject based on any criteria. It can abort the commit if it exits with a non-zero value. | (1) The file that holds the proposed message.
post-commit | git commit | Called after the actual commit is made. Because of this, it cannot disrupt the commit. It is mainly used to allow notifications. | (none)
pre-rebase | git rebase | Called when rebasing a branch. Mainly used to halt the rebase if it is not desirable. | (1 or 2) The upstream from where it was forked, the branch being rebased (not set when rebasing the current branch)
post-checkout | git checkout and git clone | Run when a checkout is called after updating the worktree or after git clone. It is mainly used to verify conditions, display differences, and configure the environment if necessary. | (3) Ref of the previous HEAD, ref of the new HEAD, flag indicating whether it was a branch checkout (1) or a file checkout (0)
post-merge | git merge or git pull | Called after a merge. Because of this, it cannot abort a merge. Can be used to save or apply permissions or other kinds of data that git does not handle. | (1) Flag indicating whether the merge was a squash.
pre-push | git push | Called prior to a push to a remote. In addition to the parameters, additional information, separated by a space, is passed in through stdin in the form of “<local ref> <local sha1> <remote ref> <remote sha1>”. Parsing the input can get you additional information that you can use to check. For instance, if the local sha1 is 40 zeros long, the push is a delete and if the remote sha1 is 40 zeros, it is a new branch. This can be used to do many comparisons of the pushed ref to what is currently there. A non-zero exit status aborts the push. | (2) Name of the destination remote, location of the destination remote
pre-receive | git-receive-pack on the remote repo | This is called on the remote repo just before updating the pushed refs. A non-zero status will abort the process. Although it receives no parameters, it is passed a string through stdin in the form of “<old-value> <new-value> <ref-name>” for each ref. | (none)
update | git-receive-pack on the remote repo | This is run on the remote repo once for each ref being pushed instead of once for each push. A non-zero status will abort the process. This can be used to make sure all commits are only fast-forward, for instance. | (3) The name of the ref being updated, the old object name, the new object name
post-receive | git-receive-pack on the remote repo | This is run on the remote when pushing after all refs have been updated. It does not take parameters, but receives info through stdin in the form of “<old-value> <new-value> <ref-name>”. Because it is called after the updates, it cannot abort the process. | (none)
post-update | git-receive-pack on the remote repo | This is run only once after all of the refs have been pushed. It is similar to the post-receive hook in that regard, but does not receive the old or new values. It is used mostly to implement notifications for the pushed refs. | (?) A parameter for each of the pushed refs containing its name
pre-auto-gc | git gc --auto | Is used to do some checks before automatically cleaning repos. | (none)
post-rewrite | git commit --amend, git-rebase | This is called when git commands are rewriting already committed data. In addition to the parameters, it receives strings through stdin in the form of “<old-sha1> <new-sha1>”. | (1) Name of the command that invoked it (amend or rebase)
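As a concrete illustration of the table above, the following is a minimal sketch of a commit-msg hook (a hypothetical example, not one of the scripts built later in this guide). git invokes the hook with the path to the proposed message file as its first argument, and a non-zero exit aborts the commit:

```shell
# Hypothetical commit-msg hook: save as .git/hooks/commit-msg and make
# it executable with chmod +x. check_summary fails when the first line
# of the commit message file exceeds 50 characters.
check_summary() {
    first_line=$(head -n 1 "$1")
    if [ "${#first_line}" -gt 50 ]; then
        echo "Aborting commit: summary line exceeds 50 characters." >&2
        return 1
    fi
    return 0
}

# In the real hook, git supplies the message file path as $1:
# check_summary "$1" || exit 1
```

Because commit-msg runs before the commit is recorded, a failing check here stops the commit entirely, which is exactly the enforcement behavior the table describes.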
Before you can begin your script, you’ll need to learn a bit about what environmental variables git sets when calling hooks. To get your script to function, you will eventually need to unset an environmental variable that git sets when calling the post-commit hook.
This is a very important point to internalize if you hope to write git hooks that function in a reliable way. Git sets different environmental variables depending on which hook is being called. This means that the environment that git is pulling information from will be different depending on the hook.
The first issue with this is that it can make your scripting environment very unpredictable if you are not aware of what variables are being set automatically. The second issue is that the variables that are set are almost completely absent in git’s own documentation.
Fortunately, Mark Longair developed a method for testing each of the variables that git sets when running these hooks. It involves putting the following contents in various git hook scripts:
#!/bin/bash
echo Running $BASH_SOURCE
set | egrep GIT
echo PWD is $PWD
The information on his site is from 2011 working with git version 1.7.1, so there have been a few changes. This guide is using Ubuntu 20.04 with git 2.25.1.
The results of the tests on this version of git are below (including the working directory as seen by git when running each hook). The local working directory for the test was /home/sammy/test_hooks and the bare remote (where necessary) was /home/sammy/origin/test_hooks.git:
applypatch-msg
, pre-applypatch
, post-applypatch
GIT_AUTHOR_DATE='Mon, 11 Aug 2014 11:25:16 -0400'
GIT_AUTHOR_EMAIL=sammy@example.com
GIT_AUTHOR_NAME='Sammy User'
GIT_INTERNAL_GETTEXT_SH_SCHEME=gnu
GIT_REFLOG_ACTION=am
/home/sammy/test_hooks
pre-commit
, prepare-commit-msg
, commit-msg
, post-commit
GIT_AUTHOR_DATE='@1407774159 -0400'
GIT_AUTHOR_EMAIL=sammy@example.com
GIT_AUTHOR_NAME='Sammy User'
GIT_DIR=.git
GIT_EDITOR=:
GIT_INDEX_FILE=.git/index
GIT_PREFIX=
/home/sammy/test_hooks
pre-rebase
GIT_INTERNAL_GETTEXT_SH_SCHEME=gnu
GIT_REFLOG_ACTION=rebase
/home/sammy/test_hooks
post-checkout
GIT_DIR=.git
GIT_PREFIX=
/home/sammy/test_hooks
post-merge
GITHEAD_4b407c...
GIT_DIR=.git
GIT_INTERNAL_GETTEXT_SH_SCHEME=gnu
GIT_PREFIX=
GIT_REFLOG_ACTION='pull other master'
/home/sammy/test_hooks
pre-push
GIT_PREFIX=
/home/sammy/test_hooks
pre-receive
, update
, post-receive
, post-update
GIT_DIR=.
/home/sammy/origin/test_hooks.git
pre-auto-gc
post-rewrite
GIT_AUTHOR_DATE='@1407773551 -0400'
GIT_AUTHOR_EMAIL=sammy@example.com
GIT_AUTHOR_NAME='Sammy User'
GIT_DIR=.git
GIT_PREFIX=
/home/sammy/test_hooks
These variables have implications for how git sees its environment. You will use the above information about variables to ensure that your script takes its environment into account correctly.
Now that you have all of this general information, you can see how to implement hooks in a few scenarios.
To get started, you’ll create a new, empty repository in your home directory. You can call this proj
.
- mkdir ~/proj
- cd ~/proj
- git init
OutputInitialized empty Git repository in /home/sammy/proj/.git/
For the rest of this guide, replace sammy with your username as needed.
Now, you are in the empty working directory of a git-controlled project. Before you do anything else, jump into the repository itself, which is stored in the hidden directory called .git
within this directory:
- cd .git
- ls -F
Outputbranches/ config description HEAD hooks/ info/ objects/ refs/
You’ll see a number of files and directories. The one you’re interested in is the hooks
directory:
- cd hooks
- ls -l
Outputtotal 40
-rwxrwxr-x 1 sammy sammy 452 Aug 8 16:50 applypatch-msg.sample
-rwxrwxr-x 1 sammy sammy 896 Aug 8 16:50 commit-msg.sample
-rwxrwxr-x 1 sammy sammy 189 Aug 8 16:50 post-update.sample
-rwxrwxr-x 1 sammy sammy 398 Aug 8 16:50 pre-applypatch.sample
-rwxrwxr-x 1 sammy sammy 1642 Aug 8 16:50 pre-commit.sample
-rwxrwxr-x 1 sammy sammy 1239 Aug 8 16:50 prepare-commit-msg.sample
-rwxrwxr-x 1 sammy sammy 1352 Aug 8 16:50 pre-push.sample
-rwxrwxr-x 1 sammy sammy 4898 Aug 8 16:50 pre-rebase.sample
-rwxrwxr-x 1 sammy sammy 3611 Aug 8 16:50 update.sample
You can see a few things here. First, each of these files is marked executable. Since these scripts are called by name, they must be executable, and their first line must be a shebang referencing the correct script interpreter. Most commonly, these are scripting languages like bash, perl, or python.
The second thing you may notice is that all of the files end in .sample
. That is because git simply looks at the filename when trying to find the hook files to execute. Deviating from the name of the script git is looking for basically disables the script. In order to enable any of the scripts in this directory, you would have to remove the .sample
suffix.
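Enabling a hook is therefore just a rename. The following sketch demonstrates this in a throwaway repository (the /tmp/hook-demo path is hypothetical; in your own project you would rename the file inside its .git/hooks directory):

```shell
# Create a disposable repository; git populates .git/hooks with samples
rm -rf /tmp/hook-demo
git init -q /tmp/hook-demo

# Removing the .sample suffix activates the hook
mv /tmp/hook-demo/.git/hooks/pre-commit.sample /tmp/hook-demo/.git/hooks/pre-commit

# The file keeps its executable bit, so git will now run it before commits
ls -l /tmp/hook-demo/.git/hooks/pre-commit
```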
Your first example will use the post-commit
hook to show you how to deploy to a local web server whenever a commit is made. This is not the hook you would use for a production environment, but it demonstrates some important, sparsely documented behaviors that you should know about when using hooks.
First, you will install the Apache web server to demonstrate:
- sudo apt-get update
- sudo apt-get install apache2
In order for your script to modify the web root at /var/www/html
(this is the document root on Ubuntu 20.04; modify as needed), you need write permission. First, give your normal user ownership of this directory by typing:
- sudo chown -R `whoami`:`id -gn` /var/www/html
Now, in your project directory, create an index.html
file:
- cd ~/proj
- nano index.html
Inside, you can add a little bit of HTML just to demonstrate the idea. It doesn’t have to be complicated:
<h1>Here is a title!</h1>
<p>Please deploy me!</p>
Add the new file to tell git to track the file:
- git add .
Now, before you commit, you are going to set up your post-commit
hook for the repository. Create this file within the .git/hooks
directory for the project:
- nano .git/hooks/post-commit
Since git hooks are standard scripts, you need to tell git to use bash by starting with a shebang:
#!/bin/bash
unset GIT_INDEX_FILE
git --work-tree=/var/www/html --git-dir=$HOME/proj/.git checkout -f
For the second line, you need to look closely at the environmental variables that are set each time the post-commit
hook is called. In particular, the GIT_INDEX_FILE
is set to .git/index
.
This path is in relation to the working directory, which in this case is /var/www/html
. Since the git index does not exist at this location, the script will fail if you leave it as-is. To avoid this situation, you can manually unset the variable.
After that, you are just going to use git itself to unpack the newest version of the repository after the commit, into your web directory. You will want to force this transaction to make sure this is successful each time.
When you are finished with these changes, save and close the file.
Because this is a regular script file, you need to make it executable:
- chmod +x .git/hooks/post-commit
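If you would like to see the work-tree checkout in isolation before relying on it in a hook, you can try it with throwaway paths. This sketch uses hypothetical /tmp locations in place of your real repository and web root:

```shell
repo=/tmp/wt-demo-repo
deploy=/tmp/wt-demo-deploy
rm -rf "$repo" "$deploy"
mkdir -p "$deploy"

# Build a tiny repository with one committed file
git init -q "$repo"
git -C "$repo" config user.email you@example.com
git -C "$repo" config user.name "Example User"
echo "hello from the repo" > "$repo/index.html"
git -C "$repo" add .
git -C "$repo" commit -qm "initial commit"

# The same style of command the post-commit hook runs:
# unpack the latest committed version into the deploy directory
git --work-tree="$deploy" --git-dir="$repo/.git" checkout -f

cat "$deploy/index.html"
```

The deploy directory now contains a copy of the committed file, which is exactly what the hook does with your web root.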
Now, you are finally ready to commit the changes you made in your git repo. Ensure that you are back in the correct directory and then commit the changes:
- cd ~/proj
- git commit -m "here we go..."
Now, if you visit your server’s domain name or IP address in your browser, you should see the index.html
file you created:
http://server_domain_or_IP
As you can see, your most recent changes have been automatically pushed to the document root of your web server upon commit. You can make some additional changes to show that it works on each commit:
- echo "<p>Here is a change.</p>" >> index.html
- git add .
- git commit -m "First change"
When you refresh your browser, you should immediately see the new changes that you applied:
As you can see, this type of setup can make things easier for testing changes locally. However, you would almost never want to publish on commit in a production environment. It is much safer to push after you’ve tested your code and are sure it is ready.
In this next example, you’ll demonstrate a better way to update a production server. You can do this by using the push-to-deploy model in order to update your web server whenever you push to a bare git repository. You can use the same server you’ve set up as your development machine.
On your production machine, you will be setting up another web server, a bare git repository that you will push changes to, and a git hook that will execute whenever a push is received.
Complete the steps below as a normal user with sudo privileges.
On the production server, start off by installing the web server:
- sudo apt-get update
- sudo apt-get install apache2
Again, you should give ownership of the document root to the user you are operating as:
- sudo chown -R `whoami`:`id -gn` /var/www/html
You need to install git on this machine as well:
- sudo apt-get install git
Now, you can create a directory within your user’s home directory to hold the repository. You can then move into that directory and initialize a bare repository. A bare repository does not have a working directory and is better for servers that you will not be working with much directly:
- mkdir ~/proj
- cd ~/proj
- git init --bare
Since this is a bare repository, there is no working directory and all of the files that are usually located in .git
are now in the main directory itself.
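As a quick illustration, initializing a bare repository in a throwaway location (the /tmp/bare-demo path is just for demonstration) shows the usual contents of .git sitting at the top level:

```shell
# A bare repository has no working directory; the repository
# metadata lives directly in the target directory
rm -rf /tmp/bare-demo
git init -q --bare /tmp/bare-demo
ls /tmp/bare-demo
```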
Next, you need to create another git hook. This time, you are interested in the post-receive
hook, which is run on the server receiving a git push
. Open this file in your editor:
- nano hooks/post-receive
Again, you need to start off by identifying the type of script you are writing. After that, you can type out the same checkout command that you used in your post-commit
file, modified to use the paths on this machine:
#!/bin/bash
while read oldrev newrev ref
do
if [[ $ref =~ .*/master$ ]];
then
echo "Master ref received. Deploying master branch to production..."
git --work-tree=/var/www/html --git-dir=$HOME/proj checkout -f
else
echo "Ref $ref successfully received. Doing nothing: only the master branch may be deployed on this server."
fi
done
Since this is a bare repository, the --git-dir
should point to the top-level directory of that repo.
The script also includes some additional logic beyond the checkout command. If you accidentally push a test_feature
branch to this server, you do not want that to be deployed. You want to make sure that you are only going to be deploying the master
branch.
First, you need to read the standard input. For each ref being pushed, the three pieces of info (old rev, new rev, ref) will be fed to the script, separated by white space, as standard input. You can read this with a while
loop to surround the git
command.
So now, you will have three variables set based on what is being pushed. For a master branch push, the ref
object will contain something that looks like refs/heads/master
. You can check to see if the ref the server is receiving has this format by using an if
construct.
Finally, add some text describing what situation was detected, and what action was taken. You should add an else
block to notify the user when a non-master branch was successfully received, even though the action won’t trigger a deployment.
When you are finished, save and close the file. But remember, you must make the script executable for the hook to work:
- chmod +x hooks/post-receive
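You can test the ref-filtering logic on its own by feeding hook-style input to a loop by hand. This sketch uses made-up SHA values, and a POSIX case statement in place of the bash regex, to show which refs would and would not trigger a deployment:

```shell
deployed=""
skipped=""

# Each line mimics what git feeds a post-receive hook on stdin:
# <old-rev> <new-rev> <ref>
while read oldrev newrev ref; do
  case $ref in
    */master) deployed="$deployed$ref" ;;
    *)        skipped="$skipped$ref" ;;
  esac
done <<'EOF'
aaa111 bbb222 refs/heads/master
ccc333 ddd444 refs/heads/test_feature
EOF

echo "would deploy: $deployed"
echo "would skip:   $skipped"
```

Only the master ref lands in the deploy list, mirroring the behavior of the hook.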
Now, you can set up access to this remote server on your client.
Back on your client (development) machine, go back into the working directory of your project:
- cd ~/proj
Inside, add the remote server as a remote called production
. The command you type should look something like this:
- git remote add production sammy@remote_server_domain_or_IP:proj
Now push your current master branch to your production server:
- git push production master
If you do not have SSH keys configured, you may have to enter the password of your production server user. You should see something that looks like this:
OutputCounting objects: 8, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 473 bytes | 0 bytes/s, done.
Total 4 (delta 0), reused 0 (delta 0)
remote: Master ref received. Deploying master branch to production...
To sammy@107.170.14.32:proj
009183f..f1b9027 master -> master
As you can see, the text from your post-receive
hook is in the output of the command. If you visit your production server’s domain name or IP address in your web browser, you should see the current version of your project:
As the output shows, the hook successfully deployed your code to production once it received the push.
Now, time to test out some new code. Back on the development machine, you will create a new branch to hold your changes. Make a new branch called test_feature
and check the new branch out by typing:
- git checkout -b test_feature
You are now working in the test_feature
branch. Try making a change that you might want to move to production. You will commit it to this branch:
- echo "<h2>New Feature Here</h2>" >> index.html
- git add .
- git commit -m "Trying out new feature"
At this point, if you go to your development machine’s IP address or domain name, you should see your changes displayed:
This is because your development machine is still being re-deployed on each commit. This workflow is great for testing out changes prior to moving them to production.
You can push your test_feature
branch to your remote production server:
- git push production test_feature
You should see the other message from your post-receive
hook in the output:
OutputCounting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 301 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
remote: Ref refs/heads/test_feature successfully received. Doing nothing: only the master branch may be deployed on this server.
To sammy@107.170.14.32:proj
83e9dc4..5617b50 test_feature -> test_feature
If you check out the production server in your browser again, you should see that nothing has changed. This is what you expect, since the change that you pushed was not in the master branch.
Now that you have tested your changes on your development machine, you are sure that you want to incorporate this feature into your master branch. You can checkout your master
branch and merge in your test_feature
branch on your development machine:
- git checkout master
- git merge test_feature
Now, you have merged the new feature into the master branch. Pushing to the production server will deploy your changes:
- git push production master
If you check out your production server’s domain name or IP address, you will see your changes:
Using this workflow, you can have a development machine that will immediately show any committed changes. The production machine will be updated whenever you push the master branch.
If you’ve followed along this far, you should be able to see the different ways that git hooks can help automate some of your tasks. They can help you deploy your code, or help you maintain quality standards by rejecting non-conformant changes or commit messages.
While the utility of git hooks is hard to argue, the actual implementation can be rather difficult to grasp and frustrating to troubleshoot. Practicing implementing various configurations, experimenting with parsing arguments and standard input, and keeping track of how git constructs the hooks’ environment will go a long way in teaching you how to write effective hooks. In the long run, the time investment is usually worth it, as it can easily save you and your team loads of manual work over the course of your project’s life.
To start using git to contribute to projects, check out How To Contribute to Open Source: Getting Started with Git. Or if you’re interested in more ways to use git, try How To Use Git: A Reference Guide.
]]>Version control has become an indispensable tool in modern software development. Version control systems allow you to keep track of your software at the source level. You can track changes, revert to previous stages, and branch off from the base code to create alternative versions of files and directories.
One of the most popular version control systems is git
. Many projects maintain their files in a Git repository, and sites like GitHub, GitLab, and Bitbucket have made sharing and contributing to code with Git easier than ever.
In this guide, we will demonstrate how to install Git on a CentOS 7 server. We will cover how to install the software in a couple of different ways, each with their own benefits, along with how to set up Git so that you can begin collaborating right away.
Before you begin with this guide, there are a few steps that need to be completed first.
You will need a CentOS 7 server installed and configured with a non-root user that has sudo
privileges. If you haven’t done this yet, you can run through steps 1–4 in the CentOS 7 initial server setup guide to create this account.
Once you have your non-root user, you can use it to SSH into your CentOS server and continue with the installation of Git.
The easiest way to install Git is from CentOS’s default software repositories. This is the fastest method, but the Git version that is installed this way may be older than the newest version available. If you need the latest release, consider compiling git
from source.
Use yum
, CentOS’s native package manager, to search for and install the latest git
package available in CentOS’s repositories:
- sudo yum install git
If the command completes without error, you will have git
downloaded and installed. To double-check that it is working correctly, try running Git’s built-in version check:
- git --version
If that check produced a Git version number, then you can now move on to setting up Git.
Now that you have git
installed, you will need to configure some information about yourself so that commit messages will be generated with the correct information attached. To do this, use the git config
command to provide the name and email address that you would like to have embedded into your commits:
- git config --global user.name "Your Name"
- git config --global user.email "you@example.com"
To confirm that these configurations were added successfully, you can list all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Your Name
user.email=you@example.com
This configuration will save you the trouble of seeing an error message and having to revise commits after you submit them.
You should now have git
installed and ready to use on your system. To learn more about how to use Git, check out these more in-depth articles:
My DO production server includes an SSL certificate. It was installed by hand long before I used GitHub. The default root project directory Apache delivers is “/var/www/html”. I discovered to my cost that you cannot clone a repo into a production server that has an established SSL cert with a wholly different root project name.
So, what should I do? Delete my existing cert and apply a new one? Or can you easily edit what you have?
Many Thanks for all your help.
]]>Jenkins is an open source automation server intended to automate repetitive technical tasks involved in the continuous integration and delivery of software. With a robust ecosystem of plugins and broad support, Jenkins can handle a diverse set of workloads to build, test, and deploy applications.
In previous guides, we installed Jenkins on an Ubuntu 20.04 server and configured Jenkins with SSL using an Nginx reverse proxy. In this guide, we will demonstrate how to set up Jenkins to automatically test an application when changes are pushed to a repository.
For this tutorial, we will be integrating Jenkins with GitHub so that Jenkins is notified when new code is pushed to the repository. When Jenkins is notified, it will checkout the code and then test it within Docker containers to isolate the test environment from the Jenkins host machine. We will be using an example Node.js application to show how to define the CI/CD process for a project.
To follow along with this guide, you will need an Ubuntu 20.04 server with at least 1G of RAM configured with a secure Jenkins installation. To properly secure the web interface, you will need to assign a domain name to the Jenkins server. Follow these guides to learn how to set up Jenkins in the expected format:
To best control our testing environment, we will run our application’s tests within Docker containers. After Jenkins is up and running, install Docker on the server by following steps one and two of this guide:
When you have completed the above guides, you can continue on with this article.
After following the prerequisites, both Jenkins and Docker are installed on your server. However, by default, the Linux user responsible for running the Jenkins process cannot access Docker.
To fix this, we need to add the jenkins
user to the docker
group using the usermod
command:
- sudo usermod -aG docker jenkins
You can list the members of the docker
group to confirm that the jenkins
user has been added successfully:
- grep docker /etc/group
Outputdocker:x:999:sammy,jenkins
In order for Jenkins to use its new group membership, you need to restart the process:
- sudo systemctl restart jenkins
If you installed Jenkins with the default plugins, you may need to check to ensure that the docker
and docker-pipeline
plugins are also enabled. To do so, click Manage Jenkins from the sidebar, and then Manage Plugins from the next menu. Click on the Available tab of the plugin menu to search for new plugins, and type docker
into the search bar. If both Docker Pipeline
and Docker plugin
are returned as options, and they are unselected, select both, and when prompted, allow Jenkins to restart with the new plugins enabled.
This should take approximately a minute and the page will refresh afterward.
In order for Jenkins to watch your GitHub projects, you will need to create a Personal Access Token in your GitHub account.
Begin by visiting GitHub and signing into your account if you haven’t already done so. Afterwards, click on your user icon in the upper-right hand corner and select Settings from the drop down menu:
On the page that follows, locate the Developer settings section of the left-hand menu and click Personal access tokens:
Click on Generate new token button on the next page:
You will be taken to a page where you can define the scope for your new token.
In the Token description box, add a description that will allow you to recognize it later:
In the Select scopes section, check the repo:status, repo:public_repo and admin:org_hook boxes. These will allow Jenkins to update commit statuses and to create webhooks for the project. If you are using a private repository, you will need to select the general repo permission instead of the repo subitems:
When you are finished, click Generate token at the bottom.
You will be redirected back to the Personal access tokens index page and your new token will be displayed:
Copy the token now so that we can reference it later. As the message indicates, there is no way to retrieve the token once you leave this page.
Note: As mentioned in the screenshot above, for security reasons, there is no way to redisplay the token once you leave this page. If you lose your token, delete the current token from your GitHub account and then create a new one.
Now that you have a personal access token for your GitHub account, we can configure Jenkins to watch your project’s repository.
Now that we have a token, we need to add it to our Jenkins server so it can automatically set up webhooks. Log into your Jenkins web interface using the administrative account you configured during installation.
Click on your username in the top-right corner to access your user settings, and from there, click Credentials in the left-hand menu:
On the next page, click the arrow next to (global) within the Jenkins scope. In the box that appears, click Add credentials:
You will be taken to a form to add new credentials.
Under the Kind drop down menu, select Secret text. In the Secret field, paste your GitHub personal access token. Fill out the Description field so that you will be able to identify this entry at a later date. You can leave the Scope as Global and the ID field blank:
Click the OK button when you are finished.
You will now be able to reference these credentials from other parts of Jenkins to aid in configuration.
Back in the main Jenkins dashboard, click Manage Jenkins in the left hand menu:
In the list of links on the following page, click Configure System:
Scroll through the options on the next page until you find the GitHub section. Click the Add GitHub Server button and then select GitHub Server:
The section will expand to prompt for some additional information. In the Credentials drop down menu, select your GitHub personal access token that you added in the last section:
Click the Test connection button. Jenkins will make a test API call to your account and verify connectivity:
When you are finished, click the Save button to implement your changes.
To demonstrate how to use Jenkins to test an application, we will be using a “hello world” program created with Hapi.js. Because we are setting up Jenkins to react to pushes to the repository, you need to have your own copy of the demonstration code.
Visit the project repository and click the Fork button in the upper-right corner to make a copy of the repository in your account:
A copy of the repository will be added to your account.
The repository contains a package.json
file that defines the runtime and development dependencies, as well as how to run the included test suite. The dependencies can be installed by running npm install
and the tests can be run using npm test
.
We’ve added a Jenkinsfile
to the repo as well. Jenkins reads this file to determine the actions to run against the repository to build, test, or deploy. It is written using the declarative version of the Jenkins Pipeline DSL.
The Jenkinsfile
included in the hello-hapi
repository looks like this:
#!/usr/bin/env groovy
pipeline {
agent {
docker {
image 'node'
args '-u root'
}
}
stages {
stage('Build') {
steps {
echo 'Building...'
sh 'npm install'
}
}
stage('Test') {
steps {
echo 'Testing...'
sh 'npm test'
}
}
}
}
The pipeline
contains the entire definition that Jenkins will evaluate. Inside, we have an agent
section that specifies where the actions in the pipeline will execute. To isolate our environments from the host system, we will be testing in Docker containers, specified by the docker
agent.
Since Hapi.js is a framework for Node.js, we will be using the node
Docker image as our base. We specify the root
user within the container so that the user can simultaneously write to both the attached volume containing the checked out code, and to the volume the script writes its output to.
Next, the file defines two stages, i.e., logical divisions of work. We’ve named the first one “Build” and the second “Test”. The build step prints a diagnostic message and then runs npm install
to obtain the required dependencies. The test step prints another message and then runs the tests as defined in the package.json
file.
Now that you have a repository with a valid Jenkinsfile
, we can set up Jenkins to watch this repository and run the file when changes are introduced.
Next, we can set up Jenkins to use the GitHub personal access token to watch our repository.
Back in the main Jenkins dashboard, click New Item in the left hand menu:
Enter a name for your new pipeline in the Enter an item name field. Afterwards, select Pipeline as the item type:
Click the OK button at the bottom to move on.
On the next screen, check the GitHub project box. In the Project url field that appears, enter your project’s GitHub repository URL.
Note: Make sure to point to your fork of the Hello Hapi application so that Jenkins will have permission to configure webhooks.
Next, in the Build Triggers section, check the GitHub hook trigger for GITScm polling box:
In the Pipeline section, we need to tell Jenkins to run the pipeline defined in the Jenkinsfile
in our repository. Change the Definition type to Pipeline script from SCM.
In the new section that appears, choose Git in the SCM menu. In the Repository URL field that appears, enter the URL to your fork of the repository again:
Note: Again, make sure to point to your fork of the Hello Hapi application.
Note: Our example references a Jenkinsfile
available within a public repository. If your project is not publicly accessible, you will need to use the add credentials button to add additional access to the repository. You can add a personal access token as we did with the hooks configuration earlier.
When you are finished, click the Save button at the bottom of the page.
Jenkins does not automatically configure webhooks when you define the pipeline for the repository in the interface. In order to trigger Jenkins to set up the appropriate hooks, we need to perform a manual build the first time.
In your pipeline’s main page, click Build Now in the left hand menu:
A new build will be scheduled. In the Build History box in the lower left corner, a new build should appear in a moment. Additionally, a Stage View will begin to be drawn in the main area of the interface. This will track the progress of your testing run as the different stages are completed:
In the Build History box, click on the number associated with the build to go to the build detail page. From here, you can click the Console Output button in the left hand menu to see details of the steps that were run:
Click the Back to Project item in the left hand menu when you are finished in order to return to the main pipeline view.
Now that we’ve built the project once, we can have Jenkins create the webhooks for our project. Click Configure in the left hand menu of the pipeline:
No changes are necessary on this screen; just click the Save button at the bottom. Now that Jenkins has information about the project from the initial build process, it will register a webhook with your GitHub project when you save the page.
You can verify this by going to your GitHub repository and clicking the Settings button. On the next page, click Webhooks from the side menu. You should see your Jenkins server webhook in the main interface:
If for any reason Jenkins failed to register the hook (for example, due to upstream API changes or outages between Jenkins and GitHub), you can quickly add one yourself by clicking Add webhook and ensuring that the Payload URL is set to https://my-jenkins-server:8080/github-webhook
and the Content type is set to application/json
, then clicking Add webhook again at the bottom of the prompt.
Now, when you push new changes to your repository, Jenkins will be notified. It will then pull the new code and retest it using the same procedure.
To approximate this, in our repository page on GitHub, you can click the Create new file button to the left of the green Clone or download button:
On the next page, choose a filename and some dummy contents:
Click the Commit new file button at the bottom when you are finished.
If you return to your Jenkins interface, you will see a new build automatically started:
You can kick off additional builds by making commits to a local copy of the repository and pushing it back up to GitHub.
In this guide, we configured Jenkins to watch a GitHub project and automatically test any new changes that are committed. Jenkins pulls code from the repository and then runs the build and testing procedures from within isolated Docker containers. The resulting code can be deployed or stored by adding additional instructions to the same Jenkinsfile
.
To learn more about GitHub Actions, refer to GitHub’s documentation.
]]>file: create.tf
#####################
terraform {
  required_providers {
    gitlab = {
      source = "gitlabhq/gitlab"
    }
  }
}

variable "gitlab_token" {
  type    = string
  default = "SECRET"
}

variable "base_url" {
  type    = string
  default = "https://gitlab.com/errooorr/api/v4/"
}

provider "gitlab" {
  token    = var.gitlab_token
  base_url = var.base_url
}
########################
and
file: terraform.tfvars
############
gitlab_token = blablabla
]]>https://github.com/TheRenegadeCoder/sample-programs/pull/2525 — the Hacktoberfest site is not recognizing this as COMPLETE.
I’ve been trying to raise this through Twitter and email, with no response.
Please help.
]]>I have these files which I need for my deployment:
deployment.yaml, ingress.yaml, service.yaml, configmap.yaml
From the CLI point of view, I have understood how to deploy with kubectl and expose services to the internet through an Ingress.
What is important for my workflow is that I want to have multiple versions (using GitLab tags) and deployments through .gitlab-ci.yml.
I have successfully connected my DigitalOcean Kubernetes cluster with my self-managed GitLab instance, which has metrics installed on it, so I can see the CPU usage and so on, and I know it’s working properly.
How would I proceed now? Does anyone have experience with this? Is it possible to just copy these files into my repo and process them through a runner, or is that not possible?
I would need some help or tips on how to proceed.
Thank you, and a nice weekend to everybody.
]]>When we create a new server, we are not able to access it.
We are not sure what to do now. I am wondering if DO took us off the network.
Thanks.
]]>I deleted the dev one and then added a managed MySQL one, but when I run php artisan migrate I get an access denied error.
Is there any guide for transferring from a dev database to a managed database? I am also not sure what to do with the environment variables.
Thanks.
]]>Thanks a lot,
M.
]]>In this tutorial, you will set up a local deployment of Great Expectations, an open source data validation and documentation library written in Python. Data validation is crucial to ensuring that the data you process in your pipelines is correct and free of any data quality issues that might occur due to errors such as incorrect inputs or transformation bugs. Great Expectations allows you to establish assertions about your data called Expectations, and validate any data using those Expectations.
When you’re finished, you’ll be able to connect Great Expectations to your data, create a suite of Expectations, validate a batch of data using those Expectations, and generate a data quality report with the results of your validation.
To complete this tutorial, you will need:
In this step, you will install the Great Expectations package in your local Python environment, download the sample data you’ll use in this tutorial, and initialize a Great Expectations project.
To begin, open a terminal and make sure to activate your virtual Python environment. Install the Great Expectations Python package and command-line tool (CLI) with the following command:
- pip install great_expectations==0.13.35
Note: This tutorial was developed for Great Expectations version 0.13.35 and may not be applicable to other versions.
In order to have access to the example data repository, run the following git command to clone the directory and change into it as your working directory:
- git clone https://github.com/do-community/great_expectations_tutorial
- cd great_expectations_tutorial
The repository only contains one folder called data
, which contains two example CSV files with data that you will use in this tutorial. Take a look at the contents of the data
directory:
- ls data
You’ll see the following output:
Outputyellow_tripdata_sample_2019-01.csv yellow_tripdata_sample_2019-02.csv
Great Expectations works with many different types of data, such as connections to relational databases, Spark dataframes, and various file formats. For the purpose of this tutorial, you will use these CSV files containing a small set of taxi ride data to get started.
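To get a feel for the kind of assertion Great Expectations automates for data like this, here is a minimal hand-rolled sketch in plain Python. The tiny inline sample is made up for illustration; the column names mirror the taxi data described in this tutorial:

```python
import csv
import io

# Hypothetical miniature stand-in for one of the taxi ride CSV files
sample_csv = """vendor_id,passenger_count,trip_distance
1,2,1.5
2,1,3.2
1,6,0.8
"""

def validate_column_in_set(rows, column, value_set):
    """Return True if every value in `column` belongs to `value_set`."""
    return all(row[column] in value_set for row in rows)

rows = list(csv.DictReader(io.StringIO(sample_csv)))
ok = validate_column_in_set(rows, "passenger_count", {"1", "2", "3", "4", "5", "6"})
print(ok)  # True: every ride in the sample has between 1 and 6 passengers
```

Great Expectations generalizes this idea: instead of writing ad-hoc checks, you declare Expectations once and run them against any batch of data.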
Finally, initialize your directory as a Great Expectations project by running the following command. Make sure to use the --v3-api
flag, as this will switch you to using the most recent API of the package:
- great_expectations --v3-api init
When asked OK to proceed? [Y/n]:
, press ENTER
to proceed.
This will create a folder called great_expectations
, which contains the basic configuration for your Great Expectations project, also called the Data Context. You can inspect the contents of the folder:
- ls great_expectations
You will see the first level of files and subdirectories that were created inside the great_expectations
folder:
Outputcheckpoints great_expectations.yml plugins
expectations notebooks uncommitted
The folders store all the relevant content for your Great Expectations setup. The great_expectations.yml
file contains all important configuration information. Feel free to explore the folders and configuration file a little more before moving on to the next step in the tutorial.
In the next step, you will add a Datasource to point Great Expectations at your data.
In this step, you will configure a Datasource in Great Expectations, which allows you to automatically create data assertions called Expectations as well as validate data with the tool.
While in your project directory, run the following command:
- great_expectations --v3-api datasource new
You will see the following output. Enter the options shown when prompted to configure a file-based Datasource for the data
directory:
OutputWhat data would you like Great Expectations to connect to?
1. Files on a filesystem (for processing with Pandas or Spark)
2. Relational database (SQL)
: 1
What are you processing your files with?
1. Pandas
2. PySpark
: 1
Enter the path of the root directory where the data files are stored. If files are on local disk enter a path relative to your current working directory or an absolute path.
: data
After confirming the directory path with ENTER
, Great Expectations will open a Jupyter notebook in your web browser, which allows you to complete the configuration of the Datasource and store it to your Data Context. The following screenshot shows the first few cells of the notebook:
The notebook contains several pre-populated cells of Python code to configure your Datasource. You can modify the settings for the Datasource, such as the name, if you like. However, for the purpose of this tutorial, you’ll leave everything as-is and execute all cells using the Cell > Run All
menu option. If run successfully, the last cell output will look as follows:
Output[{'data_connectors': {'default_inferred_data_connector_name': {'module_name': 'great_expectations.datasource.data_connector',
'base_directory': '../data',
'class_name': 'InferredAssetFilesystemDataConnector',
'default_regex': {'group_names': ['data_asset_name'], 'pattern': '(.*)'}},
'default_runtime_data_connector_name': {'module_name': 'great_expectations.datasource.data_connector',
'class_name': 'RuntimeDataConnector',
'batch_identifiers': ['default_identifier_name']}},
'module_name': 'great_expectations.datasource',
'class_name': 'Datasource',
'execution_engine': {'module_name': 'great_expectations.execution_engine',
'class_name': 'PandasExecutionEngine'},
'name': 'my_datasource'}]
This shows that you have added a new Datasource called my_datasource
to your Data Context. Feel free to read through the instructions in the notebook to learn more about the different configuration options before moving on to the next step.
Warning: Before moving forward, close the browser tab with the notebook, return to your terminal, and press CTRL+C
to shut down the running notebook server before proceeding.
You have now successfully set up a Datasource that points at the data
directory, which will allow you to access the CSV files in the directory through Great Expectations. In the next step, you will use one of these CSV files in your Datasource to automatically generate Expectations with a profiler.
In this step of the tutorial, you will use the built-in Profiler to create a set of Expectations based on some existing data. For this purpose, let’s take a closer look at the sample data that you downloaded:
yellow_tripdata_sample_2019-01.csv
and yellow_tripdata_sample_2019-02.csv
contain taxi ride data from January and February 2019, respectively. You will create Expectations (data assertions) based on certain properties of the January data and then, in a later step, use those Expectations to validate the February data. Let’s get started by creating an Expectation Suite, which is a set of Expectations that are grouped together:
- great_expectations --v3-api suite new
By selecting the options shown in the output below, you specify that you would like to use a profiler to generate Expectations automatically, using the yellow_tripdata_sample_2019-01.csv
data file as an input. Enter the name my_suite
as the Expectation Suite name when prompted and press ENTER
at the end when asked Would you like to proceed? [Y/n]
:
OutputUsing v3 (Batch Request) API
How would you like to create your Expectation Suite?
1. Manually, without interacting with a sample batch of data (default)
2. Interactively, with a sample batch of data
3. Automatically, using a profiler
: 3
A batch of data is required to edit the suite - let's help you to specify it.
Which data asset (accessible by data connector "my_datasource_example_data_connector") would you like to use?
1. yellow_tripdata_sample_2019-01.csv
2. yellow_tripdata_sample_2019-02.csv
: 1
Name the new Expectation Suite [yellow_tripdata_sample_2019-01.csv.warning]: my_suite
When you run this notebook, Great Expectations will store these expectations in a new Expectation Suite "my_suite" here:
<path_to_project>/great_expectations_tutorial/great_expectations/expectations/my_suite.json
Would you like to proceed? [Y/n]: <press ENTER>
This will open another Jupyter notebook that lets you complete the configuration of your Expectation Suite. The notebook contains a fair amount of code to configure the built-in profiler, which looks at the CSV file you selected and creates certain types of Expectations for each column in the file based on what it finds in the data.
Scroll down to the second code cell in the notebook, which contains a list of ignored_columns
. By default, the profiler will ignore all columns, so let’s comment out some of them to make sure the profiler creates Expectations for them. Modify the code so it looks like this:
ignored_columns = [
# "vendor_id"
# , "pickup_datetime"
# , "dropoff_datetime"
# , "passenger_count"
"trip_distance"
, "rate_code_id"
, "store_and_fwd_flag"
, "pickup_location_id"
, "dropoff_location_id"
, "payment_type"
, "fare_amount"
, "extra"
, "mta_tax"
, "tip_amount"
, "tolls_amount"
, "improvement_surcharge"
, "total_amount"
, "congestion_surcharge"
,]
Make sure to remove the comma before "trip_distance"
. By commenting out the columns vendor_id
, pickup_datetime
, dropoff_datetime
, and passenger_count
, you are telling the profiler to generate Expectations for those columns. In addition, the profiler will also generate table-level Expectations, such as the number and names of columns in your data, and the number of rows. Once again, execute all cells in the notebook by using the Cell > Run All
menu option.
When executing all cells in this notebook, two things happen: first, the profiler generates an Expectation Suite based on the yellow_tripdata_sample_2019-01.csv
file you told it to use; second, Great Expectations validates that same file against the new suite and opens the results as Data Docs in a new browser window.
In the next step, you will take a closer look at the Data Docs that were opened in the new browser window.
In this step of the tutorial, you will inspect the Data Docs that Great Expectations generated and learn how to interpret the different pieces of information. Go to the browser window that just opened and take a look at the page, shown in the screenshot below.
At the top of the page, you will see a box titled Overview, which contains some information about the validation you just ran using your newly created Expectation Suite my_suite
. It will tell you Status: Succeeded
and show some basic statistics about how many Expectations were run. If you scroll further down, you will see a section titled Table-Level Expectations. It contains two rows of Expectations, showing the Status, Expectation, and Observed Value for each row. Below the table Expectations, you will see the column-level Expectations for each of the columns you commented out in the notebook.
Let’s focus on one specific Expectation: The passenger_count
column has an Expectation stating “values must belong to this set: 1 2 3 4 5 6
.” which is marked with a green checkmark and has an Observed Value of “0% unexpected”. This is telling you that the profiler looked at the values in the passenger_count
column in the January CSV file and detected only the values 1 through 6, meaning that all taxi rides had between 1 and 6 passengers. Great Expectations then created an Expectation for this fact. The last cell in the notebook then triggered validation of the January CSV file and it found no unexpected values. This is trivially true, since the same data that was used to create the Expectation was also used for validation.
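The arithmetic behind that Observed Value can be sketched in a few lines of plain Python. This is a simplified illustration of the idea, not Great Expectations’ actual implementation:

```python
def percent_unexpected(values, value_set):
    """Percentage of values that fall outside the allowed set."""
    unexpected = [v for v in values if v not in value_set]
    return 100 * len(unexpected) / len(values)

# January-style data: every ride has 1 to 6 passengers, so 0% unexpected
print(percent_unexpected([1, 2, 1, 4, 6], {1, 2, 3, 4, 5, 6}))  # 0.0
```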
In this step, you reviewed the Data Docs and observed the passenger_count
column for its Expectation. In the next step, you’ll see how you can validate a different batch of data.
In the final step of this tutorial, you will create a new Checkpoint, which bundles an Expectation Suite and a batch of data to execute validation of that data. After creating the Checkpoint, you will then run it to validate the February taxi data CSV file and see whether the file passed the Expectations you previously created. To begin, return to your terminal and stop the Jupyter notebook by pressing CTRL+C
if it is still running. The following command will start the workflow to create a new Checkpoint called my_checkpoint
:
- great_expectations --v3-api checkpoint new my_checkpoint
This will open a Jupyter notebook with some pre-populated code to configure the Checkpoint. The second code cell in the notebook will have a random data_asset_name
pre-populated from your existing Datasource, which will be one of the two CSV files in the data
directory you’ve seen earlier. Ensure that the data_asset_name
is yellow_tripdata_sample_2019-02.csv
and modify the code if needed to use the correct filename.
my_checkpoint_name = "my_checkpoint" # This was populated from your CLI command.
yaml_config = f"""
name: {my_checkpoint_name}
config_version: 1.0
class_name: SimpleCheckpoint
run_name_template: "%Y%m%d-%H%M%S-my-run-name-template"
validations:
- batch_request:
datasource_name: my_datasource
data_connector_name: default_inferred_data_connector_name
data_asset_name: yellow_tripdata_sample_2019-02.csv
data_connector_query:
index: -1
expectation_suite_name: my_suite
"""
print(yaml_config)
"""
This configuration snippet configures a new Checkpoint, which reads the data asset yellow_tripdata_sample_2019-02.csv
, i.e., your February CSV file, and validates it using the Expectation Suite my_suite
. Confirm that you modified the code correctly, then execute all cells in the notebook. This will save the new Checkpoint to your Data Context.
Finally, in order to run this new Checkpoint and validate the February data, scroll down to the last cell in the notebook. Uncomment the code in the cell to look as follows:
context.run_checkpoint(checkpoint_name=my_checkpoint_name)
context.open_data_docs()
Select the cell and run it using the Cell > Run Cells
menu option or the SHIFT+ENTER
keyboard shortcut. This will open Data Docs in a new browser tab.
On the Validation Results overview page, click on the topmost run to navigate to the Validation Result details page. The Validation Result details page will look very similar to the page you saw in the previous step, but it will now show that the Expectation Suite failed when validating the new CSV file. Scroll through the page to see which Expectations have a red X next to them, marking them as failed.
Find the Expectation on the passenger_count
column you looked at in the previous step: “values must belong to this set: 1 2 3 4 5 6
”. You will notice that it now shows up as failed, reporting 1579 unexpected values found. ≈15.79% of 10000 total rows
. The row also displays a sample of the unexpected values that were found in the column, namely the value 0
. This means that the February taxi ride data suddenly introduced the unexpected value 0
in the passenger_count
column, which seems like a potential data bug. By running the Checkpoint, you validated the new data with your Expectation Suite and detected this issue.
Note that each time you execute the run_checkpoint
method in the last notebook cell, you kick off another validation run. In a production data pipeline environment, you would call the run_checkpoint
command outside of a notebook whenever you’re processing a new batch of data to ensure that the new data passes all validations.
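One common pattern in such pipelines is to gate downstream processing on the outcome of the checkpoint run. Here is a schematic sketch in plain Python, where the result dictionary is a hypothetical stand-in for the object that run_checkpoint returns:

```python
def gate_on_validation(result):
    """Stop the pipeline if a (hypothetical) checkpoint result reports
    failure, so downstream steps never process bad data."""
    if not result["success"]:
        raise RuntimeError("Data validation failed; stopping the pipeline.")
    return result

# Simulated outcomes of a checkpoint run
gate_on_validation({"success": True})   # passes silently
try:
    gate_on_validation({"success": False})
except RuntimeError as err:
    print(err)  # Data validation failed; stopping the pipeline.
```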
In this article you created a first local deployment of the Great Expectations framework for data validation. You initialized a Great Expectations Data Context, created a new file-based Datasource, and automatically generated an Expectation Suite using the built-in profiler. You then created a Checkpoint to run validation against a new batch of data, and inspected the Data Docs to view the validation results.
This tutorial only taught you the basics of Great Expectations. The package contains more options for configuring Datasources to connect to other types of data, for example relational databases. It also comes with a powerful mechanism to automatically recognize new batches of data based on pattern-matching in the tablename or filename, which allows you to only configure a Checkpoint once to validate any future data inputs. You can learn more about Great Expectations in the official documentation.
]]>```
$ git pull origin master
From https://github.com/...
...
error: The following untracked working tree files would be overwritten by
merge:
accounts/migrations/0001_initial.py
Please move or remove them before you merge.
Aborting
```
Locally, my accounts app has only one migration:
```
accounts > migrations
__init__.py
0001_initial.py
```
When I run git status on the server, I get a lot of untracked files, and I can see two migrations related to my accounts app (even though locally I only have one migration file in the accounts/migrations) as well as other untracked files (not related to accounts app):
```
On branch master
Untracked files:
(use "git add <file>..." to include in what will be committed)
accounts/migrations/0001_initial.py
accounts/migrations/0002_alter_user_id.py
...
```
Given that I don’t want to mess with the production database, I don’t want to change the migration files on the server to replicate the local migration files unless doing so would not cause any problems for my server. So, how should I resolve this error?
]]>What’s the best way to clone this droplet as-is (SQL database and all) and run it locally? Once local I’ll configure it with Git and use GitHub Actions to rsync back to the production server.
]]>I face two different issues:
How do I automate the GitHub authentication?
Which password should I use?
I run:
ssh-keygen
add key.pub
add agent and authenticate
In Github I:
On my DO server I note the file /etc/ssh/sshd_config
has:
PasswordAuthentication no
#Port 22
##HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed----_key
Should I uncomment any of these?
After running the git clone command, I am asked for my GitHub username and password… then it fails.
I tried with:
id_rsa.pub
I constantly fail.
I run the command to a user (not root) that was granted rights.
Any tip, advice on how to overcome this hurdle would be highly appreciated
]]>I followed the steps mentioned in https://www.digitalocean.com/community/questions/how-can-use-submodules-in-private-repos-with-gitlab-in-do-apps to add Git Submodule.
At root of the project, I have .gitmodules
which contains
[submodule "src/helper/joiyyyyy"]
path = src/helper/joiyyyyy
url = git@gitlab.com:adarshyyyy/joiyyyyy.git
Note: Instead of https://
I am using git@gitlab
Also, I have enabled the Deploy Key
in Git Sub Module on GitLab.
Still when deploying, I am getting warning
[joiyyyy] [2021-09-10 05:14:38] => Initializing build
[joiyyyy] [2021-09-10 05:14:38] => Retrieving source code to /workspace
[joiyyyy] [2021-09-10 05:14:38] => Selecting branch "prod"
[joiyyyy] [2021-09-10 05:14:42] => Checking out commit "4720ab4de3f8bfbefe37b0d02fc22c0e14cbcdad"
[joiyyyy] [2021-09-10 05:14:43] => Cloning submodules
[joiyyyy] [2021-09-10 05:14:43] warning: error cloning submodules: entry not found
[joiyyyy] [2021-09-10 05:14:43] => Got source_dir: /
[joiyyyy] [2021-09-10 05:14:43] => Using workspace root /workspace
Note the third-to-last line: warning: error cloning submodules: entry not found.
Finally, the Deployment will fail, as Node is not able to import function from this Git Submodule.
]]>Contributing to open-source projects is a rewarding experience as you work to make software better for end users like yourself. Once you submit a pull request, the process of contributing to a project can require some rebasing and reworking of code prior to acceptance, followed by a general cleanup of your branches.
This tutorial will guide you through some of the next steps you may need to take after you submit a pull request to an open-source software project.
This tutorial will walk you through the steps you’ll take after making a pull request, so you should already have Git installed, and either have made or are thinking about creating a pull request.
As of November 2020, GitHub removed password-based authentication. For this reason, you will need to create a personal access token or add your SSH public key information in order to access GitHub repositories through the command line.
To learn more about contributing to open-source projects, you can read this introduction. To learn about making pull requests, you can read “How To Create a Pull Request on GitHub.”
While you contribute to open source, you may find that there are conflicts between your branch or pull request and the upstream code. You may get an error like this in your shell:
OutputCONFLICT (content): Merge conflict in your-file.py
Automatic merge failed; fix conflicts and then commit the result.
Or like this on your pull request via GitHub’s website:
This may happen if the maintainers do not respond to your pull request for a while, or if many people are contributing to the project at once. When this happens and you still want to merge your pull request, you will have to resolve conflicts and rebase your code.
A rebase allows us to move branches around by changing the commit that they are based on. This way, we can rebase our code to base it on the main branch’s more recent commits. Rebasing should be done with care, and you should make sure you are working with the right commits and on the right branch throughout the process. We’ll also go over using the git reflog
command below in case you make an error.
As we did in the pull request tutorial, we’ll move into the code directory:
- cd repository
Next, you want to ensure you’re in the correct branch by navigating to it with the git checkout
command:
- git checkout new-branch
Then, run git fetch
for the most recent upstream version of the code:
- git fetch origin
Once you have the upstream version of the project fetched, you can clean up your comments by either squashing or rewording your commit messages to make them more digestible to the project maintainers. If you did not do many small commits, this may not be necessary.
To begin this process, you’ll perform an interactive rebase. An interactive rebase can be used to edit previous commit messages, combine several commits into one, or delete or revert commits that are not necessary any longer. To do this, we will need to be able to reference the commits that we have made either by number or by a string that references the base of our branch.
To find out the number of commits we have made, we can inspect the total number of commits that have been made to the project with the following command:
- git log
This will provide you with output that looks similar to this:
Outputcommit 46f196203a16b448bf86e0473246eda1d46d1273
Author: username-2 <email-2>
Date: Mon Dec 14 07:32:45 2015 -0400
Commit details
commit 66e506853b0366c87f4834bb6b39d941cd034fe3
Author: username1 <email-1>
Date: Fri Nov 27 20:24:45 2015 -0500
Commit details
The log shows all the commits made to the given project’s repository, so your commits will be listed along with commits made by others. For projects that have an extensive history of commits by multiple authors, you’ll want to specify yourself as author in the command:
- git log --author=your-username
By specifying this parameter, you should be able to count up the commits you’ve made. If you’re working on multiple branches you can add --branches[=<branch>]
to the end of your command to limit by branch.
Now if you know the number of commits you’ve made on the branch that you want to rebase, you can run the git rebase
command like so:
- git rebase -i HEAD~x
Here, -i
refers to the rebase being interactive, and HEAD
refers to the latest commit from the main branch. The x
will be the number of commits you have made to your branch since you initially fetched it.
If, however, you don’t know how many commits you have made on your branch, you’ll need to find which commit is the base of your branch, which you can do by running the following command:
- git merge-base new-branch main
This command will return a long string known as a commit hash, something that looks like the following:
Output66e506853b0366c87f4834bb6b39d341cd094fe9
We’ll use this commit hash to pass to the git rebase
command:
- git rebase -i 66e506853b0366c87f4834bb6b39d341cd094fe9
For either of the above commands, your command-line text editor will open with a file that contains a list of all the commits in your branch, and you can now choose whether to squash commits or reword them.
When we squash commit messages, we are squashing or combining several smaller commits into one larger one.
In front of each commit you’ll see the word “pick,” so your file will look similar to this if you have two commits:
pick a1f29a6 Adding a new feature
pick 79c0e80 Here is another new feature
# Rebase 66e5068..79c0e80 onto 66e5068 (2 command(s))
Now, for each line of the file except for the first line, you should replace the word “pick” with the word “squash” to combine the commits:
pick a1f29a6 Adding a new feature
squash 79c0e80 Here is another new feature
At this point, you can save and close the file, which will open a new file that combines all the commit messages of all of the commits. You can reword the commit message as you see fit, and then save and close the file.
You’ll receive feedback once you have closed the file:
OutputSuccessfully rebased and updated refs/heads/new-branch.
You now have combined all of the commits into one by squashing them together.
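If you would like to rehearse this squash workflow without touching a real project, you can script it against a throwaway repository, driving the interactive rebase non-interactively through the GIT_SEQUENCE_EDITOR environment variable. This is a sketch for experimentation only:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo"

# Two small commits, mirroring the example above
echo "feature one" > file.txt && git add file.txt && git commit -qm "Adding a new feature"
echo "feature two" >> file.txt && git add file.txt && git commit -qm "Here is another new feature"

# Rewrite "pick" to "squash" on every todo line after the first, and
# accept the combined commit message unchanged (GIT_EDITOR=true)
GIT_SEQUENCE_EDITOR="sed -i '2,\$ s/^pick/squash/'" GIT_EDITOR=true git rebase -i --root

git rev-list --count HEAD   # both commits squashed into one
```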
Rewording commit messages is great for when you notice a typo, or you realize you were not using parallel language for each of your commits.
Once you perform the interactive rebase as described above with the git rebase -i
command, you’ll have a file open up that looks like this:
pick a1f29a6 Adding a new feature
pick 79c0e80 Here is another new feature
# Rebase 66e5068..79c0e80 onto 66e5068 (2 command(s))
Now, for each of the commits that you would like to reword, replace the word “pick” with “reword”:
pick a1f29a6 Adding a new feature
reword 79c0e80 Adding a second new feature
# Rebase 66e5068..79c0e80 onto 66e5068 (2 command(s))
Once you save and close the file, a new text file will appear in your terminal editor that shows the modified wording of the commit message. If you would like to edit the file again, you can do so before saving and closing the file. Doing this can ensure that your commit messages are useful and uniform.
Once you are satisfied with the number of commits you are making and the relevant commit messages, you should complete the rebase of your branch on top of the latest version of the project’s upstream code. To do this, you should run this command from your repository’s directory:
- git rebase origin/main
At this point, Git will begin replaying your commits onto the latest version of main. If you get conflicts while this occurs, Git will pause to prompt you to resolve conflicts prior to continuing. If there is nothing to resolve, your output will state the following:
OutputCurrent branch new-branch is up to date.
Once you have fixed the conflicts, you’ll run:
- git rebase --continue
This command will indicate to Git that it can now continue replaying your commits.
If you previously combined commits through using the squash
command, you will only need to resolve conflicts once.
Once you perform a rebase, the history of your branch changes, and you are no longer able to use the git push
command because the direct path has been modified.
We will have to instead use the --force
or -f
flag to force-push the changes, informing Git that you are fully aware of what you are pushing.
Let’s first ensure that our push.default
is simple
, which is the default in Git 2.0+, by configuring it:
- git config --global push.default simple
At this point, we should ensure that we are on the correct branch by checking out the branch we are working on:
- git checkout new-branch
OutputAlready on 'new-branch'
. . .
Now we can perform the force-push:
- git push -f
Now you should receive feedback of your updates along with the message that this was a forced update
. Your pull request is now updated.
If at some point you threw out a commit that you really wanted to integrate into the larger project, you should be able to use Git to restore commits you may have thrown away by accident.
We’ll be using the git reflog
command to find our missing commits and then create a new branch from that commit.
Reflog is short for reference logs which record when the tips of branches and other references were last updated within the local repository.
From the local directory of the code repository we are working in, we’ll run the command:
- git reflog
Once you run this command, you’ll receive output that looks like the following:
Output46f1962 HEAD@{0}: checkout: moving from branch-1 to new-branch
9370d03 HEAD@{1}: commit: code cleanups
a1f29a6 HEAD@{2}: commit: brand new feature
38f2fc2 HEAD@{3}: commit: remove testing methods
. . .
Your commit messages will let you know which of the commits is the one that you left behind, and the relevant string will be before the HEAD@{x}
information on the left-hand side of your terminal window.
Now you can take that information and create a new branch from the relevant commit:
- git checkout -b new-new-branch a1f29a6
In the example above, we made a new branch from the third commit displayed above, the one that rolled out a “brand new feature,” represented by the string a1f29a6
.
Depending on what you need to do from here, you can follow the steps on setting up your branch in this tutorial on pull requests, or return to the top of the current tutorial to work through rebasing the new branch.
Note: If you recently ran the git gc
command to clean up unnecessary files and optimize the local repository you may be unable to restore lost commits.
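You can likewise rehearse a recovery in a throwaway repository. Here the latest commit is deliberately discarded with git reset and then restored from the reflog; this is a sketch for experimentation only:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo"

echo "base" > app.txt && git add app.txt && git commit -qm "initial commit"
echo "brand new feature" >> app.txt && git add app.txt && git commit -qm "brand new feature"

# Simulate accidentally throwing the latest commit away
git reset -q --hard HEAD~1

# The reflog still knows where HEAD was one move ago; branch off of it
git checkout -q -b new-new-branch 'HEAD@{1}'
grep "brand new feature" app.txt   # the "lost" work is back
```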
When you submit a pull request, you are in dialogue with a larger project. Submitting a pull request is inviting others to talk about your work, just as you yourself are talking about and engaging with a bigger project. For you to have a successful conversation, it is important for you to be able to communicate why you are making the pull request through your commit messages, so it is best to be as precise and clear as possible.
The pull request review may be lengthy and detailed, depending on the project. It is best to think of the process as a learning experience, and a good way for you to improve your code and make the pull request better and more in line with the needs of the software project. The review should allow you to make the changes yourself through the maintainers’ advice and direction.
The pull request will keep a log of notes from reviewers and any updates and discussions you have together. You may need to make several extra commits throughout this process before the pull request is accepted. This is completely normal and provides a good opportunity for you to work on revision as part of a team.
Your pull request will continue to be maintained through Git, and be auto-updated throughout the process as long as you keep adding commits to the same branch and pushing those to your fork.
Though you are putting your code out there into the larger world for review by your peers, you should never be made to feel like the review is getting personal, so be sure to read relevant CONTRIBUTING.md
files or Codes of Conduct. It is important to make sure that your commits are aligning with the guidelines specified by the project, but if you begin to feel uncomfortable, the project you are working on may not be deserving of your contribution. There are many welcoming spaces in the open-source community and while you can expect your code to be looked at with a critical eye, all feedback you receive should be professional and courteous.
If your pull request has been accepted, you have successfully made a contribution to an open-source software project!
At this point, you will need to pull the changes you made back into your fork through your local repository. This is what you have already done when you went through the process to sync your fork. You can do this with the following commands in your terminal window:
- git checkout main
- git pull --rebase origin main
- git push -f origin main
Now, you should clean up both your local and remote branches by removing the branch you created in both places as they are no longer needed. First, let’s remove the local branch:
- git branch -d new-branch
The -d
flag added to the git branch
command will delete the branch that you pass to the command. In the example above, it is called new-branch.
Next, we’ll remove the remote branch:
- git push origin --delete new-branch
With the branches deleted you have cleaned up the repository and your changes now live in the main repository. You should keep in mind that because the changes you made through your pull request are now part of the main repository, they may not be available to the average end user who is downloading public releases. Generally speaking, software maintainers will bundle several new features and fixes together into a single public release.
This tutorial took you through some of the next steps you may need to complete after submitting a pull request to an open-source software repository.
Contributing to open-source projects — and becoming an active open-source developer — is often a rewarding experience. Making regular contributions to software you frequently use helps to ensure that it is valuable and useful to its community of users.
Teams of developers and open-source software maintainers typically manage their projects through Git, a distributed version control system that supports collaboration.
This cheat sheet style guide provides a quick reference to commands that are useful for working and collaborating in a Git repository. To install and configure Git, be sure to read “How To Contribute to Open Source: Getting Started with Git.”
How to Use This Guide:
highlighted text
in this guide’s commands, keep in mind that this text should refer to the commits and files in your own repository.Check your Git version with the following command, which will also confirm that Git is installed:
- git --version
Git allows you to configure a number of settings that will apply to all the repositories on your local machine. For instance, configure a username that Git will use to credit you with any changes you make to a local repository:
- git config --global user.name “firstname lastname”
Configure an email address to be associated with each history marker:
- git config --global user.email “valid-email”
Configure your preferred text editor as well:
- git config --global core.editor “nano”
You can initialize your current working directory as a Git repository with init
:
- git init
To copy an existing Git repository hosted remotely, you’ll use git clone
with the repo’s URL or server location (in the latter case you will use ssh
):
- git clone https://www.github.com/username/repo-name
Show your current Git directory’s remote repository:
- git remote
For more verbose output, use the -v
flag:
- git remote -v
Add the Git upstream, which can be a URL or can be hosted on a server (in the latter case, connect with ssh
):
- git remote add upstream https://www.github.com/username/repo-name
When you’ve modified a file and have marked it to go in your next commit, it is considered to be a staged file.
Check the status of your Git repository, including files added that are not staged, and files that are staged:
- git status
To stage modified files, use the add
command, which you can run multiple times before a commit. If you make subsequent changes that you want to include in the next commit, you must run add
again.
You can specify the specific file with add
:
- git add my_script.py
With .
you can add all files in the current directory, including files that begin with a .
:
- git add .
If you would like to add all files in the current directory as well as files in subdirectories, you can use the --all
or -A
flag:
- git add -A
You can remove a file from staging while retaining changes within your working directory with reset
:
- git reset my_script.py
Once you have staged your updates, you are ready to commit them, which will record changes you have made to the repository.
To commit staged files, you’ll run the commit
command with your meaningful commit message so that you can track commits:
- git commit -m "Commit message"
You can stage all tracked files and commit them in one step:
- git commit -am "Commit message"
If you need to modify your commit message, you can do so with the --amend
flag:
- git commit --amend -m "New commit message"
A branch in Git is a movable pointer to one of the commits in the repository. It allows you to isolate work and manage feature development and integrations. You can learn more about branches by reading the Git documentation.
List all current branches with the branch
command. An asterisk (*
) will appear next to your currently active branch:
- git branch
Create a new branch. You will remain on your currently active branch until you switch to the new one:
- git branch new-branch
Switch to any existing branch and check it out into your current working directory:
- git checkout another-branch
You can consolidate the creation and checkout of a new branch by using the -b
flag:
- git checkout -b new-branch
Rename your branch name:
- git branch -m current-branch-name new-branch-name
Merge the specified branch’s history into the one you’re currently working in:
- git merge branch-name
Abort the merge, in case there are conflicts:
- git merge --abort
You can also select a particular commit to merge with cherry-pick
with the string that references the specific commit:
- git cherry-pick f7649d0
When you have merged a branch and no longer need the branch, you can delete it:
- git branch -d branch-name
If you have not merged a branch to main, but are sure you want to delete it, you can force delete a branch:
- git branch -D branch-name
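The branch commands above can be exercised end-to-end in a throwaway repository. The following is a minimal sketch, assuming git is installed; the directory, file, and branch names are illustrative:

```shell
# Create a temporary repository so the demo doesn't touch real work.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Sammy"
git config user.email "sammy@example.com"
base=$(git symbolic-ref --short HEAD)    # "main" or "master", depending on your config

# Make an initial commit, then branch, commit, merge, and delete.
echo "hello" > file.txt
git add file.txt
git commit -q -m "Initial commit"
git checkout -q -b new-branch            # create and switch in one step
echo "feature" >> file.txt
git commit -q -am "Add feature line"
git checkout -q "$base"
git merge -q new-branch                  # fast-forwards the base branch
git branch -d new-branch                 # safe: the branch is fully merged
git branch                               # only the base branch remains
```

Because the branch was merged first, the safe -d delete succeeds; an unmerged branch would need -D.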
To download changes from another repository, such as the remote upstream, you’ll use fetch
:
- git fetch upstream
Merge the fetched commits. Note that some repositories may use master
instead of main
:
- git merge upstream/main
Push or transmit your local branch commits to the remote repository branch:
- git push origin main
Fetch and merge any commits from the tracking remote branch:
- git pull
Display the commit history for the currently active branch:
- git log
Show the commits that changed a particular file. This follows the file regardless of file renaming:
- git log --follow my_script.py
Show the commits that are on one branch and not on the other. This will show commits on b-branch
that are not on a-branch
:
- git log a-branch..b-branch
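The two-dot range lists commits reachable from the branch on the right of the .. but not from the one on the left. A minimal sketch in a throwaway repository (assumes git is installed; branch names are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Sammy"
git config user.email "sammy@example.com"
a_branch=$(git symbolic-ref --short HEAD)   # stands in for a-branch

git commit -q --allow-empty -m "shared commit"
git checkout -q -b b-branch
git commit -q --allow-empty -m "only on b-branch"

# Lists only the commit made on b-branch, not the shared one:
git log --oneline "$a_branch"..b-branch
```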
Look at reference logs (reflog
) to see when the tips of branches and other references were last updated within the repository:
- git reflog
Show any object in Git via its commit string or hash in a more human-readable format:
- git show de754f5
The git diff
command shows changes between commits, branches, and more. You can read more fully about it through the Git documentation.
Compare modified files that are on the staging area:
- git diff --staged
Display the diff of what is in b-branch
but is not in a-branch
:
- git diff a-branch..b-branch
Show the diff between two specific commits:
- git diff 61ce3e6..e221d9c
Track path changes by deleting a file from your project and staging this removal for commit:
- git rm file
Or change an existing file path and then stage the move:
- git mv existing-path new-path
Check the commit log to see if any paths have been moved:
- git log --stat -M
Sometimes you’ll find that you made changes to some code, but before you finish you have to begin working on something else. You’re not quite ready to commit the changes you have made so far, but you don’t want to lose your work. The git stash
command will allow you to save your local modifications and revert your working directory to match the most recent HEAD
commit.
Stash your current work:
- git stash
See what you currently have stashed:
- git stash list
Your stashes will be named stash@{0}
, stash@{1}
, and so on.
Show information about a particular stash:
- git stash show stash@{0}
To bring the files in a current stash out of the stash while still retaining the stash, use apply
:
- git stash apply stash@{0}
If you want to bring files out of a stash, and no longer need the stash, use pop
:
- git stash pop stash@{0}
If you no longer need the files saved in a particular stash, you can drop
the stash:
- git stash drop stash@{0}
If you have multiple stashes saved and no longer need to use any of them, you can use clear
to remove them:
- git stash clear
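As a quick sanity check, the whole stash cycle can be run in a throwaway repository. This is a minimal sketch, assuming git is installed; file names and messages are illustrative:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Sammy"
git config user.email "sammy@example.com"

echo "v1" > notes.txt
git add notes.txt
git commit -q -m "Initial commit"

echo "work in progress" >> notes.txt   # an uncommitted local change
git stash                              # working tree now matches HEAD again
cat notes.txt                          # shows only "v1"
git stash pop                          # restores the change and drops the stash
cat notes.txt                          # shows "v1" plus the in-progress line
```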
If you want to keep files in your local Git directory, but do not want to commit them to the project, you can add these files to your .gitignore
file so that they do not cause conflicts.
Use a text editor such as nano to add files to the .gitignore
file:
- nano .gitignore
To see examples of .gitignore
files, you can look at GitHub’s .gitignore
template repo.
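As an illustrative sketch, a .gitignore
often mixes build artifacts, dependency directories, and editor or OS files; the entries below are common examples, not project requirements:

```
# Dependency and build output
node_modules/
dist/
*.pyc

# Editor and OS files
.DS_Store
*.swp
```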
A rebase allows you to move branches around by changing the commit that they are based on. With rebasing, you can squash or reword commits.
You can start a rebase by specifying the number of recent commits you want to rebase (5
in the case below):
- git rebase -i HEAD~5
Alternatively, you can rebase based on a particular commit string or hash:
- git rebase -i 074a4e5
Once you have squashed or reworded commits, you can complete the rebase of your branch on top of the latest version of the project’s upstream code. Note that some repositories may use master
instead of main
:
- git rebase upstream/main
To learn more about rebasing and updating, you can read How To Rebase and Update a Pull Request, which is also applicable to any type of commit.
You can revert the changes that you made in a given commit by using revert
. Your working tree will need to be clean for this to work:
- git revert 1fc6665
Sometimes, including after a rebase, you need to reset your working tree. You can reset to a particular commit, and delete all changes, with the following command:
- git reset --hard 1fc6665
To force push your last known non-conflicting commit to the origin repository, you’ll need to use --force
:
Warning: Force pushing to the main (sometimes master
) branch is often frowned upon unless there is a really important reason for doing it. Use sparingly when working on your own repositories, and work to avoid this when you’re collaborating.
- git push --force origin main
To remove local untracked files and subdirectories from the Git directory for a clean working branch, you can use git clean
:
- git clean -f -d
If you need to modify your local repository so that it looks like the current upstream main branch (for example, because there are too many conflicts), you can perform a hard reset:
Note: Performing this command will make your local repository look exactly like the upstream. Any commits you have made but that were not pulled into the upstream will be destroyed.
- git reset --hard upstream/main
This guide covers some of the more common Git commands you may use when managing repositories and collaborating on software.
You can learn more about open-source software and collaboration in our Introduction to Open Source tutorial series:
There are many more commands and variations that you may find useful as part of your work with Git. To learn more about all of your available options, you can run the following to receive useful information:
- git --help
You can also read more about Git and look at Git’s documentation from the official Git website.
Version control isn’t just for code. It’s for anything you want to track, including content. Using Git to manage your next writing project gives you the ability to view multiple drafts at the same time, see differences between those drafts, and even roll back to a previous version. And if you’re comfortable doing so, you can then share your work with others on GitHub or other central Git repositories.
In this tutorial you’ll use Git to manage a small Markdown document. You’ll store an initial version, commit it, make changes, view the difference between those changes, and review the previous version. When you’re done, you’ll have a workflow you can apply to your own writing projects.
To manage your changes, you’ll create a local Git repository. A Git repository lives inside of an existing directory, so start by creating a new directory for your article:
- mkdir article
Switch to the new article
directory:
- cd article
The git init
command creates a new empty Git repository in the current directory. Execute that command now:
- git init
You’ll see the following output which confirms your repository was created:
OutputInitialized empty Git repository in /Users/sammy/article/.git/
The .gitignore
file lets you tell Git that some files should be ignored. You can use this to ignore temporary files your text editor might create, or operating systems files. On macOS, for example, the Finder application creates .DS_Store
files in directories. Create a .gitignore
file that ignores them:
- nano .gitignore
Add the following lines to the file:
# Ignore Finder files
.DS_Store
The first line is a comment, which will help you identify what you’re ignoring in the future. The second line specifies the file to ignore.
Save the file and exit the editor.
As you discover more files you want to ignore, open the .gitignore
file and add a new line for each file or directory you want to ignore.
Now that your repository is configured, you can start working.
Git only knows about files you tell it about. Just because a file exists in the directory holding the repository doesn’t mean Git will track its changes. You have to add a file to the repository and then commit the changes.
Create a new Markdown file called article.md
:
- nano article.md
Add some text to the file:
# How To Use Git to Manage Your Writing Project
### Introduction
Version control isn't just for code. It's for anything you want to track, including content. Using Git to manage your next writing project gives you the ability to view multiple drafts at the same time, see differences between those drafts, and even roll back to a previous version. And if you're comfortable doing so, you can then share your work with others on GitHub or other central git repositories.
In this tutorial you'll use Git to manage a small Markdown document. You'll store an initial version, commit it, make changes, view the difference between those changes, and review the previous version. When you're done, you'll have a workflow you can apply to your own writing projects.
Save the changes and exit the editor.
The git status
command will show you the state of your repository. It will show you what files need to be added so Git can track them. Run this command:
- git status
You’ll see this output:
OutputOn branch master
No commits yet
Untracked files:
(use "git add <file>..." to include in what will be committed)
.gitignore
article.md
nothing added to commit but untracked files present (use "git add" to track)
In the output, the Untracked files
section shows the files that Git isn’t looking at. These files need to be added to the repository so Git can watch them for changes. Use the git add
command to do this:
- git add .gitignore
- git add article.md
Now run git status
to verify those files have been added:
OutputOn branch master
No commits yet
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
new file: .gitignore
new file: article.md
Both files are now listed in the Changes to be committed
section. Git knows about them, but it hasn’t created a snapshot of the work yet. Use the git commit
command to do that.
When you create a new commit, you need to provide a commit message. A good commit message states what your changes are. When you’re working with others, the more detailed your commit messages are, the better.
Use the command git commit
to commit your changes:
- git commit -m "Add gitignore file and initial version of article"
The output of the command shows that the files were committed:
Output[master (root-commit) 95fed84] Add gitignore file and initial version of article
2 files changed, 9 insertions(+)
create mode 100644 .gitignore
create mode 100644 article.md
Use the git status
command to see the state of the repository:
- git status
The output shows there are no changes that need to be added or committed.
OutputOn branch master
nothing to commit, working tree clean
Now let’s look at how to work with changes.
You’ve added your initial version of the article. Now you’ll add more text so you can see how to manage changes with Git.
Open the article in your editor:
- nano article.md
Add some more text to the end of the file:
## Prerequisites
* Git installed on your local computer. The tutorial [How to Contribute to Open Source: Getting Started with Git](https://www.digitalocean.com/community/tutorials/how-to-contribute-to-open-source-getting-started-with-git) walks you through installing Git and covers some background information you may find useful.
Save the file.
Use the git status
command to see where things stand in your repository:
- git status
The output shows there are changes:
OutputOn branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: article.md
no changes added to commit (use "git add" and/or "git commit -a")
As expected, the article.md
file has changes.
Use git diff
to see what they are:
- git diff article.md
The output shows the lines you’ve added:
diff --git a/article.md b/article.md
index 77b081c..ef6c301 100644
--- a/article.md
+++ b/article.md
@@ -5,3 +5,7 @@
Version control isn't just for code. It's for anything you want to track, including content. Using Git to manage your next writing project gives you the ability to view multiple drafts at the same time, see differences between those drafts, and even roll back to a previous version. And if you're comfortable doing so, you can then share your work with others on GitHub or other central git repositories.
In this tutorial you'll use Git to manage a small Markdown document. You'll store an initial version, commit it, make changes, view the difference between those changes, and review the previous version. When you're done, you'll have a workflow you can apply to your own writing projects.
+
+## Prerequisites
+
+* Git installed on your local computer. The tutorial [How to Contribute to Open Source: Getting Started with Git](https://www.digitalocean.com/community/tutorials/how-to-contribute-to-open-source-getting-started-with-git) walks you through installing Git and covers some background information you may find useful.
In the output, lines starting with a plus (+) sign are lines you added. Lines that were removed would show up with a minus (-) sign. Lines that were unchanged would have neither of these characters in front.
Using git diff
and git status
is a helpful way to see what you’ve changed. You can also save the diff to a file so you can view it later with the following command:
- git diff article.md > article_diff.diff
Using the .diff
extension will help your text editor apply the proper syntax highlighting.
Saving the changes to your repository is a two-step process. First, add the article.md
file again, and then commit. Git wants you to explicitly tell it which files go in every commit, so even though you added the file before, you have to add it again. Note that the output from the git status
command reminds you of that.
Add the file and then commit the changes, providing a commit message:
- git add article.md
- git commit -m "add prerequisites section"
The output verifies that the commit worked:
Output[master 1fbfc21] add prerequisites section
1 file changed, 4 insertions(+)
Use git status
to see your repository status. You’ll see that there’s nothing else to do.
- git status
OutputOn branch master
nothing to commit, working tree clean
Continue this process as you revise your article. Make changes, verify them, add the file, and commit the changes with a detailed message. Commit your changes as often or as seldom as you feel comfortable. You might perform a commit after you finish each draft, or right before you do a major rework of your article’s structure.
If you send a draft of a document to someone else and they make changes to it, take their copy and replace your file with theirs. Then use git diff
to quickly see the changes they made. Git will see the changes whether you typed them in directly or replaced the file with one you downloaded from the web, email, or elsewhere.
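The replace-and-diff workflow described above can be simulated in a scratch repository. A minimal sketch, assuming git is installed; the file names and the "editor copy" are illustrative:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Sammy"
git config user.email "sammy@example.com"

printf 'Draft text.\n' > article.md
git add article.md
git commit -q -m "Initial draft"

# Simulate a copy that came back from a reviewer with edits:
printf 'Draft text, revised by the editor.\n' > article_from_editor.md
cp article_from_editor.md article.md      # replace your copy with theirs
git diff --stat article.md                # Git sees the edits however they arrived
git commit -q -am "Incorporate editor feedback"
```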
Now let’s look at managing the versions of your article.
Sometimes it’s helpful to look at a previous version of a document. Whenever you’ve used git commit
, you’ve supplied a helpful message that summarizes what you’ve done.
The git log
command shows you the commit history of your repository. Every change you’ve committed has an entry in the log.
- git log
Outputcommit 1fbfc2173f3cec0741e0a6b21803fbd0be511bc4
Author: Sammy Shark <sammy@digitalocean>
Date: Thu Sep 19 16:35:41 2019 -0500
add prerequisites section
commit 95fed849b0205c49eda994fff91ec03642d59c79
Author: Sammy Shark <sammy@digitalocean>
Date: Thu Sep 19 16:32:34 2019 -0500
Add gitignore file and initial version of article
Each commit has a specific identifier. You use this identifier to reference a specific commit’s changes. You only need the first several characters of the identifier, though. The git log --oneline
command gives you a condensed version of the log with shorter identifiers:
- git log --oneline
Output1fbfc21 add prerequisites section
95fed84 Add gitignore file and initial version of article
To view the initial version of your file, use git show
and the commit identifier. The identifiers in your repository will be different than the ones in these examples.
- git show 95fed84 article.md
The output shows the commit detail, as well as the changes that happened during that commit:
Outputcommit 95fed849b0205c49eda994fff91ec03642d59c79
Author: Sammy Shark <sammy@digitalocean>
Date: Thu Sep 19 16:32:34 2019 -0500
Add gitignore file and initial version of article
diff --git a/article.md b/article.md
new file mode 100644
index 0000000..77b081c
--- /dev/null
+++ b/article.md
@@ -0,0 +1,7 @@
+# How To Use Git to Manage Your Writing Project
+
+### Introduction
+
+Version control isn't just for code. It's for anything you want to track, including content. Using Git to manage your next writing project gives you the ability to view multiple drafts at the same time, see differences between those drafts, and even roll back to a previous version. And if you're comfortable doing so, you can then share your work with others on GitHub or other central git repositories.
+
+In this tutorial you'll use Git to manage a small Markdown document. You'll store an initial version, commit it, make changes, view the difference between those changes, and review the previous version. When you're done, you'll have a workflow you can apply to your own writing projects.
To see the file itself, modify the command slightly. Instead of a space between the commit identifier and the file name, use :./
like this:
- git show 95fed84:./article.md
You’ll see the content of that file, at that revision:
Output# How To Use Git to Manage Your Writing Project
### Introduction
Version control isn't just for code. It's for anything you want to track, including content. Using Git to manage your next writing project gives you the ability to view multiple drafts at the same time, see differences between those drafts, and even roll back to a previous version. And if you're comfortable doing so, you can then share your work with others on GitHub or other central git repositories.
In this tutorial you'll use Git to manage a small Markdown document. You'll store an initial version, commit it, make changes, view the difference between those changes, and review the previous version. When you're done, you'll have a workflow you can apply to your own writing projects.
You can save that output to a file if you need it for something else:
- git show 95fed84:./article.md > old_article.md
As you make more changes, your log will grow, and you’ll be able to review all of the changes you’ve made to your article over time.
In this tutorial you used a local Git repository to track the changes in your writing project. You can use this approach to manage individual articles, all the posts for your blog, or even your next novel. And if you push your repository to GitHub, you can invite others to help you edit your work.
WordPress themes are typically submitted to the review team to be hosted and published on the WordPress Theme Directory. We are working on an internal theme that will not work outside of our use case. It is also developed and managed using Git and hosted on our DigitalOcean Droplet.
How can we use our self-hosted, Git-managed WordPress theme and have it behave like a normal theme would? For example, WordPress detects when the theme needs to be updated and the theme is auto-updated. If auto-updating is disabled, it should wait until the update link is clicked manually.
I have begun by creating a --bare
repo on our droplet with SSH authentication. I have also attempted to create a PHP class in the theme that cross-checks the active theme’s version with the Git repo, but I have been unsuccessful in doing so. I did find some articles with good information, but they always end up using a service or plugin to handle the pushing. If at all possible, we would like to accomplish this without third-party services or plugins.
I have broken out what I think needs to happen in the following steps:
I was able to get the current theme version by using wp_get_theme()
and then retrieving the version property. I am hoping to use the same function to check the theme on in the git repo but I hit a wall trying to retrieve the git repo. I am not sure how to retrieve the git repo, and once it is retrieved, how would I decode it from the bare git data. Also to clarify, we are not using Github or any other service to manage our git repo. Our WordPress install is currently setup on my local environment (wp-env) as I have not yet deployed it to a server. We are planning on deploying it on a FreeBSD distribution with Nginx.
<?php
class Dnk_Webos_Version_Check {
protected $file;
protected $theme;
protected $version;
public function __construct( $file ) {
$this->file = $file;
$this->theme = wp_get_theme();
$this->version = $this->theme->version;
// Debug test
add_action( 'admin_notices', array( $this, 'debug_notice' ) );
}
public function debug_notice() {
?>
<div class="notice notice-success is-dismissible">
<p><?php echo esc_html( $this->version ); ?></p>
</div>
<?php
}
}
?>
Local Environment WordPress 5.8 PHP 7.4.21 Apache/2.4.38 (Debian) Linux 5.10.25-linuxkit x86_64
Git Server FreeBSD 12.2 zfs x64
For example, the current version configures a default UFW firewall and installs Docker Compose 1.22.0.
I noticed, for example, that the current version can barely run Docker Compose on a standard DO droplet because the entropy is too low. Docker has an 8-year-old article on this problem here.
It seems like haveged
should be installed by default, or that the documentation on the marketplace item should include a note about why docker-compose
might hang on a smaller VPS instance.
There doesn’t seem to be a way to provide feedback on documentation or the current image. The link takes you over to the docker forums in general, which don’t have a meaningful thread about the Digital Ocean image that I can see.
It would be great to use the same recipe offered by DO right now in the image but add the haveged line or any other items. To be explicit about the configuration, people should know exactly what the 1-click image is made up of, step by step.
I saw this previous answer from DO Staff recommending the cloning of marketplace images in order to keep a copy of something, but that doesn’t seem like a reasonable way to conduct devops.
Where does DO provide the script used for building these images? How does DO manage versions between the scripts? Would you consider making these scripts public and versioned so that people can create issues and pull requests, and otherwise understand the decisions about what went into them?
I have been trying to figure out how to successfully install an npm package hosted on my GitHub account. It works locally on my Mac but not from my DigitalOcean Droplet.
Here are the guides I have looked at and tried: https://www.digitalocean.com/community/questions/permission-denied-publickey-using-git-from-digitalocean-console-forge-user
https://stackoverflow.com/questions/57808112/digitalocean-and-github-permission-denied
and several others!
Nothing works.
I regenerated new keys and uploaded them to GitHub but still nothing.
The command I am trying to run is:
npm install git+ssh://github.com/mujibsardar/nodebb-theme-askavan.git
I get:
35 verbose argv "/usr/bin/node" "/usr/bin/npm" "install" "git+ssh://github.com/mujibsardar/nodebb-theme-askavan.git"
36 verbose node v10.16.0
37 verbose npm v7.18.1
38 error code 128
39 error command failed
40 error command git --no-replace-objects ls-remote ssh://git@github.com/mujibsardar/nodebb-theme-askavan.git
41 error git@github.com: Permission denied (publickey).
41 error fatal: Could not read from remote repository.
41 error
41 error Please make sure you have the correct access rights
41 error and the repository exists.
42 verbose exit 128
here are the permissions on my .ssh directory
drwx------ 2 root root 4096 Feb 1 21:21 .
drwx------ 9 root root 4096 Jun 18 22:34 ..
-rw-r--r-- 1 root root 103 Jun 19 16:33 authorized_keys
-rw------- 1 root root 411 Jun 18 18:59 id_ed25519
-rw-r--r-- 1 root root 103 Jun 18 18:59 id_ed25519.pub
-rw-r--r-- 1 root root 1548 Jun 18 18:26 known_hosts
My apologies if I missed to share something more fundamental. Any help would be highly appreciated!
SyntaxError: Unexpected token export
I even removed the middleware where the error happens and tried to run again, but the error happened somewhere else in my code. I’ve stuck in this since yesterday. Do you have any idea?
Thanks
I want to get it to say "Downloading and extracting Node v12.18".
Is that doable? Or do I need a Docker droplet?
I have a Node-based backend app deployed on the App Platform. This app is located in a monorepo together with my frontend. Currently, when any files are changed and pushed in the frontend directory, the backend will also redeploy.
My ideal situation would be that the backend recognizes that there are no file changes in the backend directory and therefore does not redeploy. This is currently the case for my frontend on Netlify; is something similar possible on DigitalOcean?
I’m a first-timer using DigitalOcean. I want to set up a Drone continuous integration system to work with GitHub.
I have followed the How To Install and Configure Drone on Ubuntu 20.04 and all the “nested” tutorials for setting up a droplet, buying a domain, connecting it to digital ocean, etc.
To my knowledge, the steps were successfully completed and domain.com
and drone.domain.com
both behave as they should in the DNS Lookup Tool.
At the end of Step 4 in the tutorial, I open drone.domain.com
, which as expected shows me a page where I need to authorize Drone- Github access. The screen looks like the “Drone UI” section of this article.
Once authorized, Drone is supposed to build a list of repositories. However, once I press “Authorize” it quickly flashes with the tab title “Drone | Continuous Integration …”, but then goes to the “Site can’t be reached” page.
I can access domain.com
. I’m using HostGator as a provider, so right now it only shows me the “Log in to HostGator” button, but it works.
In case it helps, I have previously tried to install Drone locally using docker-compose. Here I saw the exact same behavior, where the localhost port where Drone should be at said “Site can’t be reached”.
I have tried using Firefox and Chrome.
Any help is appreciated.
]]>{
"errors": [
{
"message": "\nInvalid `prisma.post.create()` invocation:\n\n\n error: Error validating datasource `db`: the URL must start with the protocol `postgresql://` or `postgres://`.\n --> schema.prisma:6\n | \n 5 | provider = \"postgresql\"\n 6 | url = env(\"DATABASE_URL\")\n | \n\nValidation Error Count: 1",
"locations": [
{
"line": 11,
"column": 3
}
],
"path": [
"createDraft"
],
"extensions": {
"code": "INTERNAL_SERVER_ERROR",
"exception": {
"clientVersion": "2.23.0",
"stacktrace": [
"Error: ",
"Invalid `prisma.post.create()` invocation:",
"",
"",
" error: Error validating datasource `db`: the URL must start with the protocol `postgresql://` or `postgres://`.",
" --> schema.prisma:6",
" | ",
" 5 | provider = \"postgresql\"",
" 6 | url = env(\"DATABASE_URL\")",
" | ",
"",
"Validation Error Count: 1",
" at cb (/workspace/node_modules/@prisma/client/runtime/index.js:35107:17)",
" at runMicrotasks (<anonymous>)",
" at processTicksAndRejections (internal/process/task_queues.js:97:5)"
]
}
}
}
],
"data": null
}
]]>=> Cloning submodules
warning: error cloning submodules: repository not found
For now, I’ve hardcoded the full “https://github.com/Mindesk/” prefix, but I’d rather use the relative address, i.e. “…/”.
Is this a known issue? And is there something I can do in order to support relative addressing?
]]>I would like some advice on ‘best practice’ for deploying projects to a DigitalOcean Droplet.
I have a number of hobby projects that are compiled with Docker, either web pages built with Node.js or Python APIs. My usual process is:
I have recently automated step 3 with GitHub workflows.
The process has served me well without complaints when projects are small but I am now becoming more space conscious - my git repositories often include files that aren’t necessary for deployment.
So my question is: what is the best practice method for deploying projects compiled with Docker?
Some of my thoughts are:
Am I going about this the right way? Any advice would be appreciated.
]]>Project is hosted on GitHub and CI/CD will be run by GitHub Actions.
I would like to prepare 3 stages:
debug = true
, with API keys for sandboxes, database migrations, database seeds and unit, integration and e2e tests;debug = false
, with API keys for sandboxes, database seeds and unit, integration and e2e tests;debug = false
, with target API keys and database migrations.I prepare sepearate workflows on Github: staging.yaml and production.yaml.
Of course, this isn’t necessary for work during the development stage.
Also I use Deployer for running commands, but it’s not that relevant for this thread.
staging.yaml:
step: build
- build docker environment (or pull and push)
- run docker
- build app (e.g. install vendors)
- unit tests
- integration tests
- export build app as artifacts
step: prepare server (e.g. droplet on DigitalOcean)
- unzip artifacts
- build docker environment with e.g. database
- run docker
- install vendors
- database seeds
step: tests
- run e2e tests
step: publish
- set subdomain (or domain) to docker environment
production.yaml:
step: build
- build docker environment (or pull and push)
- run docker
- build app
- export build app as artifacts
step: connect to existing server (e.g. droplet on DigitalOcean)
- make new dir in releases/ directory
- unzip artifacts to the newest directory in the releases/ directory
- run docker (or no)
- database migrations
- symlink the directory used as DOCUMENT_ROOT to the newest directory in the releases/ directory
step: publish
- just set success message
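As a rough sketch, the releases/-plus-symlink switch described in the production workflow above could look like the following (the unpack step is elided and the paths are illustrative, not the actual project layout):

```shell
# Sketch of the "new dir in releases/ + symlink switch" step above.
# The unpack step is elided; paths are illustrative.
set -e
cd /tmp && mkdir -p release_demo && cd release_demo
RELEASE="releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$RELEASE"
# ...unpack the build artifacts into "$RELEASE" here...
ln -sfn "$RELEASE" current    # DOCUMENT_ROOT points at "current"
echo "switched to $RELEASE"
```

Because `ln -sfn` replaces the symlink in one step, the document root flips to the new release atomically, which is the point of this layout.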
I’m getting ready to launch my first dockerized app on Digital Ocean, automated through git. As part of the process, git runs my automated tests. However, this will wipe out my database. Is there a way to specify a docker-compose.yml file for the automated testing portion (on git) and then the actual deployment to Digital Ocean?
Thank you,
Eric
]]>In package.json:
...
"dependencies": {
"library": "myusername/library",
...
},
...
I’ve authorized Digital Ocean to access all my repositories but during the build, the library package cannot be retrieved, erroring with:
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
What I’ve tried:
What is the correct way of adding this library package dependency such that it can be fetched during build?
I’d really appreciate any help.
Thanks!
]]>Initial set up of Ubuntu 20.04 server: https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-20-04
Installing Nginx: https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-20-04
Securing Nginx: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-20-04
I’ve followed everything step-by-step and my server works on HTTPS. However when I try to clone a git repo hosted on GitLab, the connection repeatedly fails with the following error:
kex_exchange_identification: Connection closed by remote host
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Now, I have generated an SSH key on my server (via ssh-keygen) and added the public SSH key to the Gitlab website.
However the firewall is not allowing me to connect. I have tried “sudo ufw allow git” but this did not work.
I was able to clone the repo by disabling the firewall (sudo ufw disable) and then enabling the firewall after the repo has been cloned, however this is not a realistic solution for obvious reasons.
Can anyone help out? Do I allow a certain port or specific IP for Gitlab connections?
Thanks
]]>After configuring this server, I also want to create a staging and development server, so I create a snapshot of the server and create two copies of this snapshot. This results in 3 servers with the same SSH keys used to access the Git repository. These SSH keys are only used for read access to the Git repository.
Are there any reasons (security wise) to not do this, and make sure a unique SSH key is configured for each copy of the original server?
Thanks in advance!
I found this thread that revolves around the same questions, but as that’s quite old, I was wondering what the current viewpoint on this matter is.
]]>[MASKED]@[MASKED]: Permission denied (publickey). 34 Cleaning up file based variables 00:01 35ERROR: Job failed: exit code 255
]]>Statamic is a CMS framework built on top of Laravel, which uses a flat-file system to store its content. Statamic offers a Git integration that makes automatic commits when users change the content. My problem with the App Platform is that it auto-deploys every time a new commit appears in the repository. So if I set up the Git integration in Statamic, every time a user changes the content it would trigger an unnecessary deployment.
The official Statamic documentation recommends customizing the automatic commit messages and updating the deploy script to ignore these types of commits. I checked the official DigitalOcean docs about the options I can use in the App spec file but couldn’t find anything useful.
https://www.digitalocean.com/docs/app-platform/references/app-specification-reference/
I see I can set deploy_on_push
to false
to have full control over the deployment, but it’s not the most ideal solution for my problem.
Is there anything I can do to make this work automatically?
]]>Editing the helpText with the shark gif works but I cannot see a console output.
On Chrome I get: “DevTools failed to load SourceMap: Could not load content for chrome-extension://fehcbmngdgagfalpnfphdhojfdcoblgc/static/js/contentScript.js.map: HTTP error: status code 404, net::ERR_UNKNOWN_URL_SCHEME”
On Firefox: Error: Can’t find profile directory.
Restarting the hello app with pm2 is fine. I do it with the console of WinSCP
I need some tutorial or example for these very first steps of running a hello world app, adding some code, and testing it. Could someone help me?
Some more specific questions
The version of git installed in my App Platform container is 1.17.2. I need to use a newer version, 1.22 or greater. Is it possible to specify the git version the container uses? If so, how?
Grateful for any insight!
]]>Does anyone have a command on hand which could show the remote URL of a specific local git repository?
I usually use the cat
command to check the content of the .git/config
and look for the remote origin
section in there.
But is there a better way of doing this?
Thanks!
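The manual approach described above can be reproduced in a throwaway repo (the remote URL here is hypothetical); git can also report the URL directly with `git remote -v`:

```shell
# Throwaway repo with a hypothetical remote URL, read back two ways.
mkdir -p /tmp/remote_url_demo && cd /tmp/remote_url_demo
git init -q
git remote add origin https://example.com/user/repo.git   # hypothetical URL
grep -A 1 '\[remote "origin"\]' .git/config               # the manual way
git remote -v                                             # git's own report
```

Both commands print the same URL; `git remote -v` saves you from poking at `.git/config` by hand.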
]]>Netlify lets you set custom headers for static sites by adding to their TOML file:
[[headers]]
for = "/*"
[headers.values]
Referrer-Policy = "no-referrer"
X-Content-Type-Options = "nosniff"
X-Frame-Options = "deny"
X-XSS-Protection = "1; mode=block"
What’s the equivalent for an app spec on Digital Ocean?
Visual Studio Code (VS Code) has become one of the most popular editors for web development. It earned this popularity thanks to its many built-in features, including source control integration with Git. Harnessing the power of Git from inside VS Code can make your workflow more efficient and robust.
In this tutorial, you will explore using VS Code’s source control integration with Git.
To follow this tutorial, you will need the following:
To take advantage of source control integration, the first step is to initialize your project as a Git repository.
Open Visual Studio Code and access the built-in terminal. You can open it with the keyboard shortcut CTRL + ` on Linux, macOS, or Windows.
In the terminal, create a directory for a new project and change into it:
- mkdir git_test
- cd git_test
Then, create the Git repository:
- git init
To do this in Visual Studio Code, open the Source Control tab (the icon that looks like a fork in the road) in the left panel.
Next, select Open Folder:
This opens your file explorer to the current directory. Select your preferred project directory and click Open.
Then, select Initialize Repository:
If you check your file system now, you will see that it contains a .git directory. To verify this, navigate to your project directory in the terminal and list all of its contents:
- ls -la
You will see the generated .git directory:
- Output.
- ..
- .git
Now that the repo is initialized, add a file called index.html.
Once you do, a new file marked with a U appears in the Source Control panel. U stands for untracked file, meaning a file that is new or changed but has not yet been added to the repository.
Click the plus icon (+) next to the index.html file in the list to track the file in the repository.
Once it has been added, the letter next to the file changes to A. A represents a new file that has been added to the repository.
To commit your changes, enter a commit message in the input box at the top of the Source Control panel, then click the check icon to perform the commit.
Afterward, you will see that there are no pending changes.
Next, add some content to the index.html file.
You can use an Emmet shortcut to generate an HTML5 skeleton in VS Code by pressing the ! key followed by the Tab key. Go ahead and add something to the <body>, such as an <h1> heading, and save the file.
In the Source Control panel, you will see that the file has changed. An M indicator, for modified, appears next to the file name.
For practice, go ahead and commit this change as well.
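For reference, the same add-and-commit flow the panel performs can be reproduced in the built-in terminal. This sketch uses a scratch directory and an illustrative commit message (the git config lines are only needed if your identity isn’t set globally):

```shell
# Scratch-directory sketch of the panel's add-and-commit flow.
mkdir -p /tmp/vsc_commit_demo && cd /tmp/vsc_commit_demo
git init -q
git config user.name "Sammy" && git config user.email "sammy@example.com"
echo '<h1>Hello, world</h1>' > index.html
git add index.html                 # same as clicking the + icon
git commit -q -m "Add heading"     # same as the check icon
git log --oneline -1               # shows the new commit
```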
Now that you are comfortable interacting with the Source Control panel, you will move on to interpreting gutter indicators.
In this step, you will take a look at what is known as the “gutter” in VS Code. The gutter is the narrow area to the right of the line numbers.
If you have used code folding before, you will have seen the maximize and minimize icons in the gutter.
To begin, make a small change to the index.html file, such as changing the content inside the <h1> tag. Afterward, you will see a vertical blue mark in the gutter next to the line you changed. The vertical blue mark indicates a modified line of code.
Now try deleting a line of code. Delete one of the lines from the <body> section of the index.html file. Notice that a red triangle now appears in the gutter. The red triangle indicates a line or group of lines that has been deleted.
Finally, add a new line of code at the bottom of the <body> section and notice the green bar. The vertical green bar indicates a new line of code.
This example illustrates the gutter indicators for a modified line, a deleted line, and a new line.
VS Code also has the ability to perform a diff on a file. Normally, you would need to download a separate diff tool to do this, so this built-in feature can make you more efficient.
To view a diff, open the Source Control panel and double-click a changed file. In this case, double-click the index.html file. You will be taken to a familiar diff view, with the earlier version of the file on the left and the current version on the right.
This example shows that a line has been added in the current version.
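Outside the editor, the same comparison is available on the command line with git diff. A quick sketch in a scratch repository (the git config lines are only needed if your identity isn’t set globally):

```shell
# Scratch-directory sketch of diffing a modified file on the command line.
mkdir -p /tmp/vsc_diff_demo && cd /tmp/vsc_diff_demo
git init -q
git config user.name "Sammy" && git config user.email "sammy@example.com"
echo '<h1>Hello</h1>' > index.html
git add index.html && git commit -q -m "initial"
echo '<p>a new line</p>' >> index.html   # modify the working copy
git diff                                 # added lines are prefixed with +
```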
Moving to the bottom bar, you can create and switch branches. In the bottom left of the editor, you will see the Source Control icon (the one that looks like a fork in the road), usually followed by master or the name of the current working branch.
To create a branch, click that branch name. A menu will pop up, giving you the ability to create a new branch.
Go ahead and create a new branch called test.
Now, make a change to the index.html file that indicates you are on the new test branch, such as adding the text this is the new test branch.
Commit those changes to the test branch. Then, click the branch name in the bottom left again to switch back to the master branch.
After switching back to the master branch, you will notice that the text this is the new test branch committed to the test branch is no longer there.
This tutorial won’t go into detail, but through the Source Control panel you also have access to working with remote repositories. If you have worked with remote repositories before, you will notice familiar commands like pull, sync, publish, stash, and more.
VS Code not only comes with a lot of built-in functionality for Git, there are also several popular extensions that add to it.
This extension provides the ability to view Git Blame information in the status bar for the currently selected line.
This may sound intimidating, but don’t worry: the Git Blame extension is more about practicality than making anyone feel bad. The idea of “blaming” someone for a code change is less about shaming and more about finding the right person to ask questions about a given piece of code.
As the screenshot shows, this extension displays a short message in the bottom toolbar about the current line of code you are working on.
While you can view current file changes, run diffs, and manage branches with VS Code’s built-in features, they don’t give you an in-depth look at your Git history. The Git History extension solves that problem.
As the image below shows, this extension lets you thoroughly explore a file’s history, its authors, branches, and more. To activate the Git History window below, right-click a file and select Git: View File History.
In addition, you can compare branches and commits, create branches from commits, and much more.
GitLens supercharges the Git capabilities built into Visual Studio Code. It helps you visualize code authorship at a glance via Git blame annotations and code lens, seamlessly navigate and explore Git repositories, gain valuable insights via powerful comparison commands, and much more.
The GitLens extension is one of the most popular in the community, and it is also the most powerful. In most cases, it can stand in for both of the previous two extensions.
For “blame” information, a short message appears to the right of the line you are currently working on, indicating who made the change, when it was made, and the associated commit message. Hovering over this message pops up additional information, such as the code change itself, the timestamp, and more.
For Git history information, this extension provides a lot of functionality. You have easy access to many options, including showing file history, running diffs against previous versions, opening a specific revision, and more. To open these options, click the text in the bottom status bar that shows the author who edited the current line of code and how long ago it was edited.
This will open the following window:
This extension is packed with functionality, and it will take a while to take in everything it has to offer.
In this tutorial, you covered how to use source control integration in VS Code. VS Code can handle many features that previously required downloading a separate tool.
]]>Visual Studio Code (VS Code) has become one of the most popular editors for web development. It gained this popularity thanks to its built-in features, including source control integration, namely with Git. Harnessing the power of Git from within VS Code can make your workflow more efficient and robust.
In this tutorial, you will explore using Source Control Integration in VS Code with Git.
To follow this tutorial, you will need the following:
The first thing you need to do to take advantage of source control integration is to initialize your project as a Git repository.
Open Visual Studio Code and access the built-in terminal. You can open it with the keyboard shortcut CTRL + ` on Linux, macOS, or Windows.
In the terminal, create a directory for a new project and change into that directory:
- mkdir git_test
- cd git_test
Then, create the Git repository:
- git init
Another way to do this in Visual Studio Code is to open the Source Control tab (the icon that looks like a fork in the road) in the left panel:
Next, select Open Folder:
This will open your file explorer to the current directory. Select your preferred project directory and click Open.
Then, select Initialize Repository:
If you check your file system, you will see that it includes a .git directory. To do this, use the terminal to navigate to your project directory and list all of its contents:
- ls -la
You will see the .git directory that was created:
- Output.
- ..
- .git
Now that the repository has been initialized, add a file named index.html.
Once you do, you will see the new file in the Source Control panel displayed with the letter U next to it. U stands for untracked file, meaning a file that is new or has been changed but has not yet been added to the repository:
You can now click the plus icon (+) by the index.html file listing to have the repository track the file.
Once added, the letter next to the file will change to A. A represents a new file that has been added to the repository.
To commit the changes, enter a commit message in the input box at the top of the Source Control panel. Then, click the check icon to perform the commit.
Afterward, you will see that there are no pending changes.
Next, add a bit of content to the index.html file.
You can use an Emmet shortcut to generate an HTML5 skeleton in VS Code by pressing the ! key followed by the Tab key. Go ahead and add something in the <body>, such as an <h1> heading, and save the file.
In the Source Control panel, you will see that the file has changed. The letter M will be displayed next to it, indicating that the file has been modified:
For practice, go ahead and commit this change as well.
Now that you are used to interacting with the Source Control panel, you will move on to interpreting gutter indicators.
In this step, you will learn about what is known as the “Gutter” in VS Code. The gutter is the narrow area to the right of the line numbers.
If you have used code folding before, the maximize and minimize icons are located in the gutter.
Let’s start by making a small change to the index.html file, such as changing the content inside the <h1> tag. Once you do, you will see a vertical blue mark in the gutter of the line you changed. The vertical blue mark indicates that the corresponding line of code has been changed.
Now, try deleting a line of code. You can delete one of the lines in the <body> section of the index.html file. Notice that there is now a red triangle in the gutter. The red triangle indicates a line or group of lines that has been deleted.
Finally, at the bottom of the <body> section, add a new line of code and notice the green bar. The vertical green bar indicates a line of code that has been added.
This example illustrates the gutter indicators for a modified line, a deleted line, and a new line:
VS Code also has the ability to run a diff on a file. Normally, you would have to download a separate diff tool to do this, so this built-in feature can help you work more efficiently.
To view a diff, open the Source Control panel and double-click the changed file. In this case, double-click the index.html file. You will be taken to a familiar diff view, with the earlier version of the file on the left and the current version on the right.
This example shows that a line has been added in the current version:
Moving to the bottom bar, you can create and switch branches. If you look at the bottom left of the editor, you will see the source control icon (the one that looks like a fork in the road), usually followed by master or the name of the current branch.
To create a branch, click that branch name. A menu will appear, giving you the ability to create a new branch:
Go ahead and create a new branch named test.
Now, make a change to the index.html file that indicates you are on the new test branch, such as adding the text this is the new test branch.
Commit those changes to the test branch. Then, click the branch name in the bottom left again to switch back to the master branch.
After switching back to the master branch, you will see that the text this is the new test branch committed on the test branch is no longer there.
This tutorial will not cover it in depth, but through the Source Control panel you also have access to working with remote repositories. If you have worked with remote repositories before, you will notice familiar commands such as pull, sync, publish, stash, and more.
VS Code not only comes with plenty of built-in functionality for Git, there are also several very popular extensions that add further functionality.
This extension provides the ability to view Git Blame information in the status bar for the currently selected line.
It may sound troubling, but don’t worry: the Git Blame extension is about practicality, not making anyone feel bad. The idea of “blaming” someone for a code change is far from an attempt to shame them; it is about knowing the right person to ask about a particular piece of code.
As you can see in the screenshot, this extension provides a short message in the bottom toolbar about the current line of code you are working on, describing who made the change and when.
While you can view current changes, run diffs, and manage branches with VS Code’s built-in features, they do not give you an in-depth view of your Git history. The Git History extension addresses that problem.
As you can see in the image below, this extension lets you thoroughly explore a file’s history, its authors, branches, and more. To activate the Git History window below, right-click a file and select Git: View File History:
In addition, you can compare branches and commits, create branches from commits, and much more.
GitLens greatly enhances the Git capabilities included in Visual Studio Code. It helps you quickly visualize code authorship through Git blame annotations and code lens, navigate and explore Git repositories, gain valuable insights through powerful comparison commands, and much more.
The GitLens extension is one of the most popular extensions in the community, as well as the most powerful. For the most part, it can replace either of the previous two extensions with its functionality.
For “blame” information, a short message appears to the right of the line you are working on, telling you who made the change, when they made it, and the associated commit message. Some additional pieces of information appear when you hover over this message, such as the code change itself, the timestamp, and more.
For Git history information, this extension provides a lot of functionality. You have easy access to many options, including displaying file history, running a diff against a previous version, opening a specific revision, and more. To open these options, you can click the text in the bottom status bar that shows the author who edited the line of code and how long ago it was edited.
This will open the following window:
This extension is packed with functionality, and it will take a while to take in everything it offers.
In this tutorial, you explored how to use source control integration with VS Code. VS Code can handle many features that previously required you to download a separate tool.
]]>Using Git, developers can modify code in parallel and track changes over time, reducing code conflicts and increasing workflow efficiency among developers working on the same application.
To learn more about Git, visit:
An introductory series on working with open-source projects through Git.
A complete list of resources on Git can be found on our Git page.
]]>Continuous Integration/Continuous Deployment (CI/CD) is a development practice that allows software teams to build, test, and deploy applications easier and quicker on multiple platforms. CircleCI is a popular automation platform that allows you to build and maintain CI/CD workflows for your projects.
Having continuous deployment is beneficial in many ways. It helps to standardize the deployment steps of an application and to protect it from un-recorded changes. It also helps avoid performing repetitive steps and lets you focus more on development. With CircleCI, you can have a single view across all your different deployment processes for development, testing, and production.
In this tutorial, you’ll build a Node.js app locally and push it to GitHub. Following that, you’ll configure CircleCI to connect to a virtual private server (VPS) that’s running Ubuntu 18.04, and you’ll go through the steps to set up your code for auto-deployment on the VPS. By the end of the article, you will have a working CI/CD pipeline where CircleCI will pick up any code you push from your local environment to the GitHub repo and deploy it on your VPS.
Before you get started, you’ll need to have the following:
In this step, you’ll create a Node.js project locally that you will use during this tutorial as an example application. You’ll push this to a repo on GitHub later.
Go ahead and run these commands on your local terminal so that you can set up a quick Node development environment.
First, create a directory for the test project:
- mkdir circleci-test
Change into the new directory:
- cd circleci-test
Follow this up by initializing an npm environment to pull in the dependencies if you have any. The -y
flag will auto-accept every prompt thrown by npm init
:
- npm init -y
For more information on npm
, check out our How To Use Node.js Modules with npm and package.json tutorial.
Next, create a basic server that serves Hello World!
when someone accesses any route. Using a text editor, create a file called app.js
in the root directory of the project. This tutorial will use nano:
- nano app.js
Add the following code to the app.js
file:
const http = require('http');
http.createServer(function (req, res) {
res.write('Hello World!');
res.end();
}).listen(8080, '0.0.0.0');
This sample server uses the http
package to listen to any incoming requests on port 8080
and fires a request listener function that replies with the string Hello World
.
Save and close the file.
You can test this on your local machine by running the following command from the same directory in the terminal. This will create a Node process that runs the server (app.js
):
- node app.js
Now visit the http://localhost:8080
URL in your browser. Your browser will render the string Hello World!
. Once you have tested the app, stop the server by pressing CTRL+C
on the same terminal where you started the Node process.
You’ve now set up your sample application. In the next step, you will add a configuration file in the project so that CircleCI can use it for deployment.
CircleCI executes workflows according to a configuration file in your project folder. In this step, you will create that file to define the deployment workflow.
Create a folder called .circleci
in the root directory of your project:
- mkdir .circleci
Add a new file called config.yml
in it:
- nano .circleci/config.yml
This will open a file with the YAML file extension. YAML is a language that is often used for configuration management.
Add the following configuration to the new config.yml
file:
version: 2.1
# Define the jobs we want to run for this project
jobs:
pull-and-build:
docker:
- image: arvindr226/alpine-ssh
steps:
- checkout
- run: ssh -oStrictHostKeyChecking=no -v $USER@$IP "./deploy.sh"
# Orchestrate our job run sequence
workflows:
version: 2
build-project:
jobs:
- pull-and-build:
filters:
branches:
only:
- main
Save this file and exit the text editor.
This file tells the CircleCI pipeline the following:
pull-and-build
whose steps involve spinning up a Docker container, SSHing from it to the VPS, and then running the deploy.sh
file.steps
section of the config. In this case, all you need to do is SSH into the VPS and run sh deploy.sh
command, so the environment needs to be lightweight but still allow the SSH command. The Docker image arvindr226/alpine-ssh
is an Alpine Linux image that supports SSH.deploy.sh
is a file that you will create in the VPS. It will run every time as a part of the deployment process and will contain steps specific to your project.workflows
section, you inform CircleCI that it needs to perform this job based on some filters, which in this case is that only changes to the main branch will trigger this job.Next, you will commit and push these files to a GitHub repository. You will do this by running the following commands from the project directory.
First, initialize the Node.js project directory as a git repo:
- git init
Go ahead and add the new changes to the git repo:
- git add .
Then commit the changes:
- git commit -m "initial commit"
If this is the first time committing, git will prompt you to run some git config
commands to identify you.
From your browser navigate to GitHub and log in with your GitHub account. Create a new repository called circleci-test
without a README or license file. Once you’ve created the repository, return to the command line to push your local files to GitHub.
To follow current GitHub conventions, rename your branch to main
with the following command:
- git branch -M main
Before you push the files for the first time, you need to add GitHub as a remote repository. Do that by running:
- git remote add origin https://github.com/GitHub_username/circleci-test
Follow this with the push
command, which will transfer the files to GitHub:
- git push -u origin main
You have now pushed your code to GitHub. In the next step, you’ll create a new user in the VPS that will execute the steps
in the pull-and-build
part.
Now that you have the project ready, you will create a deployment user in the VPS.
Connect to your VPS as your sudo
user
- ssh your_username@your_server_ip
Next, create a new user that doesn’t use a password for login using the useradd
command.
- sudo useradd -m -d /home/circleci -s /bin/bash circleci
This command creates a new user on the system. The -m
flag instructs the command to create a home directory specified by the -d
flag.
circleci
will be the new deployment user in this case. For security purposes, you are not going to add this user to the sudo
group, since the only job of this user is to create an SSH connection from the VPS to the CircleCI network and run the deploy.sh
script.
Make sure that the firewall on your VPS is open to port 8080
:
- sudo ufw allow 8080
You now need to create an SSH key, which the new user can use to log in. You are going to create an SSH key with no passphrase, or else CircleCI will not be able to decrypt it. You can find more information in the official CircleCI documentation. Also, CircleCI expects the format of the SSH keys to be in the PEM format, so you are going to enforce that while creating the key pair.
Back on your local system, move to your home folder:
- cd
Then run the following command:
- ssh-keygen -m PEM -t rsa -f .ssh/circleci
This command creates an RSA key with the PEM format specified by the -m
flag and the key type specified by the -t
flag. You also specify the -f
to create a new key pair called circleci
and circleci.pub
. Specifying the name will avoid overwriting your existing id_rsa
file.
Print out the new public key:
- cat ~/.ssh/circleci.pub
This outputs the public key that you generated. You will need to register this public key in your VPS. Copy this to your clipboard.
Back on the VPS, create a .ssh
directory for the circleci
user:
- sudo mkdir /home/circleci/.ssh
Here you’ll add the public key you copied from the local machine into a file called authorized_keys
:
- sudo nano /home/circleci/.ssh/authorized_keys
Add the copied public key here, save the file, and exit the text editor.
Give the circleci
user its directory permissions so that it doesn’t run into permission issues during deployment.
- sudo chown -R circleci:circleci /home/circleci
Verify if you can log in as the new user by using the private key. Open a new terminal on your local system and run:
- ssh circleci@your_server_ip -i ~/.ssh/circleci
You will now log in as the circleci
user into your VPS. This shows that the SSH connection is successful. Next, you will connect your GitHub repo to CircleCI.
In this step, you’ll connect your GitHub account to your CircleCI account and add the circleci-test
project for CI/CD. If you signed up with your GitHub account, then your GitHub will be automatically linked with your CircleCI account. If not, head over to https://circleci.com/account
and connect it.
To add your circleci-test
project, navigate to your CircleCI project dashboard at https://app.circleci.com/projects/project-dashboard/github/your_username
:
Here you will find all the projects from GitHub listed. Click on Set Up Project for the project circleci-test. This will bring you to the project setup page:
You’ll now have the option to set the config for the project, which you have already set in the repo. Since this is already set up, choose the Use Existing Config option. This will bring up a popup box confirming that you want to build the pipeline:
From here, go ahead and click on Start Building. This will bring you to the circleci-test
pipeline page. For now, this pipeline will fail. This is because you must first update the SSH keys for your project.
Navigate to the project settings at https://app.circleci.com/settings/project/github/your_username/circleci-test and select the SSH Keys section on the left.
Retrieve the private key named circleci that you created earlier by running the following on your local machine:
- cat ~/.ssh/circleci
Copy the output from this command.
Under the Additional SSH Keys section, click on the Add SSH Key button.
This will open up a window asking you to enter the hostname and the SSH key. Enter a hostname of your choice, and add in the private SSH key that you copied from your local environment.
CircleCI will now be able to log in to the VPS as the new circleci user using this key.
The last step is to provide the username and IP of the VPS to CircleCI. In the same Project Settings page, go to the Environment Variables tab on the left:
Add an environment variable named USER with a value of circleci, and another named IP with the IP address of your VPS (or the domain name of your VPS, if you have a DNS record).
Once you’ve created these variables, you have completed the setup needed for CircleCI. Next, you will give the circleci user access to GitHub via SSH.
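For reference, these variables are consumed by the deploy job in the repo’s .circleci/config.yml. The sketch below is a minimal, hypothetical version of such a job, assuming a plain SSH deploy step; your actual config from earlier in the tutorial may use different image and job names:

```yaml
# Hypothetical sketch of a deploy job that uses the USER and IP
# environment variables together with the SSH key added above.
version: 2.1
jobs:
  deploy:
    machine:
      image: ubuntu-2004:current
    steps:
      - run:
          name: Deploy over SSH
          command: ssh -o StrictHostKeyChecking=no $USER@$IP "./deploy.sh"
workflows:
  deploy:
    jobs:
      - deploy
```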
You now need to provide a way for the circleci user to authenticate with GitHub so that it can perform git operations like git pull.
To do this, you will create an SSH key for this user to authenticate against GitHub.
Connect to the VPS as the circleci user:
- ssh circleci@your_server_ip -i ~/.ssh/circleci
Create a new SSH key pair with no passphrase:
- ssh-keygen -t rsa
Then output the public key:
- cat ~/.ssh/id_rsa.pub
Copy the output, then head over to your circleci-test GitHub repo’s deploy key settings at https://github.com/your_username/circleci-test/settings/keys.
Click on Add deploy key to add the copied public key. Fill the Title field with your desired name for the key, then add the copied public key in the Key field. Finally, click the Add key button to add the key to the repository.
Now that the circleci user has access to your GitHub repository, you’ll use this SSH authentication to set up your project.
To set up the project, you will clone the repo and perform its initial setup on the VPS as the circleci user.
On your VPS, run the following command:
- git clone git@github.com:your_username/circleci-test.git
Navigate into it:
- cd circleci-test
First, install the dependencies:
- npm install
Now test the app out by running the server you built:
- node app.js
Head over to your browser and visit http://your_vps_ip:8080. You will see the output Hello World!
Stop this process with CTRL+C and use pm2 to run this app as a background process instead.
Install pm2 so that you can run the Node app as an independent process. pm2 is a versatile process manager written in Node.js. Here it will keep the sample Node.js project running as an active process even after you log out of the server. You can read more about this in the How To Set Up a Node.js Application for Production on Ubuntu 18.04 tutorial.
- npm install -g pm2
Note: On some systems such as Ubuntu 18.04, installing an npm package globally can result in a permission error, which will interrupt the installation. Since it is a security best practice to avoid using sudo with npm install, you can instead resolve this by changing npm’s default directory. If you encounter an EACCES error, follow the instructions in the official npm documentation.
You can use the pm2 start command to run the app.js file as a Node process. Name it app using the --name flag so you can identify it later:
- pm2 start app.js --name "app"
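As an aside, pm2 can also read these options from an ecosystem file instead of CLI flags. This is a minimal sketch of pm2’s standard ecosystem.config.js format, not a file this tutorial requires:

```javascript
// ecosystem.config.js -- started with `pm2 start ecosystem.config.js`
module.exports = {
  apps: [
    {
      name: 'app',     // matches the name used by `pm2 restart app`
      script: 'app.js' // the entry point of the sample project
    }
  ]
};
```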
You will also need to provide the deployment instructions. These commands will run every time the circleci user deploys the code.
Head back to the home directory, since that is where the circleci user will land after a successful login:
- cd ~
Go ahead and create the deploy.sh file, which will contain the deploy instructions:
- nano deploy.sh
You will now use a Bash script to automate the deployment:
#!/bin/bash
# Replace this with the path of your project on the VPS
cd ~/circleci-test
# Pull from the main branch
git pull origin main
# Followed by the commands you used to set up the project manually
npm install
export PATH=~/.npm-global/bin:$PATH
source ~/.profile
pm2 restart app
This will automatically change the working directory to the project root, pull the code from GitHub, install the dependencies, then restart the app. Save and exit the file.
Make this file executable by running:
- chmod u+x deploy.sh
Now head back to your local machine and make a quick change to test it out. Change into your project directory:
- cd circleci-test
Open up your app.js file:
- nano app.js
Now update the res.write line so the file reads as follows:
const http = require('http');
http.createServer(function (req, res) {
res.write('Foo Bar!');
res.end();
}).listen(8080, '0.0.0.0');
Save the file and exit the text editor.
Add this change and commit it:
- git add .
- git commit -m "modify app.js"
Now push this to your main branch:
- git push origin main
This will trigger a new pipeline for deployment. Navigate to https://app.circleci.com/pipelines/github/your_username to view the pipeline in action.
Once it’s successful, refresh the browser at http://your_vps_ip:8080. Foo Bar! will now render in your browser.
These are the steps to integrate CircleCI with your GitHub repository and a Linux-based VPS. You can modify deploy.sh with more specific instructions related to your project.
If you would like to learn more about CI/CD, check out our CI/CD topic page. For more on setting up workflows with CircleCI, head over to the CircleCI documentation.
]]>But as soon as I set my GitHub repository to private, I cannot deploy. How can I accomplish this? I prefer to keep my source code private.
Thanks in advance, and sorry if this has already been asked (couldn’t find it).
]]>I need to commit an empty directory to my Git project, but when I create a new directory with:
- mkdir my_dir
And then check the status with:
- git status
Git says that there is nothing to commit, so running git add . does not do anything.
How can I add an empty directory/folder to my Git repository?
The Visual Studio Code (VS Code) editor has become one of the most popular editors for web development. It owes this popularity to its many built-in features, including source control integration, namely with Git. Harnessing the power of Git from within VS Code can make your workflow more efficient and robust.
In this tutorial, you will explore source control integration in VS Code using Git.
To complete this tutorial, you will need the following:
The first thing you need to do to take advantage of source control integration is initialize your project as a Git repository.
Open Visual Studio Code and access its built-in terminal. You can open it with the keyboard shortcut CTRL + ` on Linux, macOS, or Windows.
In the terminal, create a directory for a new project and change into that directory:
- mkdir git_test
- cd git_test
Then create a Git repository:
- git init
You can also do this in Visual Studio Code by opening the Source Control tab (the icon looks like a fork in the road) in the left-hand panel:
Then click the Open Folder button:
Clicking this button opens your file explorer at the current directory. Select your preferred project directory and click Open.
Then click Initialize Repository:
If you now look at your file system, you will see that it contains a .git directory. To confirm this, use the terminal to navigate to your project directory and list its contents:
- ls -la
You will see the .git directory that was created:
- Output.
- ..
- .git
This means the repository has been initialized, and you should now add an index.html file to it.
Once you have done that, the Source Control panel will show the letter U next to your new file’s name. U stands for untracked: a file that is new or changed but has not yet been added to the repository:
You can click the plus (+) icon next to the index.html file to have the repository track the file.
After that, the letter A will appear next to the file. A stands for a new file that has been added to the repository.
To commit your changes, enter a commit message in the input field at the top of the Source Control panel, then click the check icon to commit the file to the repository.
Afterward, you will see that there are no pending changes.
Now add some content to the index.html file.
You can use an Emmet shortcut to generate an HTML5 skeleton in VS Code by pressing ! followed by the Tab key. Then add something in the <body> section, such as an <h1> heading, and save the file.
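For reference, the Emmet expansion produces an HTML5 skeleton along these lines (the exact attributes may vary slightly between VS Code versions); the <h1> line stands in for the example heading added in this step:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
</head>
<body>
  <h1>Hello from VS Code</h1>
</body>
</html>
```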
In the Source Control panel you will see that your file has changed. The letter M appears next to the file name, indicating that the file has been modified:
For practice, let’s commit this change to the repository as well.
Now that you are familiar with the Source Control panel, let’s move on to interpreting the gutter indicators.
In this step, we will look at what is called the gutter in VS Code. The gutter is the narrow area to the right of the line number.
If you have used code folding before, the fold and unfold icons are located in the gutter.
To start, make a small change to the index.html file, such as editing the content inside the <h1> tag. Once you have done this, you will see the changed line marked in the gutter with a blue vertical bar. The blue vertical bar means that the corresponding line of code has been changed.
Now try deleting a line of code. You can delete one of the lines in the <body> section of your index.html file. Notice that a red triangle now appears in the gutter. The red triangle indicates a line or group of lines that has been deleted.
Next, add a new line at the end of the <body> section and note the green bar. The vertical green bar indicates a line of code that has been added.
This example shows the gutter indicators for a changed line, a deleted line, and an added line:
VS Code also lets you view the differences between versions of a file. Normally you would need to download a separate diff tool for this, so this built-in feature can make your work more efficient.
To view a diff, open the Source Control panel and double-click a changed file. In this case, double-click the index.html file. A typical diff view will open, with the previously committed version of the file on one side and the current version on the other.
This example shows that a line has been added in the current version:
You can use the bottom bar to create and switch branches. In the bottom left of the editor you will see the source control icon (the one that looks like a fork in the road), usually followed by master or the name of the branch you are currently working on.
To create a branch, click the branch name. A menu will open where you can create a new branch:
Create a new branch named test.
Now make a change to the index.html file that identifies the new test branch, such as adding the text this is the new test branch.
Commit these changes to the test branch. Then click the branch name in the bottom left corner again to switch back to the master branch.
After switching back to the master branch, you will notice that the text this is the new test branch committed to the test branch is not present on the master branch.
This tutorial will not cover it in depth, but the Source Control panel also gives you access to working with remote repositories. If you have worked with remote repositories before, you will recognize familiar commands such as pull, sync, publish, stash, and so on.
Not only does VS Code ship with many built-in features for Git, there are also several very popular extensions that add further functionality.
This extension lets you view Git Blame information in the status bar for the currently selected line.
The word blame may sound intimidating, but don’t worry: the Git Blame extension is about practicality, not about blaming anyone for anything bad. The idea of “blaming” someone for a code change is not about assigning literal fault, but about identifying the right person to ask questions about particular parts of the code.
As you can see in the screenshot, this extension displays a subtle message in the bottom toolbar indicating who changed the current line of code and when the change was made.
While you can view current changes, run diffs, and manage branches with VS Code’s built-in features, they do not offer an in-depth view of your Git history. The Git History extension solves this problem.
As you can see in the image below, this extension lets you thoroughly explore the history of a file, an author, a branch, and more. To activate the Git History window shown below, right-click a file and choose Git: View File History:
You will also be able to compare branches and commits, create branches from commits, and more.
GitLens supercharges the Git capabilities built into Visual Studio Code. It helps you visualize code authorship at a glance via Git Blame annotations and code lens, navigate and explore Git repositories from within VS Code, gain valuable insights via powerful comparison commands, and much more.
The GitLens extension is one of the most powerful and most popular in the developer community. In most cases, its features can replace both of the extensions discussed above.
To the right of the line you are currently working on, a subtle message shows who made the change, when it was made, and the associated commit message. Hovering over this message brings up a popup with additional information, including the code change itself, the timestamp, and more.
This extension also provides many features related to Git history. You can easily access a wealth of information, including file history, diffs against previous versions, and specific revisions. To open these options, click the text in the bottom status bar that shows the author who changed the current line of code and when it was changed.
This will open the following window:
This extension is packed with features, and it will take time to explore everything it has to offer.
In this tutorial, you learned how to use source control integration in VS Code. VS Code provides many features that previously required downloading a separate tool.
]]>O Visual Studio Code (VS Code) tornou-se um dos editores mais populares disponíveis para o desenvolvimento Web. Ele ganhou toda essa popularidade graças às suas muitas funcionalidades integradas, incluindo a integração do controle de código-fonte, sendo esta, feita com o Git. Aproveitar o poder do Git dentro do VS Code pode tornar seu fluxo de trabalho mais eficiente e robusto.
Neste tutorial, você irá explorar o uso da Integração de controle de código-fonte no VS Code com o Git.
Para concluir este tutorial, você precisará do seguinte:
A primeira coisa que você precisa fazer para aproveitar a integração do controle de código-fonte é inicializar um projeto como um repositório do Git.
Abra o Visual Studio Code e acesse o terminal integrado. Abra ele usando o atalho no teclado CTRL + `
no Linux, macOS ou Windows.
Em seu terminal, crie um diretório para um novo projeto e vá até esse diretório:
- mkdir git_test
- cd git_test
Em seguida, crie um repositório do Git:
- git init
Outra maneira de fazer isso com o Visual Studio Code é abrindo a guia de controle de código-fonte (o ícone se parece com uma divisão na estrada) no painel esquerdo:
Em seguida, selecione Open Folder:
Isso irá abrir seu explorador de arquivos no diretório atual. Selecione o diretório de projeto de sua preferência e clique em Open.
Em seguida, selecione Initialize Repository:
Se você verificar agora seu sistema de arquivos, verá que ele inclui um diretório .git
. Para fazer isso, use o terminal para navegar até o diretório do seu projeto e liste todo o seu conteúdo:
- ls -la
Você verá o diretório .git
que foi criado:
- Output.
- ..
- .git
Agora que o repositório foi inicializado, adicione um arquivo chamado index.html
.
Depois de fazer isso, você verá no painel Source Control que seu arquivo novo aparece com a letra U ao seu lado. U representa untracked file (arquivo não rastreado), o que representa um arquivo novo ou alterado, mas que ainda não foi adicionado ao repositório:
Agora, clique no ícone mais (+) ao lado da listagem de arquivos do index.html
para rastrear o arquivo pelo repositório.
Depois de adicionado, a letra ao lado do arquivo irá mudar para um A. A letra A representa um novo arquivo que foi adicionado ao repositório.
Para fazer o commit com suas alterações, digite uma mensagem de confirmação na caixa de entrada no topo do painel Source Control. Em seguida, clique no ícone confirma para fazer o commit.
Depois de fazer isso, ficará evidente que não há alterações pendentes.
Em seguida, adicione um pouco de conteúdo ao seu arquivo index.html
.
Use um atalho do Emmet para gerar um esqueleto HTML5 no VS Code pressionando o !
seguido pela tecla Tab
. Vá em frente e adicione algo no <body>
, como um título <h1>
, e salve.
No painel do controle de código-fonte, você verá que seu arquivo foi alterado. Ele irá exibir a letra M ao lado dele, que representa um arquivo ter sido modificado:
Para fins de prática, vá em frente e também confirme essa alteração.
Agora que está familiarizado com o painel do controle de código-fonte, vamos seguir em frente para a interpretação de indicadores de medianiz.
Neste passo, você irá dar uma olhada naquilo que chamamos de “Medianiz” no VS Code. A medianiz é a área estreita à direita do número da linha.
Se você já usou o dobramento de código antes, os ícones maximize e minimize ficam localizados na medianiz.
Vamos começar fazendo uma pequena alteração no seu arquivo index.html
, como uma mudança no conteúdo dentro da etiqueta <h1>
. Depois de fazer isso, você irá notar uma marca azul vertical na medianiz da linha que você mudou. A marca azul vertical significa que a linha de código correspondente foi alterada.
Agora, tente excluir uma linha de código. Exclua uma das linhas na seção <body>
do seu arquivo index.html
. Observe agora na medianiz que há um triângulo vermelho. O triângulo vermelho indica uma linha ou grupo de linhas que foi excluída.
Por fim, no final da sua seção <body>
, adicione uma nova linha de código e note a barra verde. A barra vertical verde indica uma linha de código que foi adicionada.
Este exemplo retrata os indicadores de medianiz para uma linha modificada, uma linha removida e uma nova linha:
O VS Code também tem a capacidade de executar uma comparação em um arquivo. Normalmente, seria necessário baixar uma ferramenta de comparação separada para fazer isso, de forma que essa funcionalidade integrada pode ajudar a aumentar a eficiência do seu trabalho.
Para visualizar uma comparação, abra o painel do controle de código-fonte e clique duas vezes em um arquivo alterado. Neste caso, clique duas vezes no arquivo index.html
. Você será levado para uma visualização de comparação típica, com a versão atual do arquivo à esquerda e a versão previamente confirmada do arquivo à direita.
Este exemplo mostra que uma linha foi adicionada na versão atual:
Indo para a barra inferior, você tem a capacidade de criar e trocar ramificações. Observando na região mais baixa e à esquerda do editor, você deve ver o ícone do controle de código-fonte (aquele ícone que se parece com uma divisão na estrada) muito provavelmente seguido por master
ou o nome da ramificação de trabalho atual.
Para criar um branch, clique no nome do branch. Um menu deve aparecer dando-lhe a capacidade de criar um novo branch:
Vá em frente e crie uma nova ramificação chamada test
.
Agora, faça uma alteração em seu arquivo index.html
que indique que você está no novo branch
test
.
Confirme essas alterações na ramificação test
. Em seguida,clique no nome da ramificação no canto inferior esquerdo novamente e volte para a ramificação master
.
Depois de retornar para a ramificação master
, você irá notar que o texto this is the new test branch
(esta é a nova ramificação de teste) confirmado na ramificação test
não está mais presente.
Esse tutorial não irá abordar este tema em profundidade, mas através do painel do controle de código-fonte, o trabalho com repositórios remotos está disponível. Se você já trabalhou com um repositório remoto, irá notar alguns comandos familiares como pull, sync, publish, stash, etc.
O VS Code vem com muitas funcionalidades integradas para o Git, mas, além delas, também existem diversas outras extensões bastante populares que adicionam funcionalidades extras.
Essa extensão dá a capacidade de visualizar informações do Git Blame na barra de status para a linha atualmente selecionada.
Isso pode parecer intimidante, mas não se preocupe, a extensão do Git Blame tem muito mais a ver com a praticidade do que fazer qualquer pessoa se sentir mal. A ideia de “blaming” (culpar) alguém por uma alteração no código não tem a ver com apontar o erro para a pessoa, mas sim identificar o indivíduo certo a se questionar a respeito de determinadas partes do código.
Como pode ser observado na captura de tela, essa extensão fornece uma mensagem sutil relacionada à linha atual de código em que você está trabalhando na barra de ferramentas inferior, explicando quem fez a alteração e quando ela foi feita.
Embora seja possível visualizar alterações atuais, executar comparações e gerenciar ramificações com as funcionalidades integradas do VS Code, ele não oferece uma visualização aprofundada em seu histórico do Git. A extensão do Git History resolve esse problema.
Como pode-se ver na imagem abaixo, essa extensão permite explorar meticulosamente o histórico de um arquivo, um determinado autor, uma ramificação, etc. Para ativar a janela do Git History mostrada abaixo, clique com o botão direito em um arquivo e escolha Git: View File History:
Além disso, é possível comparar branches e commits, criar branches de commits e muito mais.
O GitLens incrementa as capacidades do Git integradas no Visual Studio Code. Ele ajuda a visualizar a autoria do código rapidamente através das anotações do Git blame e lentes de código, navegar e explorar repositórios do Git, ganhar informações valiosas via comandos poderosos de comparação, e muito mais.
A extensão Git Lens é uma das mais populares na comunidade e também é a mais poderosa. Na maioria dos casos, ela pode substituir cada uma das duas extensões previamente discutidas com sua funcionalidade.
Para informações de “culpa”, uma mensagem sutil aparece à direita da linha em que você está atualmente trabalhado e informa quem fez a alteração, quando ela foi feita e a mensagem de confirmação associada. Existem algumas informações adicionais que aparecem ao passar o mouse nesta mensagem, como a alteração do código em si, o carimbo de data/hora e mais.
Para informações do histórico do Git, essa extensão fornece várias funcionalidades. Você tem acesso fácil a diversas opções, incluindo exibir o histórico de arquivos, realizar comparações com versões anteriores, abrir uma revisão específica, e muito mais. Para abrir essas opções, clique no texto na barra de status inferior que contém o autor que editou a linha de código e há quanto tempo ela foi editada.
Isso irá abrir a seguinte janela:
Essa extensão é lotada de funcionalidades, e pode levar um tempo para aprender sobre tudo o que ela tem a oferecer.
Neste tutorial, você explorou como usar a integração do controle de código-fonte com o VS Code. O VS Code é capaz de lidar com muitas funcionalidades que anteriormente exigiriam baixar uma ferramenta separada.
]]>Visual Studio Code (VS Code) est devenu l’un des éditeurs les plus populaires dans le domaine du développement web. Il a acquis une telle popularité grâce à ses nombreuses fonctionnalités intégrées telles que l’intégration du contrôle de source, notamment avec Git. Exploiter la puissance de Git à partir de VS Code peut rendre votre flux de travail plus efficace et plus robuste.
Dans ce tutoriel, vous explorerez l’utilisation de l’intégration du contrôle de source dans VS Code avec Git.
Pour terminer ce tutoriel, vous aurez besoin des éléments suivants :
La première chose à faire pour profiter de l’intégration du contrôle de source est d’initialiser un projet en tant que référentiel Git.
Ouvrez Visual Studio Code et accédez au terminal intégré. Vous pouvez l’ouvrir en utilisant le raccourci clavier CTRL + `
sur Linux, macOS ou Windows.
Dans votre terminal, créez un répertoire pour un nouveau projet et changez dans ce répertoire :
- mkdir git_test
- cd git_test
Ensuite, créez un référentiel Git :
- git init
Une autre façon d’y parvenir avec Visual Studio Code consiste à d’ouvrir l’onglet Source Control (l’icône ressemble à une fissure sur une route) dans le panneau de gauche :
Ensuite, sélectionnez Open Folder:
Cela ouvrira votre explorateur de fichiers au répertoire actuel. Sélectionnez le répertoire de projets préféré et cliquez sur Open.
Ensuite, sélectionnez Initialize Repository:
Si vous vérifiez maintenant votre système de fichiers, vous verrez qu’il comprend un répertoire .git
. Pour ce faire, utilisez le terminal pour naviguer dans le répertoire de votre projet et lister tous les contenus :
- ls -la
Vous verrez le répertoire .git
qui a été créé :
- Output.
- ..
- .git
Maintenant que le repo a été initialisé, ajoutez un fichier appelé index.html
.
Après avoir fait cela, vous verrez dans le panneau Source Control que votre nouveau fichier apparaît avec la lettre U à côté. U signifie untracked file, c’est-à-dire un fichier qui est nouveau ou modifié (ou fichier non traqué), mais qui n’a pas encore été ajouté au référentiel :
Vous pouvez maintenant cliquer sur l’icône plus (+) de la liste des fichiers index.html
pour suivre le fichier par le référentiel.
Une fois ajoutée, la lettre à côté du fichier se transforme en A. A représente un nouveau fichier qui a été ajouté au référentiel.
Pour valider vos modifications, tapez un message de commit dans la zone de saisie située en haut du panneau Source Control. Ensuite, cliquez sur l’icône check pour effectuer le commit.
Vous constaterez alors qu’il n’y a pas de changements en cours.
Ensuite, ajoutez un peu de contenu à votre fichier index.html
.
Vous pouvez utiliser un raccourci Emmet pour générer un squelette HTML5 en VS Code en appuyant sur le bouton !
touche suivie de la touche Tab
. Allez-y et ajoutez quelque chose dans le <body>
comme un en-tête <h1>
et le sauvegarder.
Dans le panneau Source Control, vous verrez que votre fichier a été modifié. La lettre M apparaîtra à côté et représentant un dossier qui a été modifié :
Pour la pratique, allez-y et validez aussi ce changement.
Maintenant que vous êtes familiarisé avec le panneau de contrôle de source, vous allez passer à l’interprétation des indicateurs de gouttière.
Au cours de cette étape, vous examinerez ce que l’on appelle la gouttière en VS Code. La gouttière est la zone étroite située à droite du numéro de ligne.
Si vous avez déjà utilisé le pliage de code, les icônes de maximise et de minimise sont situées dans la gouttière.
Commençons par apporter une petite modification à votre fichier index.html
, comme par exemple une modification du contenu de la balise <h1>
. Vous remarquerez alors une marque verticale bleue dans la gouttière de la ligne que vous avez modifiée. La marque bleue verticale signifie que la ligne de code correspondante a été modifiée.
Maintenant, essayez d’effacer une ligne de code. Vous pouvez supprimer une des lignes de la section <body>
de votre fichier index.html
. Remarquez maintenant dans la gouttière qu’il y a un triangle rouge. Le triangle rouge signifie qu’une ligne ou un groupe de lignes a été supprimé.
Enfin, au bas de votre section <body>
ajoutez une nouvelle ligne de code et remarquez la barre verte. La barre verte verticale signifie qu’une ligne de code a été ajoutée.
Cet exemple illustre les indicateurs de gouttière pour une ligne modifiée, une ligne supprimée et une nouvelle ligne :
VS Code a également la capacité d’effectuer un diff sur un fichier. Pour ce faire, vous devez généralement télécharger un outil de comparaison distinct. Cette fonction intégrée peut donc vous aider à travailler plus efficacement.
Pour afficher un diff, ouvrez le panneau de contrôle de source et double-cliquez sur un fichier modifié. Dans ce cas, double-cliquez sur le fichier index.html
. Vous serez amené(e) à une vue de diff typique avec la version actuelle du fichier sur la gauche et la version validée précédente du fichier sur la droite.
Cet exemple montre qu’une ligne a été ajoutée dans la version actuelle :
En vous déplaçant vers la barre inférieure, vous avez la possibilité de créer et de changer de branche. Si vous regardez tout en bas à gauche de l’éditeur, vous devriez voir l’icône de contrôle de source (celle qui ressemble à une fissure dans la route) suivie très probablement par master
ou le nom de la branche en cours de fonctionnement.
Pour créer une branche, cliquez sur le nom de cette branche. Un menu devrait s’afficher pour vous permettre de créer une nouvelle branche :
Allez-y et créez une nouvelle branche appelée test
.
Maintenant, modifiez votre fichier index.html
qui signifie que vous êtes dans la nouvelle branche test
, comme l’ajout du texte this is the new test branch
.
Validez ces changements à la branche test
. Ensuite, cliquez à nouveau sur le nom de la branche en bas à gauche pour revenir à la branche master
.
Après être retourné(e) à la branche master
, vous remarquerez que le texte this is the new test branch
appliqué à la branche test
n’est plus présent.
This tutorial won’t cover it in depth, but through the Source Control panel you also have access to work with remote repositories. If you have worked with a remote repository before, you will notice familiar commands such as pull, sync, publish, stash, and so on.
Not only does VS Code ship with many built-in features for Git, there are also several very popular extensions that add extra functionality.
This extension provides the ability to view Git Blame information in the status bar for the currently selected line.
That may sound intimidating, but don’t worry: the Git Blame extension is far more about practicality than assigning guilt. The idea of “blaming” someone for a code change is less about shaming them than about finding the right person to ask questions about certain pieces of code.
As you can see in the screenshot, this extension displays a subtle message in the bottom toolbar, tied to the line of code you are currently working on, explaining who made the change and when.
Although you can view current changes, perform diffs, and manage branches with VS Code’s built-in features, they do not provide an in-depth view of your Git history. The Git History extension solves that problem.
As you can see in the image below, this extension lets you thoroughly explore the history of a file, a given author, a branch, and more. To activate the Git History window shown below, right-click a file and choose **Git: View File History**:
Additionally, you can compare branches and commits, create branches from commits, and more.
GitLens supercharges the Git capabilities built into Visual Studio Code. It helps you visualize code authorship at a glance via Git Blame annotations and code lens, seamlessly navigate and explore Git repositories, gain valuable insights via powerful comparison commands, and much more.
The GitLens extension is one of the most popular in the community, and also the most powerful. In most cases it can replace each of the two previous extensions with its own functionality.
For blame information, a subtle message appears to the right of the line you are currently working on, telling you who made the change, when they made it, and the associated commit message. Additional information appears when you hover over this message, such as the code change itself, the timestamp, and more.
For Git history information, this extension offers many features. You have easy access to plenty of options, including showing file history, performing diffs against previous versions, opening a specific revision, and more. To open these options, click the text in the bottom status bar that names the author who edited the line of code and when it was edited.
This will open the following window:
This extension is packed with functionality, and it will take a while to absorb everything it has to offer.
In this tutorial, you explored how to use Source Control integration with VS Code. VS Code can handle many features that previously would have required downloading a separate tool.
Visual Studio Code (VS Code) has become one of the most popular editors out there for web development. It has gained such popularity thanks to its many built-in features, including Source Control integration, namely with Git. Harnessing the power of Git from within VS Code can make your workflow more efficient and robust.
In this tutorial, you will explore using Source Control integration in VS Code with Git.
To complete this tutorial, you will need the following:
The first thing you need to do to take advantage of Source Control integration is initialize a project as a Git repository.
Open Visual Studio Code and access its built-in terminal. You can open it by using the keyboard shortcut CTRL + ` (backtick) on Linux, macOS, or Windows.
In your terminal, create a directory for a new project and change into that directory:
- mkdir git_test
- cd git_test
Next, create a Git repository:
- git init
Another way to accomplish this in Visual Studio Code is by opening the Source Control tab (the icon looks like a fork in the road) in the left-hand panel:
Next, select Open Folder:
This will open your file explorer to the current directory. Select your preferred project directory and click Open.
Then select Initialize Repository:
If you now check your file system, you will see that it includes a `.git` directory. To confirm this, use the terminal to navigate to your project directory and list all of its contents:
- ls -la
You will see the `.git` directory that was created:
Output
- .
- ..
- .git
Now that the repository has been initialized, add a file named `index.html`.
After doing so, you will see in the Source Control panel that your new file shows up with the letter U next to it. U stands for untracked file, meaning a file that is new or has been changed but has not yet been added to the repository:
You can now click the plus icon (+) next to the `index.html` file listing to have the repository track the file.
Once added, the letter next to the file will change to an A. A indicates a new file that has been added to the repository.
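The same U-to-A transition can be reproduced from the command line; the sketch below uses a throwaway repository (the temp directory and file name are illustrative):

```shell
# Create a scratch repository to observe the status letters.
tmp=$(mktemp -d) && cd "$tmp"
git init -q

touch index.html
git status --short   # prints "?? index.html" -- untracked (U in VS Code)

git add index.html
git status --short   # prints "A  index.html" -- staged new file (A in VS Code)
```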
To commit your changes, type a commit message into the input box at the top of the Source Control panel. Then click the check icon to perform the commit.
Afterwards, you will notice that there are no pending changes.
Next, add some content to your `index.html` file.
You can use an Emmet shortcut to generate an HTML5 skeleton in VS Code by pressing the `!` key followed by the `Tab` key. Add something to the `<body>`, such as an `<h1>` heading, and save the file.
In the Source Control panel, you will see that your file has changed. It will show the letter M next to it, which represents a file that has been modified.
For practice, commit this change as well.
Now that you are familiar with working in the Source Control panel, you will move on to interpreting the gutter indicators.
In this step, you will take a look at what is known as the “gutter” in VS Code. The gutter is the narrow area to the right of the line number.
If you have used code folding before, the maximize and minimize icons are located in the gutter.
Start by making a small change to your `index.html` file, such as changing the content inside the `<h1>` tag. After doing so, you will notice a blue vertical mark in the gutter of the line you changed. The vertical blue mark means the corresponding line of code has been changed.
Now, try deleting a line of code. You can delete one of the lines in the `<body>` section of your `index.html` file. Notice that the gutter now shows a red triangle. The red triangle marks a line or group of lines that has been deleted.
Finally, at the bottom of your `<body>` section, add a new line of code and notice the green bar. The vertical green bar marks a line of code that has been added.
This example shows gutter indicators for a modified line, a removed line, and a new line:
VS Code also has the ability to perform a diff on a file. Typically, you would have to download a separate diff tool to do this, so this built-in feature can help you work more efficiently.
To view a diff, open the Source Control panel and double-click a changed file. In this case, double-click the `index.html` file. You will be taken to a typical diff view, with the current version of the file on the left and the previously committed version of the file on the right.
This example shows that a line has been added in the current version:
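For comparison, here is a rough command-line equivalent of the built-in diff view, run in a scratch repository (the file name and contents are illustrative):

```shell
# Set up a scratch repository with one committed file.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "Sammy"

echo "<h1>Hello</h1>" > index.html
git add index.html
git commit -q -m "initial commit"

# Modify the file, then diff the working copy against the last commit.
echo "<p>new line</p>" >> index.html
git diff index.html   # the added line is shown prefixed with "+"
```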
Moving to the bottom bar, you have the ability to create and switch branches. If you look at the bottom left of the editor, you should see the Source Control icon (the one that looks like a fork in the road), most likely followed by `master` or the name of the current working branch.
To create a branch, click that branch name. A menu should appear, giving you the ability to create a new branch:
Go ahead and create a new branch called `test`.
Now, make a change to your `index.html` file that indicates you are on the new `test` branch, such as adding the text `this is the new test branch`.
Commit those changes to the `test` branch. Then click the branch name in the bottom left again to switch back to the `master` branch.
After switching back to the `master` branch, you will notice that the text `this is the new test branch` committed to the `test` branch is no longer present.
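The same test-branch experiment can be sketched from the integrated terminal; everything below runs in a throwaway repository, and the branch and file names simply mirror the tutorial:

```shell
# Scratch repository with one committed file.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "Sammy"

echo "<h1>Hello</h1>" > index.html
git add index.html && git commit -q -m "initial commit"
start=$(git branch --show-current)   # "master" or "main", depending on your git config

git checkout -q -b test                           # create and switch to "test"
echo "this is the new test branch" >> index.html
git commit -q -am "add test branch text"

git checkout -q "$start"                                   # switch back
grep -c "this is the new test branch" index.html || true   # prints 0: the text only exists on "test"
```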
This tutorial won’t cover it in depth, but through the Source Control panel you also have access to work with remote repositories. If you have worked with a remote repository before, you will notice familiar commands such as pull, sync, publish, stash, and so on.
Not only does VS Code ship with many built-in features for Git, there are also some very popular extensions that add extra functionality.
This extension provides the ability to view Git Blame information in the status bar for the currently selected line.
That may sound intimidating, but don’t worry: the Git Blame extension is about practicality, not making anyone feel bad. The idea of “blaming” someone for a code change is not about shaming them, but about finding the right person to ask questions about certain pieces of code.
As you can see in the screenshot, this extension displays a subtle message in the bottom toolbar, tied to the line of code you are currently working on, explaining who made the change and when.
Although you can view current changes, perform diffs, and manage branches with VS Code’s built-in features, they do not provide an in-depth view of your Git history. The Git History extension solves that problem.
As you can see in the image below, this extension lets you thoroughly explore the history of a file, a particular author, a branch, and more. To activate the Git History window, right-click a file and select Git: View File History:
Additionally, you can compare branches and commits, create branches from commits, and more.
GitLens supercharges the Git capabilities built into Visual Studio Code. It helps you visualize code authorship at a glance via Git Blame annotations and code lens, seamlessly navigate and explore Git repositories, gain valuable insights via powerful comparison commands, and much more.
The GitLens extension is one of the most popular in the community, and also the most powerful. In most ways, it can replace each of the two previous extensions with its own functionality.
For blame information, a subtle message appears to the right of the line you are currently working on, telling you who made the change, when they made it, and the associated commit message. Additional pieces of information appear when you hover over this message, such as the code change itself, the timestamp, and more.
For Git history information, this extension provides many features. You have easy access to plenty of options, including showing file history, performing diffs against previous versions, opening a specific revision, and more. To open these options, click the text in the bottom status bar that names the author who edited the line of code and how long ago it was edited.
This will open the following window:
This extension is packed with functionality, and it will take a while to learn everything it has to offer.
In this tutorial, you explored how to use Source Control integration with VS Code. VS Code can handle many features that previously would have required downloading a separate tool.
If a commit message contains unclear, incorrect, or sensitive information, you can amend it locally and push a new commit with a new message to GitHub. You can also change a commit message to add missing information.
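As a sketch, amending the latest commit message locally might look like the following; the repository, file, and messages are illustrative, and the final force-push is shown only as a comment because it requires a real remote:

```shell
# Scratch repository with one commit whose message contains a typo.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "Sammy"

echo "hello" > README.md
git add README.md
git commit -q -m "inital commit"            # oops: typo in the message

git commit --amend -q -m "initial commit"   # rewrite the last commit message
git log -1 --pretty=%s                      # prints: initial commit

# If the old commit was already pushed, the amended commit has a new hash,
# so the branch must be force-pushed, e.g.:
#   git push --force-with-lease origin main
```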
While this seemed to work at first, I am having some issues now. The folder in www/codepixelz is created. But when I clone the bare repo to local, add my files, and push, the push reaches git; however, the folder under “www/codepixelz” is deleted, and I can no longer access codepixelz.tech/xyz. What could be the issue?
The code to create repo is as below:
if [ "$#" -eq 1 ]; then
if [ -d "$1.git" ]; then
# Control will enter here if $DIRECTORY exists.
echo "Repository $1.git already exists"
exit 0
fi
echo "*Setting up repo and web root folder for product $1"
if [ -d "/var/www/codepixelz/$1" ]; then
echo "Project already exists /product/$1"
exit 0
fi
echo "Creating git repo $1.git ..."
mkdir $1.git
mkdir /var/www/codepixelz/$1
chown dev:dev -R /var/www/codepixelz/$1
chmod 775 -R /var/www/codepixelz/$1
cd $1.git
git init --bare
git --bare update-server-info
git config core.bare false
git config receive.denycurrentbranch ignore
git config core.worktree /var/www/codepixelz/$1
echo "#!/bin/sh" | tee hooks/post-receive
echo "git checkout -f" | tee -a hooks/post-receive
echo "rm -rf /var/www/codepixelz/$1" | tee -a hooks/post-receive
chmod +x hooks/post-receive
echo "Created git repo $1.git"
echo "Clone: git clone ssh://dev@codepixelz.tech/home/dev/git/$1.git"
echo "Add into existing git repo: git remote add origin ssh://dev@codepixelz.tech/home/dev/git/$1.git"
echo "/home/dev/git/$1.git created on " >> gitLog $(date)
elif [ $project_dir -eq 2 ] ; then
if [ -d "/var/www/codepixelz/$1" ]; then
echo "Project already exists./codepixelz/$1"
exit 0
fi
echo "Creating git repo $1.git ..."
mkdir $1.git
mkdir /var/www/codepixelz/$1
chmod 775 -R /var/www/codepixelz/$1
cd $1.git
git init --bare
git --bare update-server-info
git config core.bare false
git config receive.denycurrentbranch ignore
git config core.worktree /var/www/codepixelz/$1
echo "#!/bin/sh" | tee hooks/post-receive
echo "git checkout -f" | tee -a hooks/post-receive
chmod +x hooks/post-receive
echo "Created git repo $1.git"
echo "Clone: git clone ssh://dev@codepixelz.tech/home/dev/git/$1.git"
echo "Add into existing git repo: git remote add origin ssh://dev@codepixelz.tech/home/dev/git/$1.git"
echo "/home/dev/git/$1.git created on " >> gitLog $(date)
else
echo "Enter project name to create."
fi
The error that I get is:
fatal: sha1 file ‘<stdout>’ write error: Broken pipe
error: remote unpack failed: unable to create temporary object directory
error: failed to push some refs to ‘ssh://dev@codepixelz.tech/home/dev/git/xyz.git’
Any lead would be of great help.
Thanks.
git checkout -b branch_name
and then realize that I’ve made a typo, or come up with a better name for the branch later on.
If I have just created the branch, that is fine, since I can simply create a new one, but sometimes I notice this only after a couple of commits.
So here’s how you could rename a local Git branch via your command line!
If you’ve built a static website in a local environment, the next step is to decide how to publish it to the web. One way to publish your site is to deploy it as an application through DigitalOcean App Platform, which offers free hosting for three static sites. Deploying applications often requires setting up underlying server infrastructure. App Platform automates this work, allowing you to deploy your static website to the cloud from a GitHub repository.
This tutorial will guide you through all the steps of deploying a static website to the cloud using App Platform, GitHub (a software development platform), and GitHub’s Desktop Application. The instructions here should work for any static website you’ve built in a local environment, including websites created with our tutorial series How To Build a Website With HTML. We will also walk you through how to use our sample HTML website for this tutorial if you don’t have a website ready to deploy, or would just like to test out App Platform. By the end of this tutorial, you should have a published website and an understanding of how to deploy websites to the cloud from a GitHub repository with App Platform.
Note: If you already have a GitHub account and a GitHub repository for your website project, you can skip to Step 6 for instructions on getting started with App Platform.
Deploy your frontend applications from GitHub using DigitalOcean App Platform. Let DigitalOcean focus on scaling your app.
If you don’t already have a GitHub account, you’ll need to register for one so that you can create a GitHub repository for your project. GitHub is a software development platform that allows developers to host, share, and collaborate on coding projects. To create a free account, sign up on GitHub’s homepage.
Once you have confirmed your account, you are ready to proceed to the next step. Remember your login credentials, as you’ll need them in Step 3.
Many developers use the command-line interface (CLI) tool Git to interact with GitHub, but you can also use the GitHub Desktop app if you are not familiar with using your computer’s terminal. (If you’d like to learn more about using the CLI tool Git, you can visit our guide How To Contribute To Open Source: Getting Started With Git.) This tutorial will proceed with instructions for using the GitHub Desktop app.
Download the GitHub Desktop app by following the instructions on the GitHub Desktop homepage. Next, open the downloaded application file and complete the installation process as instructed.
After the installation is complete, you are ready to proceed to the next step.
In this step, we’ll use the GitHub Desktop app to create a local repository on your machine for your website project.
First, open the GitHub Desktop app. Click the blue “Sign in to GitHub.com” button:
Follow the prompts to connect the GitHub Desktop app with your GitHub account. Once the Desktop app is connected with your account, a window should appear with options for getting started. Click on the “Create a New Repository on Your Hard Drive” button (third large button from the top):
Next, you will be prompted to fill out the details of your new repository:
In this window, enter the following information:
You can leave the automatically-generated Local Path as it is. This is where GitHub Desktop will store your project on your local machine.
If you’d like to add a file to store your site’s documentation, you can check the option to initialize the repository with a README. In general, it is good practice to create a README for your repositories, but you can also leave this option unchecked for the purpose of the tutorial.
The Git Ignore option allows you to select a template for ignoring certain files. The “License” option allows you to choose an open source license for your work. To learn more about different open source license options, you can visit the Open Source Initiative’s list of Licenses and Standards. If you don’t know what to select for these options, you can keep “none” selected for both options for the purpose of this tutorial.
Click “Create Repository.” The Desktop app should now show the details of your newly-created repository. We’ll go over what these different panels display in the next step.
Once your repository is created, you should be ready to proceed to the next step.
In this step, we’ll copy the files of your website project and place them in the newly-created GitHub repository folder.
Note: If you want to use our sample website to explore App Platform, download the zip file from the GitHub repository by clicking the green “Code” button in the upper right and selecting the option to “Download ZIP”:
Once the ZIP file has finished downloading, unzip the file to access the folder that contains the website files. This folder will serve as your website project’s working folder in the steps below.
First, on your desktop, open your website project’s working folder, or the folder that is currently storing all of your website project’s files and folders. In this example, the working folder is called “html-site”.
Next, find and open the newly created repository folder that you named in Step 3. In this example, the repository folder is called “my-static-site”.
Copy the files from your working folder to your repository folder. To copy the files, select all of your website files, then right-click (on Windows) or CTRL + left-click (on Mac) and select “Copy X items”. Then, to paste copies of your files into the repository folder, open the repository folder, right-click (on Windows) or CTRL + left-click (on Mac), and select “Paste X items”:
After pasting the files into your repository folder, the GitHub Desktop app should display the files in the “Changes” panel on the left side of the app window:
If you are using a macOS operating system, don’t be alarmed if you see the addition of a .DS_Store file in the “Changes” panel. This is an automatically generated file that stores information about the folder and should not affect your project.
Once your folders are in your local repository folder, you are ready to save your changes to the repository. On GitHub, saved changes are called commits. Each time you commit changes, you must make a comment describing your changes.
To commit your changes, add a comment in the field that says “Summary (required)” and any additional info you’d like to include in the field “Description” in the bottom left corner of the Desktop app:
Then click the blue “Commit to master” button located below the text fields. This action will save your changes to the “main” branch of your project. Note that GitHub previously used the word “master” instead of “main” for the primary branch of users’ repositories. Please see GitHub’s information on renaming these conventions and their timeline for rolling these changes out. On GitHub, the main or master branch is the definitive branch of the project, which can be copied to work on different versions of the same repository simultaneously. To learn more about branches, you can visit our tutorial How To Use Git Branches or GitHub’s documentation.
Once you have committed your changes to the main branch, your files in the left hand panel will disappear as this panel only displays files that contain uncommitted changes. You should receive a message in the bottom left corner noting that your commit was successful.
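Under the hood, GitHub Desktop’s commit flow corresponds to staging and committing with Git. A minimal sketch in a scratch repository (the file below stands in for your copied website files):

```shell
# Scratch repository standing in for your local project repository.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "Sammy"

echo "<h1>My site</h1>" > index.html          # stands in for your copied site files
git add -A                                    # stage everything in the "Changes" panel
git commit -q -m "Add static website files"   # the required summary line
git log --oneline -1                          # the commit is now recorded
```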
In the last step, you committed your changes to the repository on your local machine. This repository, however, has not yet been pushed to your GitHub account. In this step, we will push this commit to your repository on GitHub, which will add your website files to your GitHub repository.
To publish your local repository to your GitHub repository, click on the blue “Publish repository” button:
Once you click the button, a modal will appear asking you to fill out the name and description of your repository. Fill out your details. You may keep the repository private if you wish.
After filling out your details, click the blue “Publish repository” button. Once your files finish uploading, your repository should be available on your account on GitHub. To check, visit the relevant URL, which will have the following format:
https://github.com/your_github_account_name/your_repository_name
Be sure to replace the highlighted text with your account name and repository name. You should receive a webpage that shows your repository’s files:
Now that your website files are hosted on GitHub, we can use them with App Platform. First, however, we’ll need to create a DigitalOcean account.
To create a DigitalOcean account, visit the sign up page and choose among the following options:
If you choose to use an email address and password, you will need to verify your email address using the email automatically sent to you.
Note that you will need to enter a payment method to verify your identity and keep spammers out. You will not be charged. You may see a temporary pre-authorization charge to verify the card, which will be reversed within a week.
Once you have verified your account, you should be able to access App Platform. For complete documentation about signing up for a DigitalOcean account, please visit our guide Sign up for a DigitalOcean Account.
You are now ready to proceed to the next step.
In this step, we’ll deploy our static website with App Platform.
First, visit the DigitalOcean App Platform portal and click on the blue “Launch Your App” button:
On the next page, you will be prompted to select your GitHub repository. Since you have not yet connected your App Platform account to your GitHub account, you’ll need to click on the “Link Your GitHub Account” button:
You will then be prompted to sign into your GitHub account (if you aren’t already signed in) and select the account that you want to connect to App Platform. Once selected, you will be directed to a page where you can select which repositories to permit App Platform to access. Click the “Only select repositories” button and select the repository that you pushed to your GitHub account in Step 5:
When you are done, click the “Save” button at the bottom of the webpage. You will now be directed back to App Platform, where you should now be able to select your repository in the dropdown menu:
After selecting your repository, click “Next.” You will then be prompted to choose the name, branch, and options for Autodeploy. If the Autodeploy box is checked, any future changes you make to your repository files will be immediately pushed to your live site. Make your selections and click “Next”:
Next, you will be taken to a page where you can configure your App. This page should automatically detect your component type as a “Static Site”:
You should not need to make any changes on this page. Scroll down and click the blue “Next” button at the bottom of the page. You will be directed to a new window where you can select the “Starter” plan if you’d like to deploy this site as one of your three free static sites:
Select your desired plan and click the “Launch Your Starter App” button. You will be directed to your app’s admin page. When your app is finished deploying, you will see the “Deployed Successfully!” message:
You will also see a link under your app’s name at the top of the page. Click on the link to make sure your site is working properly. You should be directed to a new web page with your published website. If your site is not appearing, go back and check for errors.
Your static site should now be published to the web through App Platform. Anyone with the app link will be able to access your site. If you’d like to add a custom domain to your site, please visit our How To Manage Custom Domains guide in App Platform product documentation.
In this tutorial, you have learned how to deploy a static site to App Platform using a GitHub account and the GitHub Desktop app. If you wish to make changes to your website, edit your files on your local machine and commit and push the changes to your GitHub repository as instructed in Steps 4 and 5. Once your changes are pushed to your GitHub repository, they should automatically appear on your site if you kept the “Automatically deploy on push” option selected in Step 7.
For further information about App Platform, please visit the official App Platform product documentation. Remember, you can host up to three free static sites. If you wish to delete your app, please follow the instructions in the section Destroy an App in the product documentation.
I created an SSH key pair for connecting my home computer to the DO web server, as per this tutorial: https://www.digitalocean.com/docs/droplets/how-to/add-ssh-keys/to-account/.
Now, I want to connect my remote DO Droplet (Ubuntu 20.04) to my GitHub repository, and I also want to do this with SSH identification.
Do I need to create a new pair of SSH keys? Either way, where can I find the best information on getting one of my DO Droplet directories linked to my GitHub repository?
Thank you!
I see on the blog post that one of the upcoming features is GitLab and Bitbucket support:
Support for GitLab and Bitbucket so that you can deploy code from your repositories on these services.
Will the GitLab integration support private self-hosted GitLab instances, or just gitlab.com? Any schedule on this?
Can’t wait to test the App Platform. Thanks! :)
The first user is able to link/install DigitalOcean into the organization (with all repository access), and is able to see the repositories when creating new components.
When a second user from the same team and GitHub organization tries to create a component, DO doesn’t seem to understand that the app already contains a link: it asks the user to link their GitHub account again, and since the integration is already installed, they get redirected to the GitHub settings page instead of seeing the repository list.
It seems that the link is being stored on a per-user basis?
Is it a bug or am I doing something wrong?
Here are the steps you need to follow to get the newest changes from the original repository pulled into your fork!
I have an app created with `create-react-app` and have pushed it to GitHub.
I’d like to use GitHub Actions to automatically build and then deploy to a DigitalOcean Droplet each time I push a change to GitHub.
I can’t find a template on GitHub Actions to do this, and can’t find a tutorial or blog post that explains how.
Hacktoberfest is a month-long celebration of open source software, run by DigitalOcean and open to everyone in our global community. To participate, you’ll need to submit four quality pull requests to public GitHub repositories in the month of October. Upon completing the challenge, you’ll earn special prizes, including an exclusive Hacktoberfest t-shirt.
You can sign up anytime between October 1 and October 31, and we encourage you to connect with other developers and Hacktoberfest enthusiasts for virtual events and information sessions, starting in September.
In this tutorial we’ll introduce you to Git, the version control system that you’ll use to submit your pull request, and GitHub, the repository hosting service that we’ll be using to track your progress. By the end of this tutorial, you’ll be ready to submit your first pull request and will be well on your way to participating in Hacktoberfest!
## Version Control
Before we begin with Git and GitHub, let’s talk about version control. When developers work on a project together, oftentimes they’ll need to work on the same code base. While they’re working, each developer needs to know about the changes the other developers made, so as not to duplicate work or overwrite what has already been done.
A version control system records versions of a project over time, tracking the changes made to each file. In this way, developers can work together on a project by syncing with the latest version and reviewing the changes made before working on their portion of the project’s code.
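As a concrete sketch of version control in action, the following throwaway Git repository records two versions of a file and then lists and compares them (the identity values and file names are placeholders):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"            # throwaway directory
git init -q
git config user.email "sammy@example.com"  # placeholder identity
git config user.name "Sammy"
echo "version 1" > app.txt
git add app.txt
git commit -q -m "First version"
echo "version 2" > app.txt
git commit -q -am "Second version"
git log --oneline      # lists both recorded versions, newest first
git diff HEAD~1 HEAD   # shows exactly what changed between them
```

Every collaborator who syncs with this history sees the same two versions and the exact difference between them.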
## Git and GitHub
Git, a version control system used to manage developer projects of all sizes, was created in 2005 by Linus Torvalds, the creator of Linux, to help developers contribute code and share code revisions in a way that was fast, efficient, and inexpensive. Git enables developers to edit, share, and publish code, facilitating collaboration and teamwork.
GitHub is a cloud-based Git repository hosting service that allows developers to take code that they’ve written on their local machines and share it with the world. It provides a way to share the version-tracked projects on your local computer publicly through repositories, or central file storage locations. Depending on a project’s availability (it can be either a public or private repository), other developers can download the project to edit the code, provide insight, and more.
To get started with GitHub, you can create an account at GitHub. For more details on how to do that, please refer to the Hacktoberfest resources page.
## Cloning a Repository
We’ll now fork, clone, and edit our first GitHub repository. First, let’s navigate to the repository that we’d like to work with. For the sake of this tutorial, we’ll use the Cloud Haiku repository. Before you clone this repository (that is, copy the code from GitHub onto your local machine), you’ll need to take a copy of the whole repository into your own GitHub account. This is called a fork of the repository, and it allows you to develop your code without affecting the main code base.
To fork a repository, click the Fork button at the top right of the repository page, then watch as GitHub adds a copy of the repository to your account. Your name should now appear as the creator of this repository, which is a ‘fork’ of the main haiku repository. To clone your fork, click the Code button and take a copy of the link provided.
Next, navigate to your command-line interface to clone the project onto your local machine. You can do that with the `git clone` command, which will clone, or copy, the fork that you just created from the haiku repository down to your local machine. This will enable you to make changes to the codebase locally (on your own machine).
- cd ~
- git clone https://github.com/sammy/cloud_haiku
## Editing Code Content
You now have a copy of the Cloud Haiku repository on your local machine, so you’re ready to prepare your contribution. Using the command line interface, navigate to the folder of your cloned repository. If you followed along, you should have a `cloud_haiku` folder inside your home directory:
- cd ~/cloud_haiku
There are a number of text editors and Integrated Development Environments (IDEs) that you can use to edit your code. IDEs are typically segmented by programming language and include a series of helpful features to streamline the process of developing an application in that language. If you don’t have an IDE currently set up on your machine, consider checking out Hacktoberfest’s resources page for advice on how to choose one.
It’s important to take the time to read the contribution guidelines, understand how the project is organized, and find parts of the code that you can work on. Read any associated documentation before making changes. Next, let’s submit a haiku!
## Adding Content to the Remote Repository
Now that we have a change made to the haiku repository, we’ll need to track and save that change. The first step is to stage it, adding it to the set of changes that will go into your next commit. To do that, we’ll execute the command `git add .`:
- git add .
Writing the command in this way stages all changes made across the repository’s files. If you only need to stage changes to an individual file, use `git add filename`:
- git add sammyhaiku.md
After running the `add` command, you’ll get no confirmation. To see whether your changes have been included in the list of files ready to be committed, you can execute the command `git status`:
- git status
This allows you to check on the status of your tracked changes: you’ll see that your file has been staged, but not yet committed. Git provides this step in case you need to amend a change before officially tracking it as new or edited code.
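If you discover that you staged a file by mistake, you can pull it back out of the staging area before committing. In Git 2.23 and later this is done with `git restore --staged` (older versions use `git reset HEAD <file>`). Here is a hedged sketch in a throwaway repository, with placeholder identity values and file names:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"            # throwaway repository
git init -q
git config user.email "sammy@example.com"  # placeholder identity
git config user.name "Sammy"
echo "placeholder" > README.md
git add README.md && git commit -q -m "Initial commit"
echo "old pond / a frog jumps in" > sammyhaiku.md
git add sammyhaiku.md
git status --short                         # "A  sammyhaiku.md" -- staged
git restore --staged sammyhaiku.md
git status --short                         # "?? sammyhaiku.md" -- untracked again
```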
Next, let’s commit our change. Execute the command `git commit` with a message so that other developers who are collaborating on this project will know about the changes you’ve made:
- git commit -m "added sammy haiku"
Writing your commit with a message allows developers to be informed of changes that are made — this message is tracked along with a commit ID and your username.
After committing, we’ll need to `push` the changes from our local machine to the remote repository on GitHub. To do this, let’s execute the command `git push`:
- git push
Here, we can designate an origin for the push — in this instance, we want our contributions to go to our forked version of DigitalOcean’s haiku repository.
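Under the hood, a push targets a named remote and branch. In the self-contained sketch below, a local bare repository stands in for your GitHub fork (with a real fork you would use its URL in `git remote add origin` instead; all names and identity values here are placeholders):

```shell
set -e
remote=$(mktemp -d)
git init -q --bare "$remote"               # stands in for the fork on GitHub
work=$(mktemp -d) && cd "$work"
git init -q
git config user.email "sammy@example.com"  # placeholder identity
git config user.name "Sammy"
echo "old pond / a frog jumps in" > sammyhaiku.md
git add sammyhaiku.md
git commit -q -m "added sammy haiku"
git branch -M main                         # ensure the branch is named "main"
git remote add origin "$remote"            # with GitHub: the fork's URL
git push -q origin main                    # explicit remote and branch
git ls-remote --heads origin               # confirms main now exists on the remote
```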
To recap: so far, we’ve identified a repository that we’d like to edit, taken a copy of it into our GitHub account and local machine using `fork` and `clone`, staged a change with `git add`, and solidified that change by running `git commit`. Finally, `git push` moved our change from our local machine to the remote repository on GitHub. If we look on GitHub, we’ll see that the change we made is reflected in the files in our copy of the haiku repository.
## Creating a Pull Request
We’re now ready to let the maintainers of the project know that we have a change to the repository that we’re confident about and ready to submit. To do this, we’ll click the pull request button to the right.
After the pull request button is clicked, a new page will open with a form that explains what change we made and shows whether the changes will conflict with the existing content. We’ll add an appropriate title that details the change, and in the description add an explanation of what changes were made and why. What you add here can vary depending on the project; take a look at the project’s collaboration guidelines to make sure your pull request is formatted correctly.
After adding in a title and description of the changes made, we’ll scan the pull request page to make sure that our committed change does not conflict with existing changes made to the code repository. If everything checks out, we’ll get a green submit pull request button at the bottom that escalates our request to make a change to the original haiku poem codebase, a contribution that will be live for anyone viewing that main branch. Be patient — it may take maintainers some time to review your request. Amendments and comments can be added on the pull request page, and new commits made to the same affected files will appear in the request’s history.
Congratulations, we’ve successfully submitted our first pull request!
## Conclusion
In this tutorial, you learned about Git and GitHub, and successfully identified and submitted a change to a public repository. For Hacktoberfest, you’ll need to submit four meaningful pull requests, so again: find a project that resonates with you and have fun hacking!
To see this tutorial in action, here’s a helpful video that walks you through the process of submitting your first pull request:
For more information about Hacktoberfest, visit our main page. To learn more about Git, visit How to Use Git: A Reference Guide. For additional information about GitHub, visit GitHub.
Contributing to open source software is not only a way to share your skill in a particular language or tech stack; it can also be a rewarding way to share your engineering knowledge and collaborate with the developer community. Although there’s a wide range of open source projects out there waiting for your expertise, knowing where to find them and how to contribute in a way that is meaningful to the project can sometimes prove to be a barrier for interested contributors.
In this Hacktoberfest-flavored guide, we’ll share some tips and information that will aid in finding and contributing meaningfully to open source projects.
If you are new to engaging with the open source community, finding a new project to contribute to may feel daunting. Here are a few resources and ideas to help you find a project you’d love to help thrive.
Open source software is software that’s freely available to use and modify, typically shared via a public repository hosting service like GitHub. Projects that follow the open source model usually thrive through contributions from the developer community, and may allow for redistribution depending on which open source license they have adopted.
Most successful open source projects have transparent, well-delineated processes for maintenance and improvement, which helps to build a community around them. As a result, they benefit from regular contributions from end-users, who bring with them diverse perspectives to solutions that may otherwise be overlooked.
To learn more in detail about open source, visit our tutorial series, An Introduction to Open Source.
After deciding to commit your time and talent to an open source project, it’s important to take a moment to consider your passions and the type of project that resonates with you. Considering that you may spend a number of hours contributing to a specific project, you want to select a project that is not only something you’d personally use, but also something you have a deeper interest in beyond contributing for Hacktoberfest. Think about the software you use today and consider the following:
These beginning considerations may lead you to discover that your favorite software is open source and waiting for your contribution. If that’s the case, be sure to dive into the CONTRIBUTING.md file, which typically delineates how to contribute, before starting. This resource will usually introduce you to the codebase, conventions, and ways to get support when contributing to the software.
If you’re just starting out, the idea of committing large amounts of code to an unfamiliar codebase could bring out the imposter syndrome that lies dormant in many of us. Luckily, every developer was a beginner once, and to foster appreciation and adoption of open source, there’s a wealth of publicly available, beginner-friendly repositories shared by fellow developers. Here are a few that we suggest browsing:
More resources for open source projects to try can be found on our Hacktoberfest Resources Page.
After identifying an open source project to contribute to and diving into the resource material that the codebase offers, you may be wondering exactly what to contribute. While the way you contribute may vary by project, here are some general ideas for contributions that are impactful and meaningful to the codebase and software you’re working on.
Bugs are small errors in code that may cause an annoyance, a blocker, or be debilitating to software. Bugs often produce unexpected results that cause incorrect responses or actions; for the sake of the software user’s experience, it’s imperative that a codebase is kept as bug-free as possible.
You can contribute your knowledge and expertise to ‘squash’ or solve the issue surrounding a bug. By working on bugs of varying priorities, your ability to strengthen a codebase by solving errors will grow, and you’ll have a meaningful contribution to add.
Open source projects benefit from a diversity of thought. Although software may have been developed by one or more engineers with an opinion of how their product can solve an existing problem, your personal experience and outlook on how to improve a project can be invaluable. Once you’re comfortable with a project’s codebase and understand how it works for end users, try to think of a new feature that could be useful or improve the user’s experience, and create an issue to propose it to the project maintainers. It is important to have this conversation before investing time in writing code, since your idea might not coincide with the project’s roadmap. Once you receive a positive response, it’s time to implement your idea and bring that feature to production.
While there may be a wealth of technical contributions that can be made to a codebase, writing good documentation is a contribution that is often overlooked. If you’re linguistically-inclined or speak a language other than the one reflected in the initial documentation, consider making a contribution. Contributions in documentation can revolve around providing editing help to an existing doc or authoring new pages within the documentation. Refer to your project’s contribution guidelines to learn more about how to contribute this and other non-technical help.
After you’ve made a meaningful contribution to an open source project’s codebase, it’s time to submit your pull request. We’ve created a helpful video that walks you through this process on GitHub, which can be found here.
Sharing your expertise with an open source project is a rewarding experience that allows you to practice your talent, collaborate with and learn from others, and give back to the developer community. While it may initially seem daunting to find your place within the open source community, finding a project that speaks to your passions and contributing meaningfully to its codebase is a great way to start.
For Hacktoberfest, while making four (4) meaningful contributions to open source projects will qualify you for prizes, we hope that you’ll continue to enjoy the benefits of contributing to the open source community well beyond the event. For more information, or to learn more about open source, Git, or GitHub, you can visit the Hacktoberfest resources page. Happy hacking!
I’m supporting a few projects on GitHub, and I’m at a point where I have hundreds of branches across all of them; deleting the branches manually is not really an option.
Does anyone have a script on hand to delete all merged branches that have no new commits since a specific date?
Any help will be appreciated!
How To Submit Your First Pull Request on GitHub Workshop Kit Materials
This workshop kit is designed to help an instructor guide an audience without a background in version control or contributing to open source projects through the steps of submitting a pull request from start to finish in roughly thirty minutes. Attendees will finish the workshop with an understanding of version control, open source, Git, and GitHub.
No prior coding experience is assumed on the part of the audience. Instructors without experience in open source, Git, or GitHub should be able to teach the course after reviewing the material first.
The aim of this workshop kit is to provide a complete set of resources for a speaker to host a workshop about version control and contributing to open source projects. It includes:
This workshop kit page is intended to help instructors prepare for the workshop and provide a starting point for learners. Instructors should point learners to this page so they can have access to the slides (which contain useful links).
If desired, learners can prepare for the workshop by reading the introduction below and making sure that they have the prerequisites ready before the workshop starts.
If you are interested in participating in this year’s Hacktoberfest, this workshop is a great place to start! This project-based workshop will introduce you to open source, version control, Git, and GitHub using the Cloud Haiku repository as a model. Once you learn the fundamentals, you will know how to contribute to open source projects and submit a pull request on GitHub. No prior coding experience is necessary to follow along in the workshop.
When software developers work on a project together, oftentimes they’ll need to work on the same code base. While they’re working, each developer needs to know about what changes the others made to the code, so as not to duplicate work or write code over what has already been done. Git, a version control system used to manage developer projects of all sizes, was created in 2005 by Linus Torvalds, the creator of Linux, to help developers contribute to code and share code revisions in a way that was fast, efficient, and inexpensive. Git creates code repositories to help developers edit, share, and publish code for all. GitHub is a cloud-based Git repository hosting service that allows developers to take code that they’ve written on their local machines and share it with the world.
With Git and GitHub, developers from all over the world collaborate on all sorts of projects — many of the websites you visit regularly are maintained using GitHub. Knowing how to use Git and GitHub, and learning how to contribute to open source projects will provide new developers with a strong start in gaining the skills they need to join the software engineering community at large.
In this workshop, we’ll introduce you to Git and GitHub, the version control system that Hacktoberfest uses to track your progress, and the repository hosting service that shares projects to collaborate on. By the end of this tutorial, you’ll be ready to submit your first pull request and will be well on your way to participating in Hacktoberfest!
To participate as a workshop leader or learner, you will need the following:
Once you have your prerequisites ready, you will be ready to begin the workshop. Refer to the speaker slides for helpful links after the workshop or watch the How to Submit Your First Pull Request video to review.
PHP info: http://167.172.155.165/phpinfo.php
Any tips? Thanks!
Version control systems like Git are essential to modern software development best practices. Version control lets you keep track of your software at the source level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Many software projects’ files are maintained in Git repositories, and platforms like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this guide, we will demonstrate how to install and configure Git on an Ubuntu 20.04 server. We will cover how to install the software in two different ways: via the built-in package manager, and from source. Each of these approaches has its own benefits depending on your specific needs.
You will need an Ubuntu 20.04 server with a non-root superuser account.
To set this up, follow our Initial Server Setup Guide for Ubuntu 20.04.
With your server and user set up, you are ready to begin.
Installing with default packages is best if you want to get up and running quickly with Git, if you prefer a widely used stable version, or if you are not looking for the newest available functionality. If you are looking for the most recent release, skip ahead to the section on installing from source.
Git is most likely already installed on your Ubuntu 20.04 server. You can confirm this on your server with the following command:
- git --version
If you receive output similar to the following, Git is already installed.
Output
git version 2.25.1
If that is the case, you can move on to configuring Git, or read the next section on how to install from source if you need a newer version.
However, if you did not receive output with a Git version number, you can install Git with Ubuntu’s default APT package manager.
First, use the apt package management tools to update your local package index:
- sudo apt update
Once the update is complete, you can install Git:
- sudo apt install git
You can confirm that Git has been installed correctly by running the following command and checking for the corresponding output:
- git --version
Output
git version 2.25.1
With Git successfully installed, you can move on to the Setting Up Git section of this tutorial to complete your setup.
If you’re looking for a more flexible method of installing Git, you may want to compile the software from source, which we’ll cover in this section. This method takes longer and the result won’t be maintained by your package manager, but it will allow you to download the latest release and give you some control over the options you include if you wish to customize.
Check the current version of Git installed on the server:
- git --version
If Git is installed, you will receive output similar to the following:
Output
git version 2.25.1
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the relevant packages:
- sudo apt update
- sudo apt install libz-dev libssl-dev libcurl4-gnutls-dev libexpat1-dev gettext cmake gcc
Once you have installed the necessary dependencies, create a temporary directory and move into it. This is where we will download our Git tarball.
- mkdir tmp
- cd tmp
From the Git project website, navigate to the tarball list available at https://mirrors.edge.kernel.org/pub/software/scm/git/ and download the version you would like. At the time of writing, the most recent version was 2.26.2, so we will download that version for demonstration purposes. We’ll use curl and output the file we download to git.tar.gz.
- curl -o git.tar.gz https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.26.2.tar.gz
Unpack the tarball:
- tar -zxf git.tar.gz
Move into the new Git directory:
- cd git-*
Now you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Next, replace the shell process so that the version of Git we just installed will be used:
- exec bash
Now you can check the version to be sure that your install was successful:
- git --version
Output
git version 2.26.2
With Git successfully installed, you can now complete your setup.
Now that you have Git installed, you should configure it so that the generated commit messages will contain your correct information and support you as you build your software project.
This can be achieved by using the `git config` command. Specifically, we need to provide our name and email address because Git embeds this information into each commit we make. We can go ahead and add this information with the following commands:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
We can review all of the configuration items that have been set by typing:
- git config --list
Output
user.name=Your Name
user.email=youremail@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor of your choice (here we’ll use nano):
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Press CTRL and X, then Y, then ENTER to exit the text editor.
There are many other options that you can set, but these two are essential. If you skip this step, you will likely see warnings when you commit to Git. This makes more work for you, because you will then have to revise the commits you have made with the corrected information.
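If commits were already made with the wrong identity, the most recent one can be corrected after updating your configuration. Here is a hedged sketch, demonstrated in a throwaway repository (the "wrong" and corrected identity values are placeholders):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"            # throwaway repository
git init -q
git config user.email "wrong@example.com"  # deliberately wrong identity
git config user.name "Wrong Name"
echo "hello" > file.txt
git add file.txt && git commit -q -m "Initial commit"
# Fix the configuration, then rewrite the last commit's author to match:
git config user.email "youremail@domain.com"
git config user.name "Your Name"
git commit --amend --no-edit --reset-author
git log -1 --format='%an <%ae>'            # now reports the corrected identity
```

Note that amending rewrites history, so only do this for commits you have not yet shared.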
You have now installed Git and are ready to use it on your system.
To learn more about how to use Git, check out these articles and series:
]]>Sistemas de controle de versão como o Git são essenciais para as melhores práticas de desenvolvimento de softwares modernos. O controle de versão permite que você acompanhe seu software a nível de código-fonte. É possível rastrear as alterações, retornar a etapas anteriores, e os ramos para criar versões alternativas de arquivos e diretórios.
Many projects' files are kept in a Git repository, and platforms like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this guide, we will demonstrate how to install and configure Git on an Ubuntu 20.04 server. We will cover how to install the software in two different ways: via the built-in package manager and from source. Each of these approaches has its own benefits; choose between them according to your specific needs.
You will need an Ubuntu 20.04 server with a non-root superuser account.
To set this up, follow our Initial Server Setup Guide for Ubuntu 20.04.
With your server and user set up, you are ready to begin.
Installing with default packages is the best option if you want to get up and running with Git quickly, if you prefer a widely used stable version, or if you are not looking for the newest available features. If you want the most recent release, skip ahead to the section on installing from source.
Git is likely already installed on your Ubuntu 20.04 server. You can confirm this with the following command:
- git --version
If you receive output similar to the following, Git is already installed.
Outputgit version 2.25.1
If this is the case for you, you can move on to setting up Git, or read the next section on how to install from source if you need a more up-to-date version.
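If you want to check programmatically whether the packaged version meets some minimum, sort -V can compare version strings; the 2.25.0 threshold below is just an example:

```shell
# Version reported by git --version, e.g. captured with:
#   installed=$(git --version | awk '{print $3}')
installed="2.25.1"
minimum="2.25.0"   # example threshold; pick whatever your tooling needs

# sort -V orders version strings component by component;
# if the minimum sorts first, the installed version satisfies it
if [ "$(printf '%s\n%s\n' "$minimum" "$installed" | sort -V | head -n1)" = "$minimum" ]; then
  echo "git $installed is new enough"
else
  echo "git $installed is older than $minimum; consider building from source"
fi
```

Note that plain lexical comparison would get this wrong (for example, it would sort 2.4.9 after 2.25.0), which is why sort -V is used.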
However, if you did not get a Git version number in the output, you can install it with Ubuntu's default APT package manager.
First, use the apt package management tools to update your local package index.
- sudo apt update
Once the update is complete, install Git:
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command and checking that you receive the relevant output.
- git --version
Outputgit version 2.25.1
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
If you are looking for a more flexible way to install Git, you may want to compile the software from source, which we will cover in this section. This takes longer and will not be maintained by your package manager, but it allows you to download the latest release and gives you control over the options you include if you wish to customize the build.
Check the version of Git currently installed on the server:
- git --version
If Git is installed, you will receive output similar to the following:
Outputgit version 2.25.1
Before you begin, install the software that Git depends on. It is all available in the default repositories, so update your local package index and then install the relevant packages.
- sudo apt update
- sudo apt install libz-dev libssl-dev libcurl4-gnutls-dev libexpat1-dev gettext cmake gcc
After installing the necessary dependencies, create a temporary directory and move into it. This is where we will download our Git tarball.
- mkdir tmp
- cd tmp
From the Git project website, we can navigate to the tarball list available at https://mirrors.edge.kernel.org/pub/software/scm/git/ and download the version you would like. At the time of writing, the most recent version was 2.26.2, so we will download that version for demonstration purposes. We will use curl and output the file we download to git.tar.gz.
- curl -o git.tar.gz https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.26.2.tar.gz
Unpack the compressed tarball file:
- tar -zxf git.tar.gz
Next, move into the new Git directory:
- cd git-*
Now you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Now, replace the shell process so that the version of Git you just installed will be used:
- exec bash
With that complete, confirm that the installation was successful by checking the version.
- git --version
Outputgit version 2.26.2
With Git installed, you can now finish your setup.
Drone is an open-source container-native CI/CD platform written in Go. It works with configuration files written in YAML, JSON, JSONNet, or Starlark, which define multiple build pipelines consisting of a number of steps.
Drone integrates with multiple source code managers. Currently, three different SCMs are supported: GitHub (cloud/enterprise), BitBucket (cloud/server), and Gitea. In general, each provider supports all Drone functionality.
Drone also supports different runners for executing jobs. These runners are not interchangeable (except with the simplest pipelines), because their configuration formats, features, and execution environments differ. Here is a brief summary of your options:
In this tutorial, you will set up a Drone CI/CD server for source code on GitHub, add a Docker runner, use Let’s Encrypt to secure your instance, and then create a YAML pipeline. You will also encounter options to scale your runner using Drone Autoscaler and to store your logs on an S3-compatible server, such as DigitalOcean Spaces.
Before you start this tutorial, you will need:
sudo
privileges. You can follow our initial server setup guide to configure your machine.
date. Beneath this date you will see a line like Droplet Limit:5 Increase
. Click Increase to submit a request for more Droplets.
Moreover, the Autoscaler path described in this tutorial requires a Personal Access Token from DigitalOcean. If you choose to install Autoscaler, you can follow this tutorial to retrieve a token from your DigitalOcean Control Panel. Copy the token down immediately; it will disappear once you leave or refresh the page, and you will need to input it during Step 6.
Lastly, if you choose not to install Drone’s Autoscaler feature, you will need at least another 2GB of RAM and 10GB of free disk space to ensure that you can run pipelines.
drone.your_domain
.your_s3_access_key
and your_s3_secret_key
. Alternately, you can use a different S3-compatible service, or skip this step (Step 3) entirely. Note that skipping this step is only advisable if you are trying out Drone or if you know that your build volume will be quite low.
To access code, authenticate users, and add webhooks to receive events, Drone requires an OAuth application for GitHub. For other providers, you can read Drone's official documentation here.
To set up an OAuth application for GitHub, log in to your GitHub account and then click on your user menu in the top-right. Click Settings, then find the Developer Settings category in the menu on the left, and then click OAuth Applications. Alternatively, you can navigate directly to Github’s Developer Settings page.
Next, create a new application. Click on the New OAuth App button in the upper-right corner and a blank form will appear.
Use Drone
for your application name. Replace drone.your_domain
with your own domain, add a brief explanation of your app, and then add drone.your_domain/login
for your Authorization callback URL.
Click Register application and you will see a dashboard containing information about your application. Included here are your app’s Client ID and Client Secret. Copy these two values somewhere safe; you will need to use them in the following steps wherever you see your_github_client_id
and your_github_client_secret
.
With your app now registered on GitHub, you are ready to configure Drone.
Now start preparing your Docker configurations, which will build your Drone server. First, generate a shared secret to authenticate runners with the main Drone instance. Create one using the openssl
command:
- openssl rand -hex 16
openssl
will generate 16 random bytes, printed as a 32-character hexadecimal string. It will produce an output like this:
Output918...46c74b143a1719594d010ad24
Copy your own output to your clipboard. You will add it into the next command, where it will replace your_rpc_secret
.
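Alternatively, you can capture the secret in a shell variable instead of the clipboard and substitute it when you assemble the configuration file (the variable name here is our own choice):

```shell
# Generate 16 random bytes as 32 hex characters and store them
rpc_secret=$(openssl rand -hex 16)

# Sanity-check the length before using it
echo "${#rpc_secret}"   # prints: 32
```

You could then write DRONE_RPC_SECRET=$rpc_secret into the configuration with an unquoted heredoc delimiter, since quoting the delimiter would suppress the substitution.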
Now create your Drone configuration file. Rather than continually opening and closing this configuration file, we will leverage the tee
command, which will split your command’s output to your console while also appending it to Drone’s configuration file. An explanation will follow every command block in this tutorial, but you can find a detailed description of all available Drone options in their official documentation.
Now begin building your Drone server’s configuration. Copy the following command to your terminal. Be sure to replace drone.your_domain
with your domain. Also replace your_github_client_id
and your_github_client_secret
with your GitHub OAuth credentials, and then replace your_rpc_secret
with the output from your openssl
command. Lastly, replace sammy_the_shark
with your GitHub username. This will grant you administrative privileges:
- cat << 'EOF' | sudo tee /etc/drone
- DRONE_SERVER_HOST=drone.your_domain
- DRONE_SERVER_PROTO=https
- DRONE_GITHUB_CLIENT_ID=your_github_client_id
- DRONE_GITHUB_CLIENT_SECRET=your_github_client_secret
- DRONE_RPC_SECRET=your_rpc_secret
- DRONE_USER_CREATE=username:sammy_the_shark,admin:true
- EOF
This command makes use of a heredoc. A heredoc uses the << redirection operator followed by an arbitrary word, where EOF is conventionally used to represent end-of-file. It allows you to write multiline input that ends when the chosen word appears on a line by itself. Quoting the delimiter ('EOF') prevents variable and command substitution inside the heredoc, much as single quotes do in a string literal, so the configuration lines are written exactly as typed. Here you are adding your first Drone configuration options and ending them with EOF. This input is fed to the cat command, and the output of cat is then piped to the tee command via the | pipe operator. Heredocs are a great way to quickly create or append text to a file.
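The effect of quoting the delimiter can be checked with a quick experiment:

```shell
name="sammy"

# Unquoted delimiter: $name is expanded inside the heredoc
cat << EOF
hello $name
EOF
# prints: hello sammy

# Quoted delimiter: the text is taken literally
cat << 'EOF'
hello $name
EOF
# prints: hello $name
```

For the Drone configuration above, the quoted form is what you want, since values like passwords or secrets might otherwise be mangled by accidental expansion.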
Next, in order to prevent arbitrary users from logging in to your Drone server and having access to your runners, limit registration to specified usernames or organizations. If you need to add users at this time, run the following command, replacing users
with a comma-separated list of GitHub usernames or organization names:
- echo 'DRONE_USER_FILTER=users' | sudo tee -a /etc/drone
If you are not using an external load balancer or SSL proxy, you will also need to enable Let’s Encrypt for HTTPS:
- echo 'DRONE_TLS_AUTOCERT=true' | sudo tee -a /etc/drone
You will note that your tee
command now includes the -a
switch, which instructs tee
to append, and not overwrite, this output to your Drone configuration file. Let’s now set up your log storage system.
For heavily used installations, the volume of build logs can increase quite quickly to multiple gigabytes. By default, these logs are stored in the server’s database, but for performance, scalability, and stability, consider setting up external storage for your build logs. In this step, you’ll use DigitalOcean Spaces to do just that. You are welcome to modify these steps and use another S3-compatible storage service, or none at all if you are still prototyping your CI/CD workflow, or if you know that your build volume will be very low. In those cases you may continue to Step 4 now.
To store your logs on DigitalOcean Spaces, make sure that you have completed the necessary prerequisites and have set up a Spaces bucket and generated a matching Spaces Access Key and Secret. Copy that key to your clipboard and then update your configuration file with the following command:
- cat << 'EOF' | sudo tee -a /etc/drone
- DRONE_S3_ENDPOINT=your_s3_endpoint
- DRONE_S3_BUCKET=your_s3_bucket_name
- AWS_ACCESS_KEY_ID=your_s3_access_key
- AWS_SECRET_ACCESS_KEY=your_s3_secret_key
- EOF
Remember to replace your_s3_endpoint
with the URL for your Space, your_s3_bucket_name
with the name of the Space you created, your_s3_access_key
with your access key, and your_s3_secret_key
with your secret. You can find the first two values in your Control Panel by clicking the Manage menu button, then clicking Spaces, and then choosing your new Space. You can retrieve your Spaces Access Key by clicking the Account menu button, and then clicking the API button, and then scrolling down until you find the Spaces section. If you have misplaced your secret key, then you will need to generate a new access key/secret pair.
Your Drone configuration file is now complete. Run a cat
command to view it:
- cat /etc/drone
Your configuration file will look something like the following, depending on the options you chose:
Output
DRONE_SERVER_HOST=drone.your_domain
DRONE_SERVER_PROTO=https
DRONE_GITHUB_CLIENT_ID=your_github_client_id
DRONE_GITHUB_CLIENT_SECRET=your_github_client_secret
DRONE_RPC_SECRET=your_rpc_secret
DRONE_USER_CREATE=username:sammy_the_shark,admin:true
DRONE_USER_FILTER=the_shark_org
DRONE_TLS_AUTOCERT=true
DRONE_S3_ENDPOINT=your_s3_endpoint
DRONE_S3_BUCKET=your_s3_bucket
AWS_ACCESS_KEY_ID=your_s3_access_key
AWS_SECRET_ACCESS_KEY=your_s3_secret_key
Once you have confirmed that your configuration file is complete, you can start your Drone server.
With your proper configurations in place, your next step is to install and start Drone.
First, pull the Drone Server Docker image:
- docker pull drone/drone:1
Next, create a volume to store the SQLite database:
- docker volume create drone-data
Finally, start the server, set it to restart on boot, and forward port 80
and 443
to it:
- docker run --name=drone --detach --restart=always --env-file=/etc/drone --volume=drone-data:/data --publish=80:80 --publish=443:443 drone/drone:1
If you followed the DigitalOcean Initial Server Setup Guide, then you will have enabled ufw
and only allowed OpenSSH
through your firewall. You will now need to open ports 80 and 443:
- sudo ufw allow 80
- sudo ufw allow 443
Now reload ufw
and check that your rules updated:
- sudo ufw reload
- sudo ufw status
You will see an output like this:
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
80 ALLOW Anywhere
443 ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
80 (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
At this point, you will be able to access your server, log in, and manage your repositories. Go to https://drone.your_domain
, enter your GitHub credentials, and authorize your new application when prompted.
Your Drone server is now live and nearly ready for use. Configuring your Drone runner and/or the DigitalOcean Autoscaler is all that remains.
Before your server can execute jobs, you need to set up a runner.
If you want to automatically scale the runner with DigitalOcean Droplets, skip down to Option 2: Installing the Drone Autoscaler for DigitalOcean. If you want to use another runner, you can set that up instead and skip to Step 7. Otherwise, follow Option 1 and install the Docker runner.
First, pull the Docker image for the runner:
- docker pull drone/drone-runner-docker:1
Next, start the runner. Replace drone.your_domain
and your_rpc_secret
with your personal values. You can change the DRONE_RUNNER_CAPACITY
to increase the number of pipelines that will be executed at once, but be mindful of your available system resources:
- docker run --name drone-runner --detach --restart=always --volume=/var/run/docker.sock:/var/run/docker.sock -e DRONE_RPC_PROTO=https -e DRONE_RPC_HOST=drone.your_domain -e DRONE_RPC_SECRET=your_rpc_secret -e DRONE_RUNNER_CAPACITY=1 -e DRONE_RUNNER_NAME=${HOSTNAME} drone/drone-runner-docker:1
Finally, ensure that the runner started successfully:
- docker logs drone-runner
You will see an output like this:
Output
time="2020-06-13T17:58:33-04:00" level=info msg="starting the server" addr=":3000"
time="2020-06-13T17:58:33-04:00" level=info msg="successfully pinged the remote server"
time="2020-06-13T17:58:33-04:00" level=info msg="polling the remote server" arch=amd64 capacity=1 endpoint="https://drone.your_domain" kind=pipeline os=linux type=docker
If you ever need to change the runner configuration or secret, delete your container with docker rm drone-runner
, and repeat this step. You can now proceed to Step 7 and create a basic pipeline.
The Drone Autoscaler for DigitalOcean can automatically create and destroy Droplets with the Docker runner as needed.
First, go to your Drone server, log in, and click on User settings in the user menu. Find and copy your Personal Token. This is your_drone_personal_token
.
Next, generate a new character string with the following command:
- openssl rand -hex 16
Output
e5cd27400...92b684526c622
Copy the output like you did in Step 2. This new output is your drone_user_token
.
Now add a new machine user with these new credentials:
- docker run --rm -it -e DRONE_SERVER=https://drone.your_domain -e DRONE_TOKEN=your_drone_personal_token drone/cli:1 user add autoscaler --machine --admin --token=drone_user_token
Now, if you haven’t already, you’ll need to create a DigitalOcean API Token with read/write privileges. We will refer to this as your_do_token
. If you did not complete this step in the prerequisites section then you can use this guide to create one now. Keep this token very safe; it grants full access to all resources on your account.
Finally, you can start the Drone Autoscaler. Make sure to replace all the highlighted variables with your own matching credentials:
- docker volume create drone-autoscaler-data
- docker run --name=drone-autoscaler --detach --restart=always --volume=drone-autoscaler-data:/data -e DRONE_SERVER_PROTO=https -e DRONE_SERVER_HOST=drone.your_domain -e DRONE_SERVER_TOKEN=drone_user_token -e DRONE_AGENT_TOKEN=your_rpc_secret -e DRONE_POOL_MIN=0 -e DRONE_POOL_MAX=2 -e DRONE_DIGITALOCEAN_TOKEN=your_do_token -e DRONE_DIGITALOCEAN_REGION=nyc1 -e DRONE_DIGITALOCEAN_SIZE=s-2vcpu-4gb -e DRONE_DIGITALOCEAN_TAGS=drone-autoscaler,drone-agent drone/autoscaler
You can also configure the minimum/maximum number of Droplets to create and the type/region for the Droplet. For faster build start times, set the minimum to 1 or more. Also, note that by default, the autoscaler will determine if new Droplets need to be created or destroyed every minute, and Droplets will be left running for at least 1 hour after creation before being automatically destroyed when inactive.
Afterwards, verify that the autoscaler started correctly with:
- docker logs drone-autoscaler
If you decide you no longer want to use the autoscaler, delete the container with docker rm drone-autoscaler
, delete the leftover Droplets (if any) from your account, and revoke the DigitalOcean API Token. You are now prepared to test your new CI/CD workflow.
To test your new Drone installation, let’s create a YAML pipeline.
First, create a new repository on GitHub. From your GitHub profile page click on the Repositories menu, then click the green New button at the upper right. Give your repository a name on the following page and then click the green Create repository button. Now navigate to your Drone server, press SYNC, refresh the page, and your newly created repository should appear. Press the ACTIVATE button beside it.
Afterwards, create a new file in your repo named .drone.yml
. You can do this using GitHub’s UI or from the command line using git
. From the GitHub UI, click on the Repositories menu, then click on your new repository, and then click the Add file dropdown menu. Choose Create new file, name the file .drone.yml
, and add the following contents:
name: drone-test
kind: pipeline
type: docker
steps:
- name: test
  image: alpine
  commands:
  - echo "It worked!"
If you are using the GitHub UI, press the green Commit new file button at the bottom of the page. If you are using the command line then commit and push your changes. In either case, now open and watch your Drone dashboard in a browser.
If the build remains pending and doesn’t start, ensure that your runners are set up correctly (and that a Droplet was created if using the autoscaler). You can view the logs for the runner with docker logs drone-runner
and the logs for the autoscaler with docker logs drone-autoscaler
.
If you are using the autoscaler, it may take up to a minute for the initial build to start (the last log message during that time would be starting the server
).
After the build completes, you will see the text It worked!
in the logs for the test
stage of the drone-test
pipeline. If the logs fail to load, ensure that your S3 credentials and bucket name are correct. You can use docker logs drone
to view Drone’s logs for more information.
You’ve now set up and installed a Drone server to handle your CI/CD workflow.
In this tutorial you set up the Drone CI/CD server for use with your GitHub projects and optionally set up external storage for build logs. You also set up a local runner or a service to automatically scale them using DigitalOcean Droplets.
You can continue adding teammates and other authorized users to Drone using the process outlined in Step 2. Should you ever want to prevent any new users from signing up, run the following commands:
- echo 'DRONE_REGISTRATION_CLOSED=true' | sudo tee -a /etc/drone
- docker restart drone
Drone is a very capable tool. From here, you might consider learning more about their pipeline syntax and Drone’s other functionalities, or reviewing the fundamentals of Docker.
Is it possible to point a subdomain of a site hosted on a DigitalOcean Droplet to a Netlify site? I have my site's DNS hosted here on DigitalOcean. Netlify uses a modern CDN, so I can't point an A record at an IP, as there is no dedicated IP to point at.
Here is what I would like to accomplish;
example.com -> hosted on a DigitalOcean VPS
blog.example.com -> hosted on a Netlify deployment (built from Git)
Is this possible with basic DNS?
Version control systems help you collaborate on software development projects. Git is one of the most popular version control systems currently available.
This tutorial will walk you through installing and configuring Git from source on an Ubuntu 20.04 server. For a more detailed version of this tutorial, with more thorough explanations of each step, please refer to How To Install Git on Ubuntu 20.04.
Verify whether you have a version of Git currently installed on the server:
- git --version
If Git is installed, you’ll receive output similar to the following:
Output
git version 2.25.1
Whether or not you have Git installed already, it is worth checking to make sure that you install a more recent version during this process.
You’ll next need to install the software that Git depends on. This is all available in the default Ubuntu repositories, so we can update our local package index and then install the relevant packages.
- sudo apt update
- sudo apt install libz-dev libssl-dev libcurl4-gnutls-dev libexpat1-dev gettext cmake gcc
Press y
to confirm if prompted. Necessary dependencies should now be installed.
Create a temporary directory to download our Git tarball and move into it.
- mkdir tmp
- cd tmp
From the Git project website, we can navigate to the tarball list available at https://mirrors.edge.kernel.org/pub/software/scm/git/ and download the version you would like. At the time of writing, the most recent version is 2.26.2, so we will download that for demonstration purposes. We’ll use curl and output the file we download to git.tar.gz
.
- curl -o git.tar.gz https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.26.2.tar.gz
Unpack the compressed tarball file:
- tar -zxf git.tar.gz
Next, move into the new Git directory:
- cd git-*
Now, you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Now, replace the shell process so that the version of Git we just installed will be used:
- exec bash
You can be sure that your install was successful by checking the version.
- git --version
Output
git version 2.26.2
With Git successfully installed, you can now complete your setup.
Now that you have Git installed, you should configure it with your information to prevent warnings when you commit.
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
If you need to edit this file, you can use a text editor such as nano:
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
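After editing, you can confirm the values Git will actually use with `git config --get`. This is a minimal sketch; the throwaway HOME directory below is only an illustration so the example does not modify your real ~/.gitconfig:

```shell
# Use a throwaway HOME so this example does not touch your real ~/.gitconfig.
export HOME=/tmp/git-demo-home
mkdir -p "$HOME"

git config --global user.name "Your Name"
git config --global user.email "youremail@domain.com"

# Read back the configured identity:
git config --global --get user.name     # Your Name
git config --global --get user.email    # youremail@domain.com
```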
To learn more about how to use Git, check out these articles and series:
Version control systems like Git are essential to modern software development best practices. Versioning allows you to keep track of your software at the source level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Many software projects’ files are maintained in Git repositories, and platforms like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this guide, we will go through how to install and configure Git on an Ubuntu 20.04 server. We will cover how to install the software in two different ways: via the built-in package manager, and via source. Each of these approaches comes with its own benefits depending on your specific needs.
You will need an Ubuntu 20.04 server with a non-root superuser account.
To set this up, you can follow our Initial Server Setup Guide for Ubuntu 20.04.
With your server and user set up, you are ready to begin.
The option of installing with default packages is best if you want to get up and running quickly with Git, if you prefer a widely-used stable version, or if you are not looking for the newest available functionalities. If you are looking for the most recent release, you should jump to the section on installing from source.
Git is likely already installed in your Ubuntu 20.04 server. You can confirm this is the case on your server with the following command:
- git --version
If you receive output similar to the following, then Git is already installed.
Output
git version 2.25.1
If this is the case for you, then you can move onto setting up Git, or you can read the next section on how to install from source if you need a more up-to-date version.
However, if you did not get output of a Git version number, you can install it with the Ubuntu default package manager APT.
First, use the apt package management tools to update your local package index.
- sudo apt update
With the update complete, you can install Git:
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command and checking that you receive relevant output.
- git --version
Output
git version 2.25.1
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
If you’re looking for a more flexible method of installing Git, you may want to compile the software from source, which we will go over in this section. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and will give you greater control over the options you include if you wish to make customizations.
Verify the version of Git currently installed on the server:
- git --version
If Git is installed, you’ll receive output similar to the following:
Output
git version 2.25.1
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the relevant packages.
- sudo apt update
- sudo apt install libz-dev libssl-dev libcurl4-gnutls-dev libexpat1-dev gettext cmake gcc
After you have installed the necessary dependencies, create a temporary directory and move into it. This is where we will download our Git tarball.
- mkdir tmp
- cd tmp
From the Git project website, we can navigate to the tarball list available at https://mirrors.edge.kernel.org/pub/software/scm/git/ and download the version you would like. At the time of writing, the most recent version is 2.26.2, so we will download that for demonstration purposes. We’ll use curl and output the file we download to git.tar.gz
.
- curl -o git.tar.gz https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.26.2.tar.gz
Unpack the compressed tarball file:
- tar -zxf git.tar.gz
Next, move into the new Git directory:
- cd git-*
Now, you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Now, replace the shell process so that the version of Git we just installed will be used:
- exec bash
With this complete, you can be sure that your install was successful by checking the version.
- git --version
Output
git version 2.26.2
With Git successfully installed, you can now complete your setup.
After you are satisfied with your Git version, you should configure Git so that the generated commit messages you make will contain your correct information and support you as you build your software project.
Configuration can be achieved by using the git config
command. Specifically, we need to provide our name and email address because Git embeds this information into each commit we do. We can go ahead and add this information by typing:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
We can display all of the configuration items that have been set by typing:
- git config --list
Output
user.name=Your Name
user.email=youremail@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor of your choice like this (we’ll use nano):
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Press CTRL
and X
, then Y
then ENTER
to exit the text editor.
There are many other options that you can set, but these are the two essential ones needed. If you skip this step, you’ll likely see warnings when you commit to Git. This makes more work for you because you will then have to revise the commits you have done with the corrected information.
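If you did commit before configuring your identity, one way to apply the corrected information to your most recent commit is `git commit --amend --reset-author`. The following sketch demonstrates this in a throwaway repository; the /tmp path, file name, and commit message are just examples:

```shell
# Throwaway repository to demonstrate fixing the author of the last commit.
mkdir -p /tmp/amend-demo && cd /tmp/amend-demo
git init -q .

# Commit with the wrong identity:
git config user.name "Wrong Name"
git config user.email "wrong@example.com"
echo 'hello' > file.txt
git add file.txt
git commit -q -m "initial commit"

# Correct the identity, then rewrite the last commit's author:
git config user.name "Your Name"
git config user.email "youremail@domain.com"
git commit --amend --reset-author --no-edit -q

git log -1 --format='%an <%ae>'    # Your Name <youremail@domain.com>
```

Note that amending rewrites history, so only do this on commits you have not yet pushed and shared.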
You should now have Git installed and ready to use on your system.
To learn more about how to use Git, check out these articles and series:
Version control systems are an indispensable part of modern software development. Versioning allows you to track your software at the source level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
One of the most popular version control systems today is Git. Many projects' files are maintained in a Git repository, and sites like GitHub, GitLab, and Bitbucket help facilitate sharing and collaboration on software development projects.
In this guide, you will learn how to install and configure Git on a CentOS 8 server. We will cover how to install the software in two different ways: via the built-in package manager and from source. Each of these approaches offers its own benefits depending on your needs.
You will need a CentOS 8 server with a non-root superuser account.
To set this up, you can follow our Initial Server Setup Guide for CentOS 8.
With your server and user set up, you are ready to begin.
Our first option for installing Git uses CentOS's default packages.
This option is best for users who want to get up and running with Git quickly, who prefer a widely used stable version, or who do not need the newest available features. If you are looking for the most recent release, you should jump to the section on installing from source.
We will use the open-source package manager DNF, which stands for Dandified YUM, the next generation of the Yellowdog Updater, Modified (YUM). DNF is now the default package manager for Red Hat-based Linux systems such as CentOS. It lets you install, update, and remove software packages on your server.
First, use the DNF package management tools to update your local package index.
- sudo dnf update -y
The -y
flag tells the system that we know we are making changes, which prevents the terminal from prompting us to confirm them.
Once the update is complete, you can install Git:
- sudo dnf install git -y
You can verify that you have installed Git correctly by running the following command:
- git --version
Output
git version 2.18.2
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained by your package manager, but it allows you to download the latest release and gives you some control over the options you include if you wish to customize your installation.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages.
- sudo dnf update -y
- sudo dnf install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel gcc autoconf -y
After you have installed the necessary dependencies, create a temporary directory and move into it. This is where we will download our Git tarball.
- mkdir tmp
- cd tmp
From the Git project website, we can navigate to the tarball list for the Red Hat Linux distribution available at https://mirrors.edge.kernel.org/pub/software/scm/git/ and download the version you would like. At the time of writing, the most recent version is 2.26.0, so we will download that for demonstration purposes. We'll use curl and output the file we download to git.tar.gz
.
- curl -o git.tar.gz https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.26.0.tar.gz
Unpack the compressed tarball file:
- tar -zxf git.tar.gz
Next, move into the new Git directory:
- cd git-*
Now you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Once this has completed, you can verify that your installation was successful by checking the version.
- git --version
Output
git version 2.26.0
With Git successfully installed, you can now complete your setup.
Now that you have Git installed, you should configure it so that your generated commit messages contain your correct information.
This can be achieved by using the git config
command. Specifically, we need to provide our name and email address because Git embeds this information into every commit we make. We can go ahead and add this information by typing:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
We can display all of the configuration items that have been set by typing:
- git config --list
Output
user.name=Your Name
user.email=youremail@domain.com
...
The information you enter is stored in your Git configuration file, which you can edit as needed with a text editor like this:
- vi ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Press ESC
, then :q
and ENTER
to exit the text editor.
There are many other options you can set, but these two are essential. If you skip this step, you will likely see warnings when you commit to Git. This makes more work for you, because you will then have to revise your commits with the corrected information.
You should now have Git installed and ready to use on your system.
To learn more about how to use Git, check out these articles and series:
Your help/feedback would be so much appreciated!
Version control systems are an indispensable part of modern software development processes. Version control helps you track changes to your software at the source-code level. You can track changes, return to previous versions, and branch to create alternate versions of files and directories.
One of the most popular version control systems today is Git. Many projects' files are kept in a Git repository, and sites like GitHub, GitLab, and Bitbucket make it easier to share and collaborate on software development projects.
In this tutorial, we will learn how to install and configure Git on a CentOS 8 server. We will cover two ways of installing the software: via the built-in package manager and from source. Each of these approaches has its own benefits depending on your specific needs.
You will need a CentOS 8 server with a non-root superuser account.
To set this up, follow our Initial Server Setup Guide for CentOS 8.
Once your server and user are set up, you can continue.
The first option for installing Git is to use the default CentOS packages.
This option is best suited for those who want to start working with Git quickly, who prefer a widely used stable version, and who do not need the very latest features. If you are interested in the most recent release, skip to the section on installing from source.
We will use the open-source package manager DNF (Dandified YUM), the next generation of the Yellowdog Updater, Modified (yum). DNF is now the default package manager on Red Hat-based Linux systems, including CentOS. With it, you can install, update, and remove software packages on your server.
First, use the DNF package management tools to update your local package index.
- sudo dnf update -y
The -y
flag tells the system that we know we are making changes, so the terminal will not prompt us for confirmation.
Once the update is complete, you can install Git:
- sudo dnf install git -y
You can verify that Git was installed correctly by running the following command:
- git --version
Output
git version 2.18.2
After successfully installing Git, you can move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This method takes more time, and the result will not be maintained by your package manager, but it lets you download the latest release and gives you some control over the options you include if you need a custom setup.
Before you begin, you need to install the software that Git depends on. It can be found in the default repositories, so we can update our local package index and then install the packages.
- sudo dnf update -y
- sudo dnf install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel gcc autoconf -y
After installing the necessary dependencies, create a temporary directory and move into it. This is where we will download the Git tarball.
- mkdir tmp
- cd tmp
On the Git project website, navigate to the tarball list for Red Hat Linux distributions at https://mirrors.edge.kernel.org/pub/software/scm/git/ and download the version you would like. At the time of writing, the latest version was 2.26.0, so we will download that version for demonstration purposes. We will use curl and output the downloaded file to git.tar.gz
.
- curl -o git.tar.gz https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.26.0.tar.gz
Unpack the tarball:
- tar -zxf git.tar.gz
Move into the new Git directory:
- cd git-*
Now you can build the package and install it by entering these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Now you can check the version to make sure the installation succeeded.
- git --version
Output
git version 2.26.0
We have successfully installed Git and can now complete the setup.
Now that you have installed Git, you need to configure it so that your generated commit messages contain the correct information.
This can be done with the git config
command. Specifically, we need to provide our name and email address, because Git embeds this information into every commit. We can add this information with the following commands:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
We can view all the configuration items that have been set by entering the following command:
- git config --list
Output
user.name=Your Name
user.email=youremail@domain.com
...
The information you enter is stored in your Git configuration file, which you can edit by hand with a text editor if you wish:
- vi ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Press ESC
, then :q
and ENTER
to exit the text editor.
There are many other options you can set, but these two are required. If you skip this step, you will most likely see warnings when committing to Git. This creates extra work, because you will then have to amend the commits you made with the corrected information.
You have installed Git and are ready to use it on your system.
To learn more about how to use Git, check out these articles and series:
]]>Version control systems are an indispensable part of modern software development. Version control lets you keep track of your software at the source level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
One of the most popular version control systems available today is Git. Many projects' files are kept in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this guide, we will go over how to install and configure Git on a CentOS 8 server. We will cover how to install the software in two different ways: via the built-in package manager and from source. Each of these approaches has its own benefits depending on your specific needs.
You will need a CentOS 8 server with a non-root superuser account.
To set this up, you can follow our Initial Server Setup Guide for CentOS 8.
With your server and user set up, you are ready to begin.
Our first option for installing Git is via CentOS's default packages.
This option is best for those who want to get up and running quickly with Git, those who prefer a widely used stable version, or those who are not looking for the newest available features. If you are looking for the most recent release, you should jump to the section on installing from source.
We will be using the open-source package management tool DNF, which stands for Dandified YUM, the next-generation version of the Yellowdog Updater, Modified (that is, yum). DNF is now the default package manager on Red Hat-based Linux systems like CentOS. It lets you install, update, and remove software packages on your server.
First, use the dnf package management tools to update your local package index.
- sudo dnf update -y
The -y flag is used to alert the system that we are aware we are making changes, preventing the terminal from prompting us to confirm.
Once the update completes, install Git:
- sudo dnf install git -y
You can confirm that you have installed Git correctly by running the following command:
- git --version
Outputgit version 2.18.2
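When scripting around Git, it can be useful to compare that version string against a required minimum. A small sketch (the parse_git_version helper is illustrative, not part of Git):

```python
# Sketch: turn `git --version` output such as "git version 2.18.2" into a
# comparable tuple. parse_git_version is an illustrative helper.
def parse_git_version(output: str) -> tuple:
    # The third whitespace-separated token holds the dotted version number.
    numbers = output.split()[2].split(".")
    return tuple(int(n) for n in numbers[:3])

print(parse_git_version("git version 2.18.2") >= (2, 0, 0))  # → True
```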
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and give you control over the options you include if you wish to customize your installation.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages.
- sudo dnf update -y
- sudo dnf install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel gcc autoconf -y
After installing the necessary dependencies, create a temporary directory and move into it. This is where we will download our Git tarball.
- mkdir tmp
- cd tmp
From the Git project website, we can navigate to the Red Hat Linux distribution tarball list available at https://mirrors.edge.kernel.org/pub/software/scm/git/ and download the version you would like. At the time of writing, the most recent version is 2.26.0, so we will download that version for demonstration purposes. We will use curl and output the file we download to git.tar.gz.
- curl -o git.tar.gz https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.26.0.tar.gz
Unpack the compressed tarball file:
- tar -zxf git.tar.gz
Next, move into the new Git directory:
- cd git-*
Now you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Once this is finished, confirm that the installation was successful by checking the version.
- git --version
Outputgit version 2.26.0
With Git installed, you can now finish your setup.
Now that you have Git installed, you will need to configure it so that the generated commit messages contain your correct information.
This can be achieved by using the git config command. Specifically, we need to provide our name and email address, because Git embeds this information into every commit we make. We can go ahead and add this information by typing:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
We can display all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Your Name
user.email=youremail@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor, like this:
- vi ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Press ESC, then :q, to exit the text editor.
There are many other options you can set, but these two are essential. If you skip this step, you will likely see warnings when you commit with Git. This makes more work for you, because you will then have to amend the commits you have made with the corrected information.
You should now have Git installed and ready to use on your system.
To learn more about how to use Git, check out these articles and series:
]]>Version control systems are an indispensable part of modern software development. Versioning lets you keep track of your software at the source level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Git is one of the most popular version control systems available today. Many projects' files are kept in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this guide, we will go over how to install and configure Git on a CentOS 8 server. We will cover how to install the software in two different ways: via the built-in package manager and from source. Each of these approaches has its own benefits depending on your specific needs.
You will need a CentOS 8 server with a non-root superuser account.
To set this up, you can follow our Initial Server Setup Guide for CentOS 8.
With your server and user set up, you are ready to begin.
Our first option for installing Git is via CentOS's default packages.
This option is best for those who want to get up and running quickly with Git, those who prefer a widely used stable version, or those who are not looking for the newest available features. If you are looking for the most recent release, you should jump to the section on installing from source.
We will be using the open-source package management tool DNF, which stands for Dandified YUM, the next-generation version of the Yellowdog Updater, Modified (that is, yum). DNF is now the default package manager on Red Hat-based Linux systems like CentOS. It lets you install, update, and remove software packages on your server.
First, use the dnf package management tools to update your local package index.
- sudo dnf update -y
The -y flag is used to alert the system that we are aware we are making changes, preventing the terminal from asking us to confirm.
Once the update is complete, you can install Git:
- sudo dnf install git -y
You can confirm that you have installed Git correctly by running the following command:
- git --version
Outputgit version 2.18.2
With Git successfully installed, you can move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and give you some control over the options you include if you wish to customize your installation.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages.
- sudo dnf update -y
- sudo dnf install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel gcc autoconf -y
Once you have installed the necessary dependencies, create a temporary directory and move into it. This is where we will download our Git tarball.
- mkdir tmp
- cd tmp
From the Git project website, we can navigate to the Red Hat Linux distribution tarball list available at https://mirrors.edge.kernel.org/pub/software/scm/git/ and download the version you would like. At the time of writing, the most recent version is 2.26.0, so we will download that version for demonstration purposes. We will use curl and output the file we download to git.tar.gz.
- curl -o git.tar.gz https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.26.0.tar.gz
Unpack the compressed tarball file:
- tar -zxf git.tar.gz
Next, move into the new Git directory:
- cd git-*
Now, you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Once this is complete, you can make sure that your installation was successful by checking the version.
- git --version
Outputgit version 2.26.0
With Git successfully installed, you can now complete your setup.
Now that you have Git installed, you need to configure it so that the generated commit messages contain your correct information.
This can be achieved by using the git config command. Specifically, we need to provide our name and email address, because Git embeds this information into every commit we make. We can go ahead and add this information by typing:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
We can display all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Your Name
user.email=youremail@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor, like this:
- vi ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Press ESC, then :q, to exit the text editor.
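Since the [user] block shown above follows an INI-style layout, it can also be read programmatically. A sketch using Python's configparser (this assumes the unindented layout shown above; Git's own files often indent values with tabs, which configparser would treat as line continuations):

```python
# Sketch: read a [user] block like the one above with configparser.
# Assumes unindented "key = value" lines, as displayed in this article.
import configparser

text = """\
[user]
name = Your Name
email = youremail@domain.com
"""

parser = configparser.ConfigParser()
parser.read_string(text)
print(parser["user"]["email"])  # → youremail@domain.com
```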
There are many other options you can set, but these two are essential. If you skip this step, you will likely see warnings when you commit with Git. This makes more work for you, because you will then have to amend the commits you have made with the corrected information.
You should now have Git installed and ready to be used on your system.
To learn more about how to use Git, check out these articles and series:
]]>Visual Studio Code (VS Code) has become one of the most popular editors out there for web development. It has gained such popularity thanks to its many built-in features such as source control integration, namely with Git. Harnessing the power of Git from within VS Code can make your workflow more efficient and robust.
In this tutorial, you will explore using Source Control Integration in VS Code with Git.
To complete this tutorial, you will need the following:
The first thing you need to do to take advantage of source control integration is initialize a project as a Git repository.
Open Visual Studio Code and access the built-in terminal. You can open this by using the keyboard shortcut CTRL + `
on Linux, macOS, or Windows.
In your terminal, make a directory for a new project and change into that directory:
- mkdir git_test
- cd git_test
Then, create a Git repository:
- git init
Another way to accomplish this with Visual Studio Code is by opening up the Source Control tab (the icon looks like a split in the road) in the left-side panel:
Next, select Open Folder:
This will open up your file explorer to the current directory. Select the preferred project directory and click Open.
Then, select Initialize Repository:
If you now check your file system, you will see that it includes a .git
directory. To do this, use the terminal to navigate to your project directory and list all of the contents:
- ls -la
You will see the .git
directory that was created:
Output.
..
.git
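The presence of that .git directory is what marks the folder as a repository. From a script, the same check as the ls -la above might look like this (is_git_repo is a hypothetical helper; note that linked worktrees use a .git file rather than a directory):

```python
# Sketch: check whether a directory has been initialized as a Git repository
# by looking for the .git entry. Hypothetical helper; in linked worktrees
# .git is a file rather than a directory, so we check for either.
from pathlib import Path

def is_git_repo(path: str) -> bool:
    return (Path(path) / ".git").exists()
```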
Now that the repo has been initialized, add a file called index.html
.
After doing so, you’ll see in the Source Control panel that your new file shows up with the letter U beside it. U stands for untracked file, meaning a file that is new or changed, but has not yet been added to the repository:
You can now click the plus icon (+) by the index.html
file listing to track the file by the repository.
Once added, the letter next to the file will change to an A. A represents a new file that has been added to the repository.
To commit your changes, type a commit message into the input box at the top of the Source Control panel. Then, click the check icon to perform the commit.
After doing so, you will notice that there are no pending changes.
Next, add a bit of content to your index.html
file.
You can use an Emmet shortcut to generate an HTML5 skeleton in VS Code by pressing the !
key followed by the Tab
key. Go ahead and add something in the <body>
like a <h1>
heading and save it.
In the source control panel, you will see that your file has been changed. It will show the letter M next to it, which stands for a file that has been modified:
For practice, go ahead and commit this change as well.
Now that you’re familiar with interacting with the Source Control panel, you will move on to interpreting gutter indicators.
In this step you will take a look at what’s called the “Gutter” in VS Code. The gutter is the skinny area to the right of the line number.
If you’ve used code folding before, the maximize and minimize icons are located in the gutter.
Let’s start by making a small change to your index.html
file, such as a change to the content within the <h1>
tag. After doing so, you will notice a blue vertical mark in the gutter of the line that you changed. The vertical blue mark signifies that the corresponding line of code has been changed.
Now, try deleting a line of code. You can delete one of the lines in the <body>
section of your index.html
file. Notice now in the gutter that there is a red triangle. The red triangle signifies a line or group of lines that has been deleted.
Lastly, at the bottom of your <body>
section, add a new line of code and notice the green bar. The vertical green bar signifies a line of code that has been added.
This example depicts gutter indicators for a modified line, a removed line, and a new line:
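The three gutter states map directly onto the edit operations a diff algorithm reports. A sketch with Python's difflib (the GUTTER mapping is illustrative, not VS Code's actual implementation):

```python
# Sketch: reproduce the three gutter states (modified, deleted, added) from
# difflib's opcodes. The GUTTER mapping is illustrative.
import difflib

GUTTER = {"replace": "blue (modified)", "delete": "red (deleted)", "insert": "green (added)"}

old = ["line one", "line two", "line three"]
new = ["line one changed", "line two"]

matcher = difflib.SequenceMatcher(a=old, b=new)
ops = [tag for tag, *_ in matcher.get_opcodes()]
print(ops)  # → ['replace', 'equal', 'delete']
```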
VS Code also has the ability to perform a diff on a file. Typically, you would have to download a separate diff tool to do this, so this built-in feature can help you work more efficiently.
To view a diff, open up the source control panel and double-click a changed file. In this case, double-click the index.html
file. You will be brought to a typical diff view, with the previously committed version of the file on the left and the current version of the file on the right.
This example shows that a line has been added in the current version:
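The same before/after comparison can be produced outside the editor with difflib's unified_diff; the "HEAD" and "working" file labels here are illustrative:

```python
# Sketch: a unified diff of the previously committed version against the
# current one, similar in content to the VS Code diff view.
import difflib

previous = ["<body>", "  <h1>Hello</h1>", "</body>"]
current = ["<body>", "  <h1>Hello</h1>", "  <p>a new line</p>", "</body>"]

diff = list(difflib.unified_diff(previous, current, fromfile="HEAD", tofile="working", lineterm=""))
print("\n".join(diff))
```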
Moving to the bottom bar, you have the ability to create and switch branches. If you take a look at the very bottom left of the editor, you should see the source control icon (the one that looks like a split in the road) followed most likely by master
or the name of the current working branch.
To create a branch, click on that branch name. A menu should pop up giving you the ability to create a new branch:
Go ahead and create a new branch called test
.
Now, make a change to your index.html
file that signifies you are in the new test
branch, such as adding the text this is the new test branch
.
Commit those changes to the test
branch. Then, click the branch name in the bottom left again to switch back to the master
branch.
After switching back to the master
branch, you’ll notice that the this is the new test branch
text committed to the test
branch is no longer present.
This tutorial won’t touch on it in-depth, but through the Source Control panel, you do have access to work with remote repositories. If you’ve worked with a remote repository before you’ll notice familiar commands like pull, sync, publish, stash, etc.
Not only does VS Code come with lots of built-in functionality for Git, there are also several very popular extensions to add additional functionality.
This extension provides the ability to view Git Blame information in the status bar for the currently selected line.
This may sound intimidating, but not to worry, the Git Blame extension is much more about practicality than it is about making someone feel bad. The idea of “blaming” someone for a code change is less about shaming them, and more about figuring out the right person to ask questions to for certain pieces of code.
As you can see in the screenshot, this extension provides a subtle message related to the current line of code you are working on in the bottom toolbar explaining who made the change and when they made it.
Although you can view current changes, perform diffs, and manage branches with the built-in features in VS Code, it does not provide an in-depth view into your Git history. The Git History extension solves that issue.
As you can see in the image below, this extension allows you to thoroughly explore the history of a file, a given author, a branch, etc. To activate the Git History window below, right-click on a file and choose Git: View File History:
Additionally, you can compare branches and commits, create branches from commits, and more.
GitLens supercharges the Git capabilities built into Visual Studio Code. It helps you to visualize code authorship at a glance via Git blame annotations and code lens, seamlessly navigate and explore Git repositories, gain valuable insights via powerful comparison commands, and so much more.
The GitLens extension is one of the most popular in the community and is also the most powerful. In most ways, its functionality can replace that of the previous two extensions.
For “blame” information, a subtle message appears to the right of the line you are currently working on to inform you of who made the change, when they made it, and the associated commit message. There are some additional pieces of information that pop up when hovering over this message like the code change itself, the timestamp, and more.
For Git history information, this extension provides a lot of functionality. You have easy access to tons of options including showing file history, performing diffs with previous versions, opening a specific revision, and more. To open up these options you can click the text in the bottom status bar that contains the author who edited the line of code and how long ago it was edited.
This will open up the following window:
This extension is packed with functionality, and it will take a while to take in all that it has to offer.
In this tutorial, you explored how to use source control integration with VS Code. VS Code can handle many features that previously would have required the download of a separate tool.
]]>Version control systems are an indispensable part of modern software development. Version control lets you keep track of your software at the source level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Git is one of the most popular version control systems available today. Many projects' files are kept in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this guide, we will go over how to install and configure Git on a CentOS 8 server. We will cover how to install the software in two different ways: via the built-in package manager and from source. Each of these approaches has its own benefits depending on your specific needs.
You will need a CentOS 8 server with a non-root superuser account.
To set this up, follow our Initial Server Setup Guide for CentOS 8.
With your server and user set up, you are ready to begin.
Our first option for installing Git is via CentOS's default packages.
This option is best for those who want to get up and running quickly with Git, those who prefer a widely used stable version, or those who are not looking for the newest available features. If you are looking for the most recent release, you should jump to the section on installing from source.
We will be using the open-source package management tool DNF, which stands for Dandified YUM, the next-generation version of the Yellowdog Updater, Modified (that is, yum). DNF is now the default package manager on Red Hat-based Linux systems like CentOS. It will let you install, update, and remove software packages on your server.
First, use the dnf package management tools to update your local package index.
- sudo dnf update -y
The -y flag is used to alert the system that we know we are making changes, preventing the terminal from prompting us to confirm.
With the update complete, you can install Git:
- sudo dnf install git -y
You can confirm that you installed Git correctly by running the following command:
- git --version
Outputgit version 2.18.2
Once you have installed Git successfully, you can move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and give you some control over the options you include if you wish to customize your installation.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages.
- sudo dnf update -y
- sudo dnf install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel gcc autoconf -y
After installing the necessary dependencies, create a temporary directory and move into it. This is where we will download our Git tarball.
- mkdir tmp
- cd tmp
From the Git project website, we can navigate to the Red Hat Linux distribution tarball list available at https://mirrors.edge.kernel.org/pub/software/scm/git/ and download the version you would like. At the time of writing, the most recent version is 2.26.0, so we will download that version for demonstration purposes. We will use curl and output the file we download to git.tar.gz.
- curl -o git.tar.gz https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.26.0.tar.gz
Unpack the tarball file:
- tar -zxf git.tar.gz
Next, move into the new Git directory:
- cd git-*
Now you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Once this is complete, you can be sure that your installation was successful by checking the version.
- git --version
Outputgit version 2.26.0
With Git successfully installed, you can now finish your setup.
Now that you have Git installed, you need to configure it so that the generated commit messages contain your correct information.
This can be achieved by using the git config command. Specifically, we need to provide our name and email address, because Git embeds this information into every commit we make. We can add this information by typing the following:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
We can view all of the configuration items that have been set by typing the following:
- git config --list
Outputuser.name=Your Name
user.email=youremail@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit with a text editor as follows:
- vi ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Press ESC, then :q, to exit the text editor.
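The same [user] block can also be generated from a script with Python's configparser. A sketch (configparser writes unindented "key = value" lines rather than Git's usual tab indentation, but Git's config parser does not require indentation):

```python
# Sketch: generate a minimal Git-style [user] section with configparser.
# The output uses unindented "key = value" lines, which git still accepts.
import configparser
import io

config = configparser.ConfigParser()
config["user"] = {"name": "Your Name", "email": "youremail@domain.com"}

out = io.StringIO()
config.write(out)
print(out.getvalue())
```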
There are many other options you can set, but these two are essential. If you skip this step, you will likely see warnings when you commit with Git. This means more work for you, because you will then have to amend the commits you have made with the corrected information.
You should now have Git installed and ready to use on your system.
To learn more about how to use Git, check out the following articles and series:
so I have a simple Flask application that I would like to deploy to my Ubuntu 14.04 server that is already set up as a LAMP server. I have the Apache default configuration with MySQL and both Python 2.7 and 3.7 set up. I have a specific subdomain where I want to deploy my Flask app. I followed this tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-apache-mysql-and-python-lamp-server-without-frameworks-on-ubuntu-14-04
But at the end I get a 500 error, and when I look at the Apache error log I get that the flask module is not installed, which is not true, because when I check my virtual environment every requirement is there. I made my virtual environment with ‘pipenv shell’ and then I installed my requirements with ‘pip install -r requirements.txt’.
Error from the log:
mod_wsgi (pid=3501): Target WSGI script '/var/www/ja.estudent.hr/maktivnosti_backend.wsgi' cannot be loaded as Python module.
mod_wsgi (pid=3501): Exception occurred processing WSGI script '/var/www/ja.estudent.hr/maktivnosti_backend.wsgi'.
Traceback (most recent call last):
File "/var/www/ja.estudent.hr/maktivnosti_backend.wsgi", line 12, in <module>
from maktivnosti_backend import app as application
File "/var/www/ja.estudent.hr/maktivnosti_backend/app.py", line 1, in <module>
from flask import Flask, request, jsonify
ImportError: No module named 'flask'
Here is my setup:
My directory tree:
/var/www/ja.estudent.hr/ |-maktivnosti_backend |-app.py |-maktivnosti_backend.wsgi
My Apache configuration:
<IfModule mod_ssl.c>
<VirtualHost *:443>
ServerAlias www.ja.estudent.hr
ServerName ja.estudent.hr
ServerAdmin webmaster@estudent.hr
#referring the user to the recipes application
DocumentRoot /var/www/ja.estudent.hr/maktivnosti_backend
WSGIScriptAlias / /var/www/ja.estudent.hr/maktivnosti_backend.wsgi
<Directory /var/www/ja.estudent.hr/maktivnosti_backend/>
Require all granted
Order allow,deny
Allow from all
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
RewriteCond %{HTTP_HOST} ^[^.]+\.[^.]+$
RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
RewriteCond %{HTTP_HOST} ^www\.([^.]+\.[^.]+\.[^.]+)$ [NC]
RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]
RewriteEngine on
Include /etc/letsencrypt/options-ssl-apache.conf
SSLCertificateFile /etc/letsencrypt/live/www.ja.estudent.hr/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/www.ja.estudent.hr/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/www.ja.estudent.hr/chain.pem
</VirtualHost>
</IfModule>
My wsgi file:
activate_this = '/home/(username)/.local/share/virtualenvs/maktivnosti_backend-tbfZiu-1/bin/activate_this.py'
with open(activate_this) as file_:
exec(file_.read(), dict(__file__=activate_this))
#!/usr/bin/python
import sys
import logging
import site
sys.path.append('/home/(username)/.local/share/virtualenvs/maktivnosti_backend-tbfZiu-1/lib/python3.7/site-packages')
site.addsitedir('/home/(username)/.local/share/virtualenvs/maktivnosti_backend-tbfZiu-1/lib/python3.7/site-packages')
logging.basicConfig(stream=sys.stderr)
sys.path.insert(0, '/var/www/ja.estudent.hr/')
from maktivnosti_backend import app as application
application.secret_key = '(my secret key)'
If anyone can help me solve this problem I would really appreciate it!
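In case it helps with debugging, one idea I am considering (untested) is temporarily adding these lines at the very top of the .wsgi file to log which interpreter and sys.path mod_wsgi is actually using, since a mismatch between mod_wsgi's embedded Python and the python3.7 virtualenv would explain flask not being found:

```python
# Untested debugging idea: log the interpreter version and module search
# path that mod_wsgi embeds, to compare against the python3.7 virtualenv
# paths referenced in the .wsgi file above.
import sys
import logging

logging.basicConfig(stream=sys.stderr)
logging.warning("mod_wsgi Python: %s", sys.version)
logging.warning("sys.path: %s", sys.path)
```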
]]>DNSControl is an infrastructure-as-code tool that allows you to deploy and manage your DNS zones using standard software development principles, including version control, testing, and automated deployment. DNSControl was created by Stack Exchange and is written in Go.
Using DNSControl eliminates many of the pitfalls of manual DNS management, as zone files are stored in a programmable format. This lets you deploy zones to multiple DNS providers simultaneously, identify syntax errors, and push out your DNS configuration automatically, reducing the risk of human error. Another common use of DNSControl is to quickly migrate your DNS to a different provider, for example in the event of a DDoS attack or a system outage.
In this tutorial, you will install and configure DNSControl, create a basic DNS configuration, and begin deploying DNS records at a live provider. As part of this tutorial, we will use DigitalOcean as the example DNS provider. If you wish to use a different provider, the setup is very similar. When you are finished, you will be able to manage and test your DNS configuration in a safe, offline environment and then automatically deploy it to production.
Before you begin this guide, you will need the following:
your-server-ipv4-address
refers to the IPv4 address of the server hosting your website or domain. your-server-ipv6-address
refers to the IPv6 address of the server hosting your website or domain. This tutorial uses your_domain
as the example domain and DigitalOcean as the service provider. Once you have these available, begin by logging in to your server as your non-root user.
DNSControl is written in Go, so you will start this step by installing Go on your server and setting your GOPATH.
Go is available within Debian's default software repositories, making it possible to install it using conventional package management tools.
You will also install Git, as this is required to allow Go to download and install the DNSControl software from its repository on GitHub.
Begin by updating the local package index to reflect any new upstream changes:
- sudo apt update
Then, install the golang-go
and git
packages:
- sudo apt install golang-go git
After confirming the installation, apt will download and install Go and Git, as well as all of their required dependencies.
Next, you'll configure the path environment variables that Go requires. If you would like to learn more about this, you can read the Understanding the GOPATH tutorial. Start by editing the ~/.profile file:
- nano ~/.profile
Add the following lines to the very end of your file:
...
export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"
Once you have added these lines to the end of the file, save and close it. Then reload your profile by either logging out and back in, or by sourcing the file again:
- source ~/.profile
Now that you've installed and configured Go, you can install DNSControl.
The go get command can be used to fetch a copy of the code, automatically compile it, and install it into your Go directory:
- go get github.com/StackExchange/dnscontrol
Once this has completed, you can check the installed version to make sure that everything is working:
- dnscontrol version
You'll see output similar to the following:
Outputdnscontrol 2.9-dev
If you see a dnscontrol: command not found error, double-check your Go path setup.
Now that you've installed DNSControl, you can create a configuration directory and connect DNSControl to your DNS provider so that it can make changes to your DNS records.
In this step, you'll create the required configuration directories for DNSControl, and connect it to your DNS provider so that it can begin to make live changes to your DNS records.
First, create a new directory in which you can store your DNSControl configuration, and then move into it:
- mkdir ~/dnscontrol
- cd ~/dnscontrol
Note: This tutorial focuses on the initial setup of DNSControl; however, for production use it is recommended to store your DNSControl configuration in a version control system (VCS) such as Git. The advantages of this include full version control, integration with CI/CD for testing, seamless rollback deployments, and so on.
If you plan to use DNSControl to write BIND zone files, you should also create the zones directory:
- mkdir ~/dnscontrol/zones
BIND zone files are a raw, standardized method for storing DNS zones/records in plain text format. They were originally used for the BIND DNS server software, but are now widely adopted as the standard method for storing DNS zones. BIND zone files produced by DNSControl are useful if you want to import them into a custom or self-hosted DNS server, or for auditing purposes.
However, if you only want to use DNSControl to push DNS changes to a managed provider, the zones directory will not be needed.
Next, you need to configure the creds.json file, which is what will allow DNSControl to authenticate to your DNS provider and make changes. The format of creds.json differs slightly depending on the DNS provider that you are using. Please see the Service Providers list in the official DNSControl documentation to find the configuration for your own provider.
Create the creds.json file in the ~/dnscontrol directory:
- cd ~/dnscontrol
- nano creds.json
Add the sample creds.json configuration for your DNS provider to the file. If you're using DigitalOcean as your DNS provider, you can use the following:
{
"digitalocean": {
"token": "your-digitalocean-oauth-token"
}
}
This file tells DNSControl which DNS providers it should connect to.
You'll need to provide some form of authentication for your DNS provider. This is usually an API key or OAuth token, but some providers require extra information, as documented in the Service Providers list in the official DNSControl documentation.
Warning: This token grants access to your DNS provider account, so you should protect it as you would a password. Also, if you're using a version control system, make sure that either the file containing the token is excluded (e.g. using .gitignore), or that it is securely encrypted in some way.
If you're using DigitalOcean as your DNS provider, you can use the required OAuth token from your DigitalOcean account settings that you generated as part of the prerequisites.
If you have multiple different DNS providers, for example for multiple domain names or delegated DNS zones, you can define them all in the same creds.json file.
You've set up the initial DNSControl configuration directories, and configured creds.json to allow DNSControl to authenticate to your DNS provider and make changes. Next, you'll create the configuration for your DNS zones.
In this step, you'll create an initial DNS configuration file, which will contain the DNS records for your domain name or delegated DNS zone.
dnsconfig.js is the main DNS configuration file for DNSControl. In this file, DNS zones and their corresponding records are defined using JavaScript syntax. This is known as a DSL, or Domain Specific Language. The JavaScript DSL page in the official DNSControl documentation provides further details.
To begin, create the DNS configuration file in the ~/dnscontrol directory:
- cd ~/dnscontrol
- nano dnsconfig.js
Then, add the following sample configuration to the file:
// Providers:
var REG_NONE = NewRegistrar('none', 'NONE');
var DNS_DIGITALOCEAN = NewDnsProvider('digitalocean', 'DIGITALOCEAN');
// Domains:
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
A('@', 'your-server-ipv4-address')
);
This sample file defines a domain name or DNS zone at a particular provider, which in this case is your_domain hosted by DigitalOcean. An example A record is also defined for the zone root (@), pointing to the IPv4 address of the server that you're hosting your domain/website on.
There are three main functions that make up a basic DNSControl configuration file:
NewRegistrar(name, type, metadata): defines the domain registrar for your domain name. DNSControl can use this to make required changes, such as modifying the authoritative nameservers. If you only want to use DNSControl to manage your DNS zones, this can generally be left as NONE.
NewDnsProvider(name, type, metadata): defines a DNS service provider for your domain name or delegated zone. This is where DNSControl will push the DNS changes that you make.
D(name, registrar, modifiers): defines a domain name or delegated DNS zone for DNSControl to manage, as well as the DNS records present in the zone.
You should configure NewRegistrar(), NewDnsProvider(), and D() accordingly using the Service Providers list in the official DNSControl documentation.
If you're using DigitalOcean as your DNS provider and only need to be able to make DNS changes (rather than also modify authoritative nameservers), the sample in the preceding code block is already correct.
Once complete, save and close the file.
In this step, you set up a DNS configuration file for DNSControl, with the relevant providers defined. Next, you'll populate the file with some useful DNS records.
Next, you can populate the DNS configuration file with useful DNS records for your website or service, using the DNSControl syntax.
Unlike traditional BIND zone files, where DNS records are written in a raw, line-by-line format, DNS records within DNSControl are defined as function parameters (domain modifiers) to the D() function, as shown briefly in Step 3.
A domain modifier exists for each of the standard DNS record types, including A, AAAA, MX, TXT, NS, CAA, and so on. A full list of available record types is available in the Domain Modifiers section of the DNSControl documentation.
Modifiers for individual records (record modifiers) are also available. Currently these are primarily used for setting the TTL (time to live) of individual records. A full list of available record modifiers is available in the Record Modifiers section of the DNSControl documentation. Record modifiers are optional, and in most basic use cases can be left out.
The syntax for setting DNS records varies slightly for each record type. Following are some examples for the most common record types:
A records:
A('name', 'address', optional record modifiers)
A('@', 'your-server-ipv4-address', TTL(30))
AAAA records:
AAAA('name', 'address', optional record modifiers)
AAAA('@', 'your-server-ipv6-address') (record modifier left out, so the default TTL will be used)
CNAME records:
CNAME('name', 'target', optional record modifiers)
CNAME('subdomain1', 'example.org.') (note: a trailing . must be included if the value contains any dots)
MX records:
MX('name', 'priority', 'target', optional record modifiers)
MX('@', 10, 'mail.example.net.') (note: a trailing . must be included if the value contains any dots)
TXT records:
TXT('name', 'content', optional record modifiers)
TXT('@', 'This is a TXT record.')
CAA records:
CAA('name', 'tag', 'value', optional record modifiers)
CAA('@', 'issue', 'letsencrypt.org')
To begin adding DNS records for your domain or delegated DNS zone, edit your DNS configuration file:
- nano dnsconfig.js
Next, you can begin populating the parameters of the existing D() function using the syntax described in the preceding list, as well as the Domain Modifiers section of the official DNSControl documentation. A comma (,) must be used between each record.
For reference, the code block here contains a full sample configuration for a basic, initial DNS setup:
...
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
A('@', 'your-server-ipv4-address'),
A('www', 'your-server-ipv4-address'),
A('mail', 'your-server-ipv4-address'),
AAAA('@', 'your-server-ipv6-address'),
AAAA('www', 'your-server-ipv6-address'),
AAAA('mail', 'your-server-ipv6-address'),
MX('@', 10, 'mail.your_domain.'),
TXT('@', 'v=spf1 -all'),
TXT('_dmarc', 'v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;')
);
Once you've completed your initial DNS configuration, save and close the file.
In this step, you set up the initial DNS configuration file containing your DNS records. Next, you'll test and deploy the configuration.
In this step, you'll run a local syntax check of your DNS configuration, and then deploy the changes to the live DNS server/provider.
First, move into your dnscontrol directory:
- cd ~/dnscontrol
Next, use DNSControl's preview function to check the syntax of your file and output the changes it will make (without actually making them):
- dnscontrol preview
If the syntax of your DNS configuration file is correct, DNSControl will output an overview of the changes that it will make. This should look similar to the following:
Output******************** Domain: your_domain
----- Getting nameservers from: digitalocean
----- DNS Provider: digitalocean...8 corrections
#1: CREATE A your_domain your-server-ipv4-address ttl=300
#2: CREATE A www.your_domain your-server-ipv4-address ttl=300
#3: CREATE A mail.your_domain your-server-ipv4-address ttl=300
#4: CREATE AAAA your_domain your-server-ipv6-address ttl=300
#5: CREATE TXT _dmarc.your_domain "v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;" ttl=300
#6: CREATE AAAA www.your_domain your-server-ipv6-address ttl=300
#7: CREATE AAAA mail.your_domain your-server-ipv6-address ttl=300
#8: CREATE MX your_domain 10 mail.your_domain. ttl=300
----- Registrar: none...0 corrections
Done. 8 corrections.
If you see an error warning in your output, DNSControl will provide details on what the error is and where it is located within your file.
Warning: The next command will make live changes to your DNS records and possibly other settings. Please ensure that you are prepared for this, including taking a backup of your existing DNS configuration, as well as ensuring that you have the means to roll back if needed.
Finally, you can push the changes to your live DNS provider:
- dnscontrol push
You'll see output similar to the following:
Output******************** Domain: your_domain
----- Getting nameservers from: digitalocean
----- DNS Provider: digitalocean...8 corrections
#1: CREATE TXT _dmarc.your_domain "v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;" ttl=300
SUCCESS!
#2: CREATE A your_domain your-server-ipv4-address ttl=300
SUCCESS!
#3: CREATE AAAA your_domain your-server-ipv6-address ttl=300
SUCCESS!
#4: CREATE AAAA www.your_domain your-server-ipv6-address ttl=300
SUCCESS!
#5: CREATE AAAA mail.your_domain your-server-ipv6-address ttl=300
SUCCESS!
#6: CREATE A www.your_domain your-server-ipv4-address ttl=300
SUCCESS!
#7: CREATE A mail.your_domain your-server-ipv4-address ttl=300
SUCCESS!
#8: CREATE MX your_domain 10 mail.your_domain. ttl=300
SUCCESS!
----- Registrar: none...0 corrections
Done. 8 corrections.
Now, if you check the DNS settings for your domain in the DigitalOcean control panel, you'll see the changes.
You can also check the record creation by running a DNS query for your domain/delegated zone using dig.
If you don't have dig installed, you'll need to install the dnsutils package:
- sudo apt install dnsutils
Once you've installed dig, you can use it to make a DNS lookup for your domain. You'll see that the records have been updated accordingly:
- dig +short your_domain
You'll see output showing the IP address and relevant DNS record from your zone that was deployed using DNSControl. DNS records can take some time to propagate, so you may need to wait and run this command again.
In this final step, you ran a local syntax check of the DNS configuration file, then deployed it to your live DNS provider, and tested that the changes were made successfully.
In this article you set up DNSControl and deployed a DNS configuration to a live provider. Now you can manage and test your DNS configuration changes in a safe, offline environment before deploying them to production.
If you wish to explore this subject further, note that DNSControl is designed to be integrated into your CI/CD pipeline, allowing you to run in-depth tests and have more control over your deployment to production. You could also look into integrating DNSControl into your infrastructure build/deployment processes, allowing you to deploy servers and add them to DNS completely automatically.
If you wish to go further with DNSControl, the following DigitalOcean articles provide some interesting next steps to help integrate DNSControl into your change management and infrastructure deployment workflows:
Shipit is a universal automation and deployment tool for Node.js developers. It features a task flow based on the popular Orchestrator package, login and interactive SSH commands through OpenSSH, and an extensible API. Developers can use Shipit to automate build and deployment workflows for a wide range of Node.js applications.
The Shipit workflow lets developers not only configure tasks, but also specify the order in which they execute, whether they run synchronously or asynchronously, and in which environment.
In this tutorial, you'll install and configure Shipit to deploy a Node.js application from your local development environment to your production environment. You'll use Shipit to deploy your application and configure the remote server by:
rsync, git, and ssh).
Before you begin this tutorial, you'll need the following:
rsync and git.
To install git on Linux distributions, follow the How To Install Git tutorial.
Git. This tutorial uses GitHub.
Note: Windows users will need to install the Windows Subsystem for Linux to execute the commands in this guide.
Shipit requires a Git repository to synchronize between the local development machine and the remote server. In this step, you'll create a remote repository on Github.com. While each provider is slightly different, the commands are somewhat transferable.
To create a repository, open Github.com in your web browser and log in. You'll notice that in the upper-right corner of any page there is a + symbol. Click +, and then click New repository.
Type a short, memorable name for your repository, for example, hello-world. Note that whatever name you choose here will be replicated as the project folder that you'll work from on your local machine.
Optionally, add a description of your repository.
Set your repository's visibility to either public or private, according to your preference.
Make sure the repository is initialized with a .gitignore; select Node from the Add .gitignore dropdown list. This step is important to avoid having unnecessary files (like the node_modules folder) added to your repository.
Click the Create repository button.
The repository now needs to be cloned from Github.com to your local machine.
Open your terminal and navigate to the location where you want to store all of your Node.js project files. Note that this process will create a sub-folder within the current directory. To clone the repository to your local machine, run the following command:
- git clone https://github.com/your-github-username/your-github-repository-name.git
Make sure to replace your-github-username and your-github-repository-name to reflect your Github username and the previously supplied repository name.
Note: If you have enabled two-factor authentication (2FA) on Github.com, you must use a personal access token or SSH key instead of your password when accessing Github on the command line. The Github Help page related to 2FA provides further information.
You'll see output similar to the following:
OutputCloning into 'your-github-repository-name'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Unpacking objects: 100% (3/3), done.
Move into the repository by running the following command:
- cd your-github-repository-name
Inside the repository is a single file and folder, both of which are files used by Git to manage the repository. You can verify this with:
- ls -la
You'll see output similar to the following:
Outputtotal 8
0 drwxr-xr-x 4 asciant staff 128 22 Apr 07:16 .
0 drwxr-xr-x 5 asciant staff 160 22 Apr 07:16 ..
0 drwxr-xr-x 13 asciant staff 416 22 Apr 07:16 .git
8 -rw-r--r-- 1 asciant staff 914 22 Apr 07:16 .gitignore
Now that you have a working git repository configured, you'll create the shipit.js file that manages your deployment process.
In this step, you'll create a sample Node.js project and then add the Shipit packages. This tutorial provides an example application: a Node.js web server that accepts HTTP requests and responds with Hello World in plain text. To create the application, run the following command:
- nano hello.js
Add the following example application code to hello.js (updating the APP_PRIVATE_IP_ADDRESS variable to your app server's private network IP address):
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(8080, 'APP_PRIVATE_IP_ADDRESS');
console.log('Server running at http://APP_PRIVATE_IP_ADDRESS:8080/');
Now create your package.json file for your application:
- npm init -y
This command creates a package.json file, which you'll use to configure your Node.js application. In the next step, you'll add dependencies to this file with the npm command line interface.
OutputWrote to ~/hello-world/package.json:
{
"name": "hello-world",
"version": "1.0.0",
"description": "",
"main": index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
Next, install the required npm packages with the following command:
- npm install --save-dev shipit-cli shipit-deploy shipit-shared
Use the --save-dev flag here, since the Shipit packages are only required on your local machine. You'll see output similar to the following:
Output+ shipit-shared@4.4.2
+ shipit-cli@4.2.0
+ shipit-deploy@4.1.4
updated 4 packages and audited 21356 packages in 11.671s
found 62 low severity vulnerabilities
  run `npm audit fix` to fix them, or `npm audit` for details
This also added the three packages to your package.json file as development dependencies:
. . .
"devDependencies": {
"shipit-cli": "^4.2.0",
"shipit-deploy": "^4.1.4",
"shipit-shared": "^4.4.2"
},
. . .
With your local environment configured, you can now move on to preparing the remote app server for Shipit-based deployments.
In this step, you'll use ssh to connect to your app server and install your remote dependency, rsync. Rsync is a utility for efficiently transferring and synchronizing files between local computers and networked computers by comparing the modification times and sizes of files.
Shipit uses rsync to transfer and synchronize files between your local computer and the remote app server. You won't be issuing any commands to rsync directly; Shipit handles it for you.
Note: How To Set Up a Node.js Application for Production on CentOS 7 left you with two servers, app and web. These commands should only be executed on app.
Connect to your remote app server via ssh:
- ssh deployer@your_app_server_ip
Install rsync on your server by running the following command:
- sudo yum install rsync
Confirm your installation with:
- rsync --version
You'll see a similar line within this command's output:
Outputrsync version 3.1.2 protocol version 31
. . .
You can end your ssh session by typing exit.
With rsync installed and available on the command line, you can move on to deployment tasks and their relationship with events.
Both events and tasks are key components of Shipit deployments, and it is important to understand how they complement the deployment of your application. The events triggered by Shipit represent specific points in the deployment lifecycle. Your tasks execute in response to these events, based on the sequence of the Shipit lifecycle.
A common example of where this task/event system is useful in a Node.js application is the installation of the app's dependencies (node_modules) on the remote server. Later in this step, you'll have Shipit listen for the updated event (which is emitted after the application's files are transferred) and run a task to install the application's dependencies (npm install) on the remote server.
To listen for events and execute tasks, Shipit needs a configuration file that holds information about your remote server (the app server) and registers event listeners and the commands to be executed by these tasks. This file lives on your local development computer, in your Node.js application's directory.
To get started, create this file, including information about your remote server, the event listeners you want to subscribe to, and some definitions of your tasks. Create shipitfile.js in your application root directory on your local machine by running the following command:
- nano shipitfile.js
Now that you've created a file, it needs to be populated with the initial environment information that Shipit needs. This is primarily the location of your remote Git repository and, importantly, your app server's public IP address and SSH user account.
Add this initial configuration and update the highlighted lines to match your environment:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/sammy/your-domain',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'sammy@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
};
Updating the variables in your shipit.initConfig method provides Shipit with configuration specific to your deployment. These represent the following to Shipit:
deployTo: is the directory where Shipit will deploy your application's code on the remote server. Here you use the /home/ folder for a non-root user with sudo privileges (/home/sammy), as it is secure and avoids permission issues. The /your-domain component is a naming convention to distinguish the folder from others in the user's home folder.
repositoryUrl: is the URL to the full Git repository. Shipit uses this URL to ensure the project files are in sync prior to deployment.
keepReleases: is the number of releases to keep on the remote server. A release is a date-stamped folder containing your application's files at the time of release. These can be useful for rollback of a deployment.
shared: is configuration that corresponds with keepReleases and allows directories to be shared between releases. In this instance, there is a single node_modules folder that is shared by all releases.
production: represents a remote server to deploy your application to. In this instance, you have a single server (the app server) that you name production, with the servers: configuration matching your SSH user and public ip address. The name production corresponds with the Shipit deploy command used toward the end of this tutorial (npx shipit server name deploy, or in your case npx shipit production deploy).
Further information on the Shipit Deploy Configuration object can be found in the Shipit Github repository.
Before you continue updating your shipitfile.js, review the following example code snippet to understand Shipit tasks:
Example event listener
shipit.on('deploy', () => {
shipit.start('say-hello');
});
shipit.blTask('say-hello', async () => {
shipit.local('echo "hello from your local computer"')
});
This is an example task that uses the shipit.on method to subscribe to the deploy event. This task waits for the deploy event to be emitted by the Shipit lifecycle; when the event is received, the task executes the shipit.start method, which tells Shipit to start the say-hello task.
The shipit.on method takes two parameters: the name of the event to listen for, and the callback function to execute when the event is received.
Under the shipit.on method declaration, the task is defined with the shipit.blTask method. This creates a new Shipit task that blocks other tasks during its execution (it is a synchronous task). The shipit.blTask method also takes two parameters: the name of the task it defines, and a callback function to execute when the task is triggered by shipit.start.
Within the callback function of this example task (say-hello), the shipit.local method executes a command on the local machine. The local command echoes "hello from your local computer" into the terminal output.
If you wanted to execute a command on the remote server, you would use the shipit.remote method. The two methods, shipit.local and shipit.remote, provide an API to issue commands either locally, or remotely as part of a deployment.
Now update your shipitfile.js to include event listeners that subscribe to the Shipit lifecycle with shipit.on. Add the event listeners to your shipitfile.js, inserting them following the comment placeholder from the initial configuration, // Our listeners and tasks will go here:
. . .
shipit.on('updated', () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', () => {
shipit.start('pm2-server');
});
These two methods listen for the updated and published events, which are emitted as part of the Shipit deployment lifecycle. When each event is received, they initiate tasks using the shipit.start method, just like the example task.
Now that you've scheduled the listeners, you'll add the corresponding tasks. Add the following task to your shipitfile.js, inserting it after your event listeners:
. . .
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
// fs.writeFileSync takes no callback; it throws on failure
fs.writeFileSync('ecosystem.config.js', ecosystem);
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
You first declare a task named copy-config. This task creates a local file named ecosystem.config.js and then copies it to your remote app server. PM2 uses this file to manage your Node.js application. It provides PM2 with the file path information needed to ensure that it runs your most recently deployed files. Later in the build process, you will create a task that runs PM2 with ecosystem.config.js as its configuration.
If your application needs environment variables (like a database connection string), you can declare them either locally in env: or on the remote server in env_production:, in the same way that the NODE_ENV variable is set in these objects.
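As an illustration (the DATABASE_URL values here are hypothetical, not part of the tutorial's application), the two env blocks of the generated app entry might be extended like this:

```javascript
// Sketch: the `apps` entry from the generated ecosystem.config.js,
// extended with a hypothetical DATABASE_URL environment variable.
const appConfig = {
  name: 'hello',
  script: './hello.js',
  env: {
    NODE_ENV: 'development',
    DATABASE_URL: 'postgres://localhost:5432/hello_dev', // hypothetical
  },
  env_production: {
    NODE_ENV: 'production',
    DATABASE_URL: 'postgres://db.internal:5432/hello', // hypothetical
  },
};

// PM2 applies `env` by default and overlays `env_production`
// when the process is started with `--env production`.
console.log(appConfig.env_production.NODE_ENV); // prints "production"
```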
Add the next task to your shipitfile.js file after the copy-config task:
. . .
shipit.blTask('npm-install', async () => {
await shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
Next, you declare a task named npm-install. This task uses a remote bash terminal (via shipit.remote) to install the application's dependencies (npm packages).
Add the last task to your shipitfile.js file after the npm-install task:
. . .
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
Finally, you declare a task named pm2-server. This task also uses a remote bash terminal, first to stop PM2 from managing your previous deployment via the delete command, and then to start a new instance of your Node.js server, providing the ecosystem.config.js file as a variable. You also tell PM2 to use the environment variables from the production block of your initial configuration, and you ask PM2 to watch the application, restarting it on any crash.
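A side note on the `pm2 delete -s ${appName} || :` command: `:` is the shell's no-op builtin and always exits 0, so appending `|| :` lets the task succeed even when no process named hello exists yet (as on a first deployment). A minimal illustration:

```shell
# `:` is the POSIX no-op builtin; it always exits with status 0,
# so "command || :" never fails, even when the command itself does.
false || :
echo "exit status: $?"   # prints "exit status: 0"
```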
The complete shipitfile.js file:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/deployer/example.com',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'deployer@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
shipit.on('updated', async () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', async () => {
shipit.start('pm2-server');
});
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
// fs.writeFileSync takes no callback; it throws on failure
fs.writeFileSync('ecosystem.config.js', ecosystem);
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
shipit.blTask('npm-install', async () => {
await shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
};
Save and exit the file when you are finished.
With your shipitfile.js configured and the event listeners and their associated tasks complete, you can move on to deploying to the app server.
In this step, you will deploy your application remotely and test that the deployment made your application publicly available.
Because Shipit clones the project files from the remote Git repository, you need to push your local Node.js application files from your local machine to GitHub. Navigate to the application directory of your Node.js project (where your hello.js and shipitfile.js files are located) and run the following command:
- git status
The git status command displays the state of the working directory and the staging area. It lets you see which changes have been staged, which haven't, and which files aren't being tracked by Git. Your files are untracked and will appear red in the output:
OutputOn branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
hello.js
package-lock.json
package.json
shipitfile.js
nothing added to commit but untracked files present (use "git add" to track)
You can add these files to your repository with the following command:
- git add --all
This command produces no output, although if you were to run git status again, the files would appear green with a note that there are changes to be committed.
You can create a commit by running the following command:
- git commit -m "Our first commit"
The output of this command provides some Git-specific information about the files:
Output[master c64ea03] Our first commit
4 files changed, 1948 insertions(+)
create mode 100644 hello.js
create mode 100644 package-lock.json
create mode 100644 package.json
create mode 100644 shipitfile.js
Now you only need to push the commit to the remote repository so that Shipit can clone it to your app server during deployment. Run the following command:
- git push origin master
The output includes information about synchronizing with the remote repository:
OutputEnumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 15.27 KiB | 7.64 MiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To github.com:Asciant/hello-world.git
e274312..c64ea03 master -> master
To deploy your application, run the following command:
- npx shipit production deploy
The output of this command (too large to include in its entirety) provides detail about the tasks being executed and the result of each specific function. The following output for the pm2-server task shows that the Node.js application has been launched:
OutputRunning 'deploy:init' task...
Finished 'deploy:init' after 432 μs
. . .
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4177 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.27 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.81 s
Running 'deploy:finish' task...
Finished 'deploy:finish' after 222 μs
Finished 'deploy' [ deploy:init, deploy:fetch, deploy:update, deploy:publish, deploy:clean, deploy:finish ]
To view your application from a user's perspective, you can enter your website URL your-domain in your browser to access your web server. This will serve the Node.js application, via reverse proxy, on the app server where your files were deployed.
You will see the Hello World greeting.
Note: After the first deployment, your Git repository will be tracking a newly created file named ecosystem.config.js. Because this file is rebuilt on every deploy, and may contain compiled application secrets, add it to the .gitignore file in the application root directory on your local machine before your next git commit:
. . .
# ecosystem.config
ecosystem.config.js
You have deployed your Node.js application to your app server, which now points to your latest deployment. With everything up and running, you can move on to monitoring your application processes.
PM2 is a great tool for managing your remote processes, and it also provides features for monitoring the performance of these application processes.
Connect to your remote app server via SSH with this command:
- ssh deployer@your_app_server_ip
To obtain specific information about your PM2-managed processes, run the following:
- pm2 list
You will see output similar to the following:
Output┌─────────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬──────┬───────────┬──────────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├─────────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼──────┼───────────┼──────────┼──────────┤
│ hello │ 0 │ 0.0.1 │ fork │ 3212 │ online │ 0 │ 62m │ 0.3% │ 45.2 MB │ deployer │ enabled │
└─────────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴──────┴───────────┴──────────┴──────────┘
You will see a summary of the information PM2 has collected. To see detailed information, you can run the following:
- pm2 show hello
The output expands on the summary information provided by the pm2 list command. It also provides information on a number of ancillary commands and gives the locations of log files:
Output Describing process with id 0 - name hello
┌───────────────────┬─────────────────────────────────────────────────────────────┐
│ status │ online │
│ name │ hello │
│ version │ 1.0.0 │
│ restarts │ 0 │
│ uptime │ 82s │
│ script path │ /home/deployer/example.com/releases/20190531213027/hello.js │
│ script args │ N/A │
│ error log path │ /home/deployer/.pm2/logs/hello-error.log │
│ out log path │ /home/deployer/.pm2/logs/hello-out.log │
│ pid path │ /home/deployer/.pm2/pids/hello-0.pid │
│ interpreter │ node │
│ interpreter args │ N/A │
│ script id │ 0 │
│ exec cwd │ /home/deployer │
│ exec mode │ fork_mode │
│ node.js version │ 4.2.3 │
│ node env │ production │
│ watch & reload │ ✔ │
│ unstable restarts │ 0 │
│ created at │ 2019-05-31T21:30:48.334Z │
└───────────────────┴─────────────────────────────────────────────────────────────┘
Revision control metadata
┌──────────────────┬────────────────────────────────────────────────────┐
│ revision control │ git │
│ remote url │ N/A │
│ repository root │ /home/deployer/example.com/releases/20190531213027 │
│ last update │ 2019-05-31T21:30:48.559Z │
│ revision │ 62fba7c8c61c7769022484d0bfa46e756fac8099 │
│ comment │ Our first commit │
│ branch │ master │
└──────────────────┴────────────────────────────────────────────────────┘
Divergent env variables from local env
┌───────────────────────────┬───────────────────────────────────────┐
│ XDG_SESSION_ID │ 15 │
│ HOSTNAME │ N/A │
│ SELINUX_ROLE_REQUESTED │ │
│ TERM │ N/A │
│ HISTSIZE │ N/A │
│ SSH_CLIENT │ 44.222.77.111 58545 22 │
│ SELINUX_USE_CURRENT_RANGE │ │
│ SSH_TTY │ N/A │
│ LS_COLORS │ N/A │
│ MAIL │ /var/mail/deployer │
│ PATH │ /usr/local/bin:/usr/bin │
│ SELINUX_LEVEL_REQUESTED │ │
│ HISTCONTROL │ N/A │
│ SSH_CONNECTION │ 44.222.77.111 58545 209.97.167.252 22 │
└───────────────────────────┴───────────────────────────────────────┘
. . .
PM2 also offers an in-terminal monitoring tool, which you can access with the following:
- pm2 monit
The output of this command is an interactive dashboard in which pm2 provides real-time process information, logs, metrics, and metadata. This dashboard may assist in monitoring resources and error logs:
Output┌─ Process list ────────────────┐┌─ Global Logs ─────────────────────────────────────────────────────────────┐
│[ 0] hello Mem: 22 MB ││ │
│ ││ │
│ ││ │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
┌─ Custom metrics (http://bit.l─┐┌─ Metadata ────────────────────────────────────────────────────────────────┐
│ Heap Size 10.73 ││ App Name hello │
│ Heap Usage 66.14 ││ Version N/A │
│ Used Heap Size 7.10 ││ Restarts 0 │
│ Active requests 0 ││ Uptime 55s │
│ Active handles 4 ││ Script path /home/asciant/hello.js │
│ Event Loop Latency 0.70 ││ Script args N/A │
│ Event Loop Latency p95 ││ Interpreter node │
│ ││ Interpreter args N/A │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
Now that you understand how to monitor your processes with PM2, you can move on to how Shipit can assist in rolling back to a previous working deployment.
End your ssh session on your app server by running exit.
Deployments can occasionally expose unforeseen bugs or issues that cause your site to fail. The developers and maintainers of Shipit have anticipated this and provide the ability to roll back to the previous (working) deployment of your application.
To ensure your PM2 configuration persists, add another event listener to shipitfile.js for the rollback event:
. . .
shipit.on('rollback', () => {
shipit.start('npm-install', 'copy-config');
});
You add a listener to the rollback event to run your npm-install and copy-config tasks. This is needed because, unlike the published event, the updated event is not run by the Shipit lifecycle when rolling back a deployment. Adding this event listener ensures that your PM2 process manager points to your most recent deployment, even in the event of a rollback.
This process is similar to deploying, with a minor change in the command. To try rolling back to a previous deployment, you can execute the following:
- npx shipit production rollback
Like the deploy command, rollback provides details about the rollback process and the tasks being executed:
OutputRunning 'rollback:init' task...
Get current release dirname.
Running "if [ -h /home/deployer/example.com/current ]; then readlink /home/deployer/example.com/current; fi" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com releases/20190531213719
Current release dirname : 20190531213719.
Getting dist releases.
Running "ls -r1 /home/deployer/example.com/releases" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com 20190531213719
@centos-ap-app.asciant.com 20190531213519
@centos-ap-app.asciant.com 20190531213027
Dist releases : ["20190531213719","20190531213519","20190531213027"].
Will rollback to 20190531213519.
Finished 'rollback:init' after 3.96 s
Running 'deploy:publish' task...
Publishing release "/home/deployer/example.com/releases/20190531213519"
Running "cd /home/deployer/example.com && if [ -d current ] && [ ! -L current ]; then echo "ERR: could not make symlink"; else ln -nfs releases/20190531213519 current_tmp && mv -fT current_tmp current; fi" on host "centos-ap-app.asciant.com".
Release published.
Finished 'deploy:publish' after 1.8 s
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4289 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.55 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.82 s
Running 'rollback:finish' task...
Finished 'rollback:finish' after 615 μs
Finished 'rollback' [ rollback:init, deploy:publish, deploy:clean, rollback:finish ]
Via the keepReleases: 5 configuration in shipitfile.js, you configured Shipit to keep 5 releases. Shipit tracks these releases internally to ensure it can roll back when required. Shipit also provides a handy way to identify the releases by creating a directory named as a timestamp (YYYYMMDDHHmmss - example: /home/deployer/your-domain/releases/20190420210548).
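The same timestamp format can be reproduced with the `date` utility, which is handy if you ever script against the releases directory:

```shell
# Shipit release directories are named YYYYMMDDHHmmss;
# `date` can produce that format for comparison or scripting.
date +%Y%m%d%H%M%S
```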
If you wanted to customize the rollback process further, you can listen for events specific to the rollback operation and use them to execute tasks that complement your rollback. You can refer to the event list provided in the breakdown of the Shipit lifecycle and configure the tasks/listeners within your shipitfile.js.
The ability to roll back means that you can always serve a functioning version of your application to your users, even if a deployment introduces unexpected bugs or issues.
In this tutorial, you configured a workflow that allows you to create a highly customizable alternative to Platform as a Service, all from a couple of servers. This workflow allows for custom deployment and configuration, process monitoring with PM2, and the potential to scale and add services, or additional servers or environments, to the deployment when required.
If you are interested in continuing to develop your Node.js skills, check out the DigitalOcean Node.js content as well as the How To Code in Node.js series.
DNSControl is an infrastructure-as-code tool that allows you to deploy and manage your DNS zones using standard software development principles, including version control, testing, and automated deployment. DNSControl was created by Stack Exchange and is written in Go.
Using DNSControl eliminates many of the pitfalls of manual DNS management, as zone files are stored in a structured, programmable format (a JavaScript DSL rather than raw zone files). This allows you to deploy zones to multiple DNS providers simultaneously, identify syntax errors, and push out your DNS configuration automatically, reducing the risk of human error. Another common use of DNSControl is to quickly migrate your DNS to a different provider, for example in the event of a DDoS attack or a system outage.
In this tutorial, you'll install and configure DNSControl, create a basic DNS configuration, and begin deploying DNS records to a live provider. As part of this tutorial, we will use DigitalOcean as the example DNS provider. If you wish to use a different provider, the setup is very similar. When you're finished, you'll be able to manage and test your DNS configuration in a safe, offline environment, and then automatically deploy it to production.
Before you begin this guide, you'll need the following:
your-server-ipv4-address refers to the IP address of the server hosting your website or domain. your-server-ipv6-address refers to the IPv6 address of the server hosting your website or domain.
This tutorial will use your_domain throughout, and DigitalOcean as the service provider.
Once you have all of this ready, log in to your server as your non-root user to begin.
DNSControl is written in Go, so you'll start this step by installing Go on your server and setting your GOPATH.
Go is available within Debian's default software repositories, making it possible to install it using conventional package management tools.
You'll also need to install Git, as this is required to allow Go to download and install the DNSControl software from its repository on GitHub.
Begin by updating the local package index to reflect any new upstream changes:
- sudo apt update
Then, install the golang-go and git packages:
- sudo apt install golang-go git
After confirming the installation, apt will download and install Go and Git, as well as all of their required dependencies.
Next, you'll configure the path environment variables that Go requires. If you would like to learn more about this, you can read this tutorial on Understanding the GOPATH. Start by editing the ~/.profile file:
- nano ~/.profile
Add the following lines to the very end of your file:
...
export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"
Once you have added these lines to the bottom of the file, save and close it. Then reload your profile by either logging out and back in, or by sourcing the file again:
- source ~/.profile
Now that you've installed and configured Go, you can install DNSControl.
The go get command can be used to fetch a copy of the code, automatically compile it, and install it into your Go directory:
- go get github.com/StackExchange/dnscontrol
Once this is complete, you can check the installed version to make sure that everything is working:
- dnscontrol version
Your output will be similar to the following:
Outputdnscontrol 2.9-dev
If you see a dnscontrol: command not found error, double-check your Go path setup.
Now that you've installed DNSControl, you can create a configuration directory and connect DNSControl to your DNS provider in order to allow it to make changes to your DNS records.
In this step, you'll create the configuration directories required for DNSControl, and connect it to your DNS provider so that it can begin to make live changes to your DNS records.
First, create a new directory in which to store your DNSControl configuration, and then move into it:
- mkdir ~/dnscontrol
- cd ~/dnscontrol
Note: This tutorial will focus on the initial setup of DNSControl; however, for production use it is recommended to store your DNSControl configuration in a version control system (VCS) such as Git. The advantages of this include full version control, integration with CI/CD for testing, seamless rollback deployments, and so on.
If you plan to use DNSControl to write BIND zone files, you should also create the zones directory:
- mkdir ~/dnscontrol/zones
BIND zone files are a raw, standardized method for storing DNS zones/records in plain text format. They were originally used for the BIND DNS server software, but are now widely adopted as the standard method for storing DNS zones. BIND zone files produced by DNSControl are useful if you want to import them into a custom or self-hosted DNS server, or for auditing purposes.
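As an illustration, a minimal BIND-style zone file for your_domain might look like the following sketch (the nameserver name, serial, and the documentation IP address 203.0.113.10 are placeholders, not DNSControl output):

```text
$TTL 300
@    IN SOA ns1.your_domain. admin.your_domain. (
         2019053101 ; serial
         7200       ; refresh
         3600       ; retry
         1209600    ; expire
         300 )      ; negative-caching TTL
@    IN NS  ns1.your_domain.
@    IN A   203.0.113.10
www  IN A   203.0.113.10
```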
However, if you only want to use DNSControl to push DNS changes to a managed provider, the zones directory won't be needed.
Next, you must configure the creds.json file, which is what will allow DNSControl to authenticate to your DNS provider and make changes. The format of creds.json differs slightly depending on the DNS provider that you're using. Please see the Service Providers list in the official DNSControl documentation to find the configuration for your own provider.
Create the creds.json file in the ~/dnscontrol directory:
- cd ~/dnscontrol
- nano creds.json
Add the sample creds.json configuration for your DNS provider to the file. If you're using DigitalOcean as your DNS provider, you can use the following:
{
"digitalocean": {
"token": "your-digitalocean-oauth-token"
}
}
This file tells DNSControl which DNS providers you want it to connect to.
You'll need to provide some form of authentication for your DNS provider. This is usually an API key or OAuth token, but some providers require extra information, as documented in the Service Providers list in the official DNSControl documentation.
Warning: This token will grant access to your DNS provider account, so you should protect it as you would a password. Also, if you're using a version control system, make sure that the file containing the token is excluded (for example, by using .gitignore), or is securely encrypted.
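For example, if the ~/dnscontrol directory is itself a Git repository, one way to exclude the token file is to append its name to .gitignore:

```shell
# Exclude the credentials file from version control
# (run from the repository root, e.g. ~/dnscontrol).
echo "creds.json" >> .gitignore
```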
If you're using DigitalOcean as your DNS provider, you can use the OAuth token (found in your DigitalOcean account settings) that you generated as part of the prerequisites.
If you have multiple different DNS providers, for example for multiple domain names or delegated DNS zones, you can define these all in the same creds.json file.
You've set up the initial DNSControl configuration directories, and configured creds.json to allow DNSControl to authenticate to your DNS provider and make changes. Next you'll create the configuration for your DNS zones.
In this step, you'll create an initial DNS configuration file, which will contain the DNS records for your domain name or delegated DNS zone.
dnsconfig.js is the main DNS configuration file for DNSControl. In this file, DNS zones and their corresponding records are defined using JavaScript syntax. This is known as a DSL, or Domain Specific Language. The JavaScript DSL page in the official DNSControl documentation provides further details.
To begin, create the DNS configuration file in the ~/dnscontrol directory:
:
- cd ~/dnscontrol
- nano dnsconfig.js
Then, add the following sample configuration to the file:
// Providers:
var REG_NONE = NewRegistrar('none', 'NONE');
var DNS_DIGITALOCEAN = NewDnsProvider('digitalocean', 'DIGITALOCEAN');
// Domains:
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
A('@', 'your-server-ipv4-address')
);
This sample file defines a domain name or DNS zone at a particular provider, which in this case is your_domain hosted by DigitalOcean. An example A record is also defined for the zone root (@), pointing to the IPv4 address of the server that you're hosting your domain/website on.
There are three main functions that make up a basic DNSControl configuration file:
NewRegistrar(name, type, metadata): defines the domain registrar for your domain name. DNSControl can use this to make required changes, such as modifying the authoritative nameservers. If you only want to use DNSControl to manage your DNS zones, this can generally be left as NONE.
NewDnsProvider(name, type, metadata): defines a DNS service provider for your domain name or delegated zone. This is where DNSControl will push the DNS changes that you make.
D(name, registrar, modifiers): defines a domain name or delegated DNS zone for DNSControl to manage, as well as the DNS records present in the zone.
You should configure NewRegistrar(), NewDnsProvider(), and D() accordingly, using the Service Providers list in the official DNSControl documentation.
If you're using DigitalOcean as your DNS provider, and only need to be able to make DNS changes (rather than authoritative nameservers as well), the sample in the preceding code block is already correct.
Once complete, save and close the file.
In this step, you set up a DNS configuration file for DNSControl, with the relevant providers defined. Next, you'll populate the file with some useful DNS records.
Next, you can populate the DNS configuration file with some useful DNS records for your website or service, using the DNSControl syntax.
Unlike traditional BIND zone files, where DNS records are written in a raw, line-by-line format, DNS records within DNSControl are defined as a function parameter (domain modifier) to the D() function, as shown briefly in Step 3.
A domain modifier exists for each of the standard DNS record types, including A, AAAA, MX, TXT, NS, CAA, and so on. A full list of available record types is available in the Domain Modifiers section of the DNSControl documentation.
Modifiers for individual records are also available (record modifiers). Currently these are mainly used for setting the TTL (time to live) of individual records. A full list of available record modifiers is available in the Record Modifiers section of the DNSControl documentation. Record modifiers are optional, and in most basic use cases can be left out.
The syntax for setting DNS records varies slightly for each record type. Following are some examples for the most common record types:
A records:
A('name', 'address', optional record modifiers)
A('@', 'your-server-ipv4-address', TTL(30))
AAAA records:
AAAA('name', 'address', optional record modifiers)
AAAA('@', 'your-server-ipv6-address')
(the record modifier is omitted, so the default TTL will be used)
CNAME records:
CNAME('name', 'target', optional record modifiers)
CNAME('subdomain1', 'example.org.')
(note that a trailing . must be included if there are any dots in the value)
MX records:
MX('name', priority, 'target', optional record modifiers)
MX('@', 10, 'mail.example.net.')
(note that a trailing . must be included if there are any dots in the value)
TXT records:
TXT('name', 'content', optional record modifiers)
TXT('@', 'This is a TXT record.')
CAA records:
CAA('name', 'tag', 'value', optional record modifiers)
CAA('@', 'issue', 'letsencrypt.org')
To begin adding DNS records for your domain or delegated DNS zone, edit your DNS configuration file:
- nano dnsconfig.js
Then you can begin populating the parameters of the existing D() function, using the syntax described in the previous list, as well as the Domain Modifiers section of the official DNSControl documentation. A comma (,) must be used between each record.
For reference, the following code block contains a full sample configuration (for a basic initial DNS setup):
...
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
A('@', 'your-server-ipv4-address'),
A('www', 'your-server-ipv4-address'),
A('mail', 'your-server-ipv4-address'),
AAAA('@', 'your-server-ipv6-address'),
AAAA('www', 'your-server-ipv6-address'),
AAAA('mail', 'your-server-ipv6-address'),
MX('@', 10, 'mail.your_domain.'),
TXT('@', 'v=spf1 -all'),
TXT('_dmarc', 'v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;')
);
Once you have finished your initial DNS configuration, save and close the file.
In this step, you set up the initial DNS configuration file containing your DNS records. Next, you will test the configuration and deploy it.
In this step, you will run a local syntax check on your DNS configuration, and then deploy the changes to the live DNS server/provider.
First, move into your dnscontrol directory:
- cd ~/dnscontrol
Next, use DNSControl's preview function to check the syntax of your file and output the changes that it will make (without actually making them):
- dnscontrol preview
If the syntax of your DNS configuration file is correct, DNSControl will output an overview of the changes that it will make. It will look similar to the following:
Output******************** Domain: your_domain
----- Getting nameservers from: digitalocean
----- DNS Provider: digitalocean...8 corrections
#1: CREATE A your_domain your-server-ipv4-address ttl=300
#2: CREATE A www.your_domain your-server-ipv4-address ttl=300
#3: CREATE A mail.your_domain your-server-ipv4-address ttl=300
#4: CREATE AAAA your_domain your-server-ipv6-address ttl=300
#5: CREATE TXT _dmarc.your_domain "v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;" ttl=300
#6: CREATE AAAA www.your_domain your-server-ipv6-address ttl=300
#7: CREATE AAAA mail.your_domain your-server-ipv6-address ttl=300
#8: CREATE MX your_domain 10 mail.your_domain. ttl=300
----- Registrar: none...0 corrections
Done. 8 corrections.
If you see an error or warning in your output, DNSControl will provide details about what and where the error is located within your file.
Warning: the next command will make live changes to your DNS records and possibly other settings. Please make sure that you are prepared for this, including taking a backup of your existing DNS configuration, and ensuring that you have the means to roll back if needed.
Finally, you can push out the changes to your live DNS provider:
- dnscontrol push
You will see output similar to the following:
Output******************** Domain: your_domain
----- Getting nameservers from: digitalocean
----- DNS Provider: digitalocean...8 corrections
#1: CREATE TXT _dmarc.your_domain "v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;" ttl=300
SUCCESS!
#2: CREATE A your_domain your-server-ipv4-address ttl=300
SUCCESS!
#3: CREATE AAAA your_domain your-server-ipv6-address ttl=300
SUCCESS!
#4: CREATE AAAA www.your_domain your-server-ipv6-address ttl=300
SUCCESS!
#5: CREATE AAAA mail.your_domain your-server-ipv6-address ttl=300
SUCCESS!
#6: CREATE A www.your_domain your-server-ipv4-address ttl=300
SUCCESS!
#7: CREATE A mail.your_domain your-server-ipv4-address ttl=300
SUCCESS!
#8: CREATE MX your_domain 10 mail.your_domain. ttl=300
SUCCESS!
----- Registrar: none...0 corrections
Done. 8 corrections.
Now, if you check the DNS settings for your domain in the DigitalOcean control panel, you will see the changes.
You can also check the record creation by running a DNS query for your domain/delegated zone using dig.
If you don't have dig installed, you will need to install the dnsutils package:
- sudo apt install dnsutils
Once you have installed dig, you can use it to run a DNS lookup for your domain. You will see that the records have been updated accordingly:
- dig +short your_domain
You will see output showing the IP address and relevant DNS record from your zone that was deployed using DNSControl. DNS records can take some time to propagate, so you may need to wait and run this command again.
In this final step, you ran a local syntax check of the DNS configuration file, then deployed it to your live DNS provider, and verified that the changes were made successfully.
In this article, you set up DNSControl and deployed a DNS configuration to a live provider. Now you can manage and test your DNS configuration changes in a safe, offline environment before deploying them to production.
If you wish to explore this subject further, DNSControl is designed to be integrated into your CI/CD pipeline, allowing you to run in-depth tests and have more control over your deployment to production. You could also look into integrating DNSControl into your infrastructure build/deployment processes, allowing you to deploy servers and add them to DNS completely automatically.
If you wish to go further with DNSControl, the following DigitalOcean articles provide some interesting next steps to help integrate DNSControl into your change management and infrastructure deployment workflows:
Shipit is a universal automation and deployment tool for Node.js developers. It features a task flow based on the popular Orchestrator package, login and interactive SSH commands through OpenSSH, and an extensible API. Developers can use Shipit to automate build and deployment workflows for a wide range of Node.js applications.
The Shipit workflow lets developers not only configure tasks, but also specify the order in which they are executed, whether they run synchronously or asynchronously, and on which environment.
In this tutorial you will install and configure Shipit to deploy a Node.js application from your local development environment to your production environment. You will use Shipit to deploy your application and configure the remote server using:
rsync, git, and ssh.
Before you begin this guide, you will need the following:
rsync and git installed on your local machine. To install git on Linux distributions, follow the How To Install Git tutorial.
An account with a service for hosted git repositories. This tutorial will use GitHub.
Note: Windows users will need to install the Windows Subsystem for Linux to execute the commands in this guide.
Shipit requires a Git repository to synchronize between the local development machine and the remote server. In this step, you will create a remote repository on Github.com. While each provider is slightly different, the commands are generally quite similar.
To create a repository, open Github.com in your web browser and log in. You will notice that in the upper-right corner of every page there is a + symbol. Click the +, and then click New repository.
Type a short, memorable name for your repository, for example, hello-world. Note that the name you choose here will be replicated as the project folder that you will work from on your local machine.
Optionally, add a description of your repository.
Set your repository's visibility to either public or private, according to your preference.
Make sure the repository is initialized with a .gitignore file by choosing Node in the Add .gitignore dropdown list. This step is important to avoid having unnecessary files (like the node_modules folder) added to your repository.
Click the Create repository button.
The repository now needs to be cloned from Github.com to your local machine.
Open your terminal and navigate to the location where you want to store all of your Node.js project files. Note that this process will create a subfolder within the current directory. To clone the repository to your local machine, run the following command:
- git clone https://github.com/your-github-username/your-github-repository-name.git
You will need to replace your-github-username and your-github-repository-name to reflect your GitHub username and the repository name you provided earlier.
Note: if you have enabled two-factor authentication (2FA) on Github.com, you must use a personal access token or SSH key instead of your password when accessing GitHub on the command line. The GitHub Help page related to 2FA provides further information.
You will see output similar to:
OutputCloning into 'your-github-repository-name'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Unpacking objects: 100% (3/3), done.
Move into the repository by running the following command:
- cd your-github-repository-name
Inside the repository there is a single file and a single folder, both of which are used by Git to manage the repository. You can verify this with:
- ls -la
You will see output similar to the following:
Outputtotal 8
0 drwxr-xr-x 4 asciant staff 128 22 Apr 07:16 .
0 drwxr-xr-x 5 asciant staff 160 22 Apr 07:16 ..
0 drwxr-xr-x 13 asciant staff 416 22 Apr 07:16 .git
8 -rw-r--r-- 1 asciant staff 914 22 Apr 07:16 .gitignore
Now that you have a working git repository set up, you will create the shipitfile.js file that manages your deployment process.
In this step, you will create an example Node.js project, then add the Shipit packages. This tutorial provides an example application: a Node.js web server that accepts HTTP requests and responds with Hello World in plain text. To create the application, run the following command:
- nano hello.js
Add the following example application code to hello.js (updating the APP_PRIVATE_IP_ADDRESS variable to your app server's private network IP address):
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(8080, 'APP_PRIVATE_IP_ADDRESS');
console.log('Server running at http://APP_PRIVATE_IP_ADDRESS:8080/');
Now, create your package.json file for your application:
- npm init -y
This command creates a package.json file, which you will use to configure your Node.js application. In the next step, you will add dependencies to this file with the npm command line interface.
OutputWrote to ~/hello-world/package.json:
{
"name": "hello-world",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
Next, install the necessary npm packages with the following command:
- npm install --save-dev shipit-cli shipit-deploy shipit-shared
You use the --save-dev flag here because the Shipit packages are only required on your local machine. You will see output similar to the following:
Output+ shipit-shared@4.4.2
+ shipit-cli@4.2.0
+ shipit-deploy@4.1.4
updated 4 packages and audited 21356 packages in 11.671s
found 62 low severity vulnerabilities
  run `npm audit fix` to fix them, or `npm audit` for details
This also added the three packages to your package.json file as development dependencies:
. . .
"devDependencies": {
"shipit-cli": "^4.2.0",
"shipit-deploy": "^4.1.4",
"shipit-shared": "^4.4.2"
},
. . .
With your local environment configured, you can now move on to preparing the remote app server for Shipit-based deployments.
In this step, you will use ssh to connect to your app server and install your remote dependency, rsync. Rsync is a utility for efficiently transferring and synchronizing files between local computer drives and across networked computers, by comparing the modification times and sizes of files.
Shipit uses rsync to transfer and synchronize files between your local computer and the remote app server. You won't be issuing any commands to rsync directly; Shipit will handle it for you.
Note: having followed How To Set Up a Node.js Application for Production on CentOS 7, you will have created two servers, app and web. These commands should be run on app only.
Connect to your remote app server via ssh:
- ssh deployer@your_app_server_ip
Install rsync on your server by running the following command:
- sudo yum install rsync
Verify the installation with:
- rsync --version
You will see a similar line in this command's output:
Outputrsync version 3.1.2 protocol version 31
. . .
You can end your ssh session by typing exit.
Now that rsync is installed and available on the command line, you can move on to deployment tasks and their relationship with events.
Events and tasks are key components of Shipit deployments, and it is important to understand how they complement the deployment of your application. The events emitted by Shipit represent specific points in the deployment lifecycle. Your tasks will execute in response to these events, based on the sequence of the Shipit lifecycle.
A common example of where this task/event system is useful in a Node.js application is installing the app's dependencies (node_modules) on the remote server. Later in this step, you will have Shipit listen for the updated event (which is emitted after the application's files are transferred) and run a task to install the application's dependencies (npm install) on the remote server.
To listen for events and execute tasks, Shipit needs a configuration file that holds information about your remote server (the app server) and registers event listeners and the commands to be executed by these tasks. This file lives on your local development computer, inside your Node.js application's directory.
To get started, create this file, including information about your remote server, the event listeners you want to subscribe to, and some definitions of your tasks. Create shipitfile.js within your application root directory on your local machine by running the following command:
- nano shipitfile.js
Now that you've created a file, it needs to be populated with the initial environment information that Shipit needs. This is primarily the location of your remote Git repository and, importantly, your app server's public IP address and SSH user account.
Add this initial configuration and update the highlighted lines to match your environment:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/sammy/your-domain',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'sammy@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
};
Updating the variables in your shipit.initConfig method provides Shipit with configuration specific to your deployment. These represent the following to Shipit:
deployTo: is the directory that Shipit will deploy your application's code to on the remote server. Here you use the /home/ folder for a non-root user with sudo privileges (/home/sammy), as it is secure and will avoid permission issues. The /your-domain component is a naming convention to distinguish this folder from others in the user's home folder.
repositoryUrl: is the URL to the full Git repository. Shipit will use this URL to make sure the project files are in sync prior to deployment.
keepReleases: is the number of releases to keep on the remote server. A release is a date-stamped folder containing your application's files at the time of that release. These can be useful for rolling back a deployment.
shared: is configuration that corresponds with keepReleases and allows directories to be shared between releases. In this instance, there is a single node_modules folder that is shared by all releases.
production: represents a remote server to deploy your application to. In this instance, you have a single server (the app server) that you name production, with the servers: configuration matching your SSH user and public ip address. The name production corresponds with the Shipit deploy command used toward the end of this tutorial (npx shipit server name deploy, or in your case npx shipit production deploy).
Further information on the Shipit Deploy Configuration object can be found in the Shipit Github repository.
Before you continue updating your shipitfile.js, review the following example code snippet to understand Shipit tasks:
Example event listener
shipit.on('deploy', () => {
shipit.start('say-hello');
});
shipit.blTask('say-hello', async () => {
shipit.local('echo "hello from your local computer"')
});
This is an example task that uses the shipit.on method to subscribe to the deploy event. This task will wait for the deploy event to be emitted by the Shipit lifecycle, and then when the event is received, the task executes the shipit.start method, which tells Shipit to start the say-hello task.
The shipit.on method takes two parameters: the name of the event to listen for and the callback function to execute when the event is received.
Under the shipit.on method declaration, the task is defined with the shipit.blTask method. This creates a new Shipit task that will block other tasks during its execution (it is a synchronous task). The shipit.blTask method also takes two parameters: the name of the task it is defining and a callback function to execute when the task is triggered by shipit.start.
Within the callback function of this example task (say-hello), the shipit.local method executes a command on the local machine. The local command echoes "hello from your local computer" to the terminal output.
If you wanted to execute a command on the remote server, you would use the shipit.remote method. The two methods, shipit.local and shipit.remote, provide an API to issue commands either locally or remotely as part of a deployment.
Now update the shipitfile.js to include event listeners that subscribe to the Shipit lifecycle with shipit.on. Add the event listeners to your shipitfile.js, inserting them following the comment placeholder from the initial configuration, // Our listeners and tasks will go here:
. . .
shipit.on('updated', () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', () => {
shipit.start('pm2-server');
});
These two methods listen for the updated and published events, which are emitted as part of the Shipit deployment lifecycle. When each event is received, it will initiate tasks using the shipit.start method, just as in the example task.
Now that you've scheduled the listeners, you will add the corresponding tasks. Add the following task to your shipitfile.js, inserting it after your event listeners:
. . .
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
// fs.writeFileSync is synchronous and throws on error, so it takes no callback
fs.writeFileSync('ecosystem.config.js', ecosystem);
console.log('File created successfully.');
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
First, you declare a task called copy-config. This task creates a local file called ecosystem.config.js and then copies that file to your remote app server. PM2 uses this file to manage your Node.js application. It provides PM2 with the necessary file path information to ensure that it is running your latest deployed files. Later in the build process, you will create a task that runs PM2 with ecosystem.config.js as its configuration.
If your application needs environment variables (like a database connection string), you can declare them either locally in env: or on the remote server in env_production:, in the same manner that you set the NODE_ENV variable in these objects.
Add the next task to your shipitfile.js following the copy-config task:
. . .
shipit.blTask('npm-install', async () => {
shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
Next, you declare a task called npm-install. This task uses a remote bash terminal (via shipit.remote) to install the application's dependencies (npm packages).
Add the last task to your shipitfile.js following the npm-install task:
. . .
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
Finally, you declare a task called pm2-server. This task also uses a remote bash terminal, first to stop PM2 from managing your previous deployment through the delete command, and then to start a new instance of your Node.js server, providing the ecosystem.config.js file as a variable. You also let PM2 know that it should use the environment variables from the production block in your initial configuration, and you ask PM2 to watch the application, restarting it if it crashes.
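The `|| :` at the end of the delete command is a small shell idiom worth noting: `:` is a built-in that does nothing and always succeeds, so the combined command succeeds even when `pm2 delete` fails (for example, on the very first deploy, when there is no previous app to delete). A quick illustration, with `false` standing in for a failing `pm2 delete`:

```shell
# 'false' always fails, simulating "pm2 delete" when the app is not running.
# '|| :' swallows that failure so the surrounding script keeps going.
false || :
echo "exit status: $?"   # prints "exit status: 0"
```

Without it, the failed delete would abort the whole pm2-server task on a fresh server.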
The complete shipitfile.js file:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/deployer/example.com',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'deployer@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
shipit.on('updated', async () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', async () => {
shipit.start('pm2-server');
});
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
// fs.writeFileSync is synchronous and throws on error, so it takes no callback
fs.writeFileSync('ecosystem.config.js', ecosystem);
console.log('File created successfully.');
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
shipit.blTask('npm-install', async () => {
shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
};
Save and exit the file when you are ready.
With your shipitfile.js configured, your event listeners defined, and the associated tasks finalized, you can move on to deploying to the app server.
In this step, you will deploy your application remotely and verify that the deployment made your application available to the internet.
Because Shipit clones the project files from the remote Git repository, you need to push your Node.js application files from your local machine to GitHub. Navigate to your Node.js project's application directory (where your hello.js and shipitfile.js files are located) and run the following command:
- git status
The git status command displays the state of the working directory and the staging area. It lets you see which changes have been staged, which haven't, and which files aren't being tracked by Git. Your files are currently untracked and appear red in the output:
OutputOn branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
hello.js
package-lock.json
package.json
shipitfile.js
nothing added to commit but untracked files present (use "git add" to track)
You can add these files to your repository with the following command:
- git add --all
This command does not produce any output, although if you were to run git status again, the files would appear green with a note that there are changes to be committed.
You can create a commit by running the following command:
- git commit -m "Our first commit"
The output of this command provides some Git-specific information about the files.
Output[master c64ea03] Our first commit
4 files changed, 1948 insertions(+)
create mode 100644 hello.js
create mode 100644 package-lock.json
create mode 100644 package.json
create mode 100644 shipitfile.js
All that's left is to push your commit to the remote repository for Shipit to clone to your app server during deployment. Run the following command:
- git push origin master
The output includes information about the synchronization with the remote repository:
OutputEnumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 15.27 KiB | 7.64 MiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To github.com:Asciant/hello-world.git
e274312..c64ea03 master -> master
To deploy your application, run the following command:
- npx shipit production deploy
The output of this command (which is too large to include in its entirety) provides detail on the tasks being executed and the result of each specific function. The following output for the pm2-server task shows that the Node.js app has been launched:
OutputRunning 'deploy:init' task...
Finished 'deploy:init' after 432 μs
. . .
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4177 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.27 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.81 s
Running 'deploy:finish' task...
Finished 'deploy:finish' after 222 μs
Finished 'deploy' [ deploy:init, deploy:fetch, deploy:update, deploy:publish, deploy:clean, deploy:finish ]
To view your application as a user would, you can enter your website URL, your-domain, in your browser to access your web server. This will serve the Node.js application, via the reverse proxy, on the app server where your files were deployed.
You will see a Hello World greeting.
Note: after the first deployment, your Git repository will be tracking a newly created file named ecosystem.config.js. Because this file will be rebuilt on each deploy, and may contain compiled application secrets, it should be added to the .gitignore file in the application root directory on your local machine before your next git commit:
. . .
# ecosystem.config
ecosystem.config.js
You have deployed your Node.js application to your app server, which now references your new deployment. With everything in place, you can move on to monitoring your application processes.
PM2 is a great tool for managing your remote processes, but it also provides features for monitoring the performance of these application processes.
Connect to your remote app server via SSH with this command:
- ssh deployer@your_app_server_ip
To get specific information related to your PM2-managed processes, run the following:
- pm2 list
You will see output similar to:
Output┌─────────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬──────┬───────────┬──────────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├─────────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼──────┼───────────┼──────────┼──────────┤
│ hello │ 0 │ 0.0.1 │ fork │ 3212 │ online │ 0 │ 62m │ 0.3% │ 45.2 MB │ deployer │ enabled │
└─────────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴──────┴───────────┴──────────┴──────────┘
You will see a summary of the information PM2 has collected. To see detailed information, you can run:
- pm2 show hello
The output expands on the summary information provided by the pm2 list command. It also provides information on a number of auxiliary commands and the locations of log files:
Output Describing process with id 0 - name hello
┌───────────────────┬─────────────────────────────────────────────────────────────┐
│ status │ online │
│ name │ hello │
│ version │ 1.0.0 │
│ restarts │ 0 │
│ uptime │ 82s │
│ script path │ /home/deployer/example.com/releases/20190531213027/hello.js │
│ script args │ N/A │
│ error log path │ /home/deployer/.pm2/logs/hello-error.log │
│ out log path │ /home/deployer/.pm2/logs/hello-out.log │
│ pid path │ /home/deployer/.pm2/pids/hello-0.pid │
│ interpreter │ node │
│ interpreter args │ N/A │
│ script id │ 0 │
│ exec cwd │ /home/deployer │
│ exec mode │ fork_mode │
│ node.js version │ 4.2.3 │
│ node env │ production │
│ watch & reload │ ✔ │
│ unstable restarts │ 0 │
│ created at │ 2019-05-31T21:30:48.334Z │
└───────────────────┴─────────────────────────────────────────────────────────────┘
Revision control metadata
┌──────────────────┬────────────────────────────────────────────────────┐
│ revision control │ git │
│ remote url │ N/A │
│ repository root │ /home/deployer/example.com/releases/20190531213027 │
│ last update │ 2019-05-31T21:30:48.559Z │
│ revision │ 62fba7c8c61c7769022484d0bfa46e756fac8099 │
│ comment │ Our first commit │
│ branch │ master │
└──────────────────┴────────────────────────────────────────────────────┘
Divergent env variables from local env
┌───────────────────────────┬───────────────────────────────────────┐
│ XDG_SESSION_ID │ 15 │
│ HOSTNAME │ N/A │
│ SELINUX_ROLE_REQUESTED │ │
│ TERM │ N/A │
│ HISTSIZE │ N/A │
│ SSH_CLIENT │ 44.222.77.111 58545 22 │
│ SELINUX_USE_CURRENT_RANGE │ │
│ SSH_TTY │ N/A │
│ LS_COLORS │ N/A │
│ MAIL │ /var/mail/deployer │
│ PATH │ /usr/local/bin:/usr/bin │
│ SELINUX_LEVEL_REQUESTED │ │
│ HISTCONTROL │ N/A │
│ SSH_CONNECTION │ 44.222.77.111 58545 209.97.167.252 22 │
└───────────────────────────┴───────────────────────────────────────┘
. . .
PM2 also provides an in-terminal monitoring tool, accessible with:
- pm2 monit
The result of this command is an interactive dashboard where pm2 provides real-time process information, logs, metrics, and metadata. This dashboard may assist with monitoring resources and error logs:
Output┌─ Process list ────────────────┐┌─ Global Logs ─────────────────────────────────────────────────────────────┐
│[ 0] hello Mem: 22 MB ││ │
│ ││ │
│ ││ │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
┌─ Custom metrics (http://bit.l─┐┌─ Metadata ────────────────────────────────────────────────────────────────┐
│ Heap Size 10.73 ││ App Name hello │
│ Heap Usage 66.14 ││ Version N/A │
│ Used Heap Size 7.10 ││ Restarts 0 │
│ Active requests 0 ││ Uptime 55s │
│ Active handles 4 ││ Script path /home/asciant/hello.js │
│ Event Loop Latency 0.70 ││ Script args N/A │
│ Event Loop Latency p95 ││ Interpreter node │
│ ││ Interpreter args N/A │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
With an understanding of how to monitor your processes with PM2, you can move on to how Shipit can help you roll back to an earlier working deployment.
End your ssh session on your app server by running exit.
Deployments occasionally expose unforeseen bugs or issues that cause your site to fail. The Shipit developers and maintainers have anticipated this and given you the ability to roll back to the previous (working) deployment of your application.
To ensure your PM2 configuration persists, add another event listener to shipitfile.js on the rollback event:
. . .
shipit.on('rollback', () => {
shipit.start('npm-install', 'copy-config');
});
You add a listener to the rollback event to run your npm-install and copy-config tasks. This is needed because, unlike the published event, the updated event is not handled by the Shipit lifecycle when rolling back a deployment. Adding this event listener ensures your PM2 process manager points to the most recent deployment, even in the event of a rollback.
This process is similar to deploying, with a small change in command. To try rolling back to a previous deployment, you can run the following:
- npx shipit production rollback
Like the deploy command, rollback provides details on the rollback process and the tasks being executed:
OutputRunning 'rollback:init' task...
Get current release dirname.
Running "if [ -h /home/deployer/example.com/current ]; then readlink /home/deployer/example.com/current; fi" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com releases/20190531213719
Current release dirname : 20190531213719.
Getting dist releases.
Running "ls -r1 /home/deployer/example.com/releases" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com 20190531213719
@centos-ap-app.asciant.com 20190531213519
@centos-ap-app.asciant.com 20190531213027
Dist releases : ["20190531213719","20190531213519","20190531213027"].
Will rollback to 20190531213519.
Finished 'rollback:init' after 3.96 s
Running 'deploy:publish' task...
Publishing release "/home/deployer/example.com/releases/20190531213519"
Running "cd /home/deployer/example.com && if [ -d current ] && [ ! -L current ]; then echo "ERR: could not make symlink"; else ln -nfs releases/20190531213519 current_tmp && mv -fT current_tmp current; fi" on host "centos-ap-app.asciant.com".
Release published.
Finished 'deploy:publish' after 1.8 s
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4289 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.55 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.82 s
Running 'rollback:finish' task...
Finished 'rollback:finish' after 615 μs
Finished 'rollback' [ rollback:init, deploy:publish, deploy:clean, rollback:finish ]
You configured Shipit to keep 5 releases through the keepReleases: 5 configuration in shipitfile.js. Shipit tracks these releases internally to ensure it can roll back when required. Shipit also provides a convenient way to identify the releases by creating a directory named as a timestamp (YYYYMMDDHHmmss, for example: /home/deployer/your-domain/releases/20190420210548).
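The timestamp naming scheme can be reproduced with a short Node.js sketch. This is illustrative only; Shipit generates these directory names internally, and the function below is invented for demonstration:

```javascript
// Sketch: build a YYYYMMDDHHmmss release directory name.
// Illustrative only; Shipit generates these names internally.
function releaseDirname(date) {
  const pad = n => String(n).padStart(2, '0');
  return (
    date.getFullYear().toString() +
    pad(date.getMonth() + 1) + // JavaScript months are zero-indexed
    pad(date.getDate()) +
    pad(date.getHours()) +
    pad(date.getMinutes()) +
    pad(date.getSeconds())
  );
}

console.log(releaseDirname(new Date(2019, 3, 20, 21, 5, 48))); // "20190420210548"
```

Because the names sort lexicographically in date order, commands like `ls -r` can list releases newest-first, which is how the cleanup step above keeps only the latest releases.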
If you wanted to customize the rollback process further, you can listen for events specific to the rollback operation. You can then use these events to execute tasks that complement your rollback. You can refer to the event list provided in the Shipit lifecycle breakdown and configure the tasks/listeners within your shipitfile.js.
The ability to roll back means that you can always serve a functioning version of your application to your users, even if a deployment introduces unexpected bugs or issues.
In this tutorial, you configured a workflow that lets you create a highly customizable alternative to a Platform as a Service (PaaS), all from a couple of servers. This workflow allows for customized deployment and configuration, process monitoring with PM2, and the potential to scale and add services, servers, or environments to the deployment when required.
If you are interested in continuing to develop your Node.js skills, check out the DigitalOcean Node.js content as well as the How To Code in Node.js series.
Version control systems are an indispensable part of modern software development. Versioning allows you to keep track of your software at the source level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
One of the most popular version control systems currently available is Git. Many projects’ files are maintained in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this guide, we will go through how to install and configure Git on a CentOS 8 server. We will cover how to install the software two different ways: via the built-in package manager and via source. Each of these approaches has its own benefits depending on your specific needs.
You will need a CentOS 8 server with a non-root superuser account.
To set this up, you can follow our Initial Server Setup Guide for CentOS 8.
With your server and user set up, you are ready to begin.
Our first option to install Git is via CentOS’s default packages.
This option is best for those who want to get up and running quickly with Git, those who prefer a widely-used stable version, or those who are not looking for the newest available options. If you are looking for the most recent release, you should jump to the section on installing from source.
We will be using the open-source package manager tool DNF, which stands for Dandified YUM, the next-generation version of the Yellowdog Updater, Modified (that is, yum). DNF is now the default package manager for Red Hat-based Linux systems like CentOS. It will let you install, update, and remove software packages on your server.
First, use the DNF package management tools to update your local package index.
- sudo dnf update -y
The -y
flag automatically confirms the changes, preventing the terminal from prompting you to approve each one.
With the update complete, you can install Git:
- sudo dnf install git -y
You can confirm that you have installed Git correctly by running the following command:
- git --version
Outputgit version 2.18.2
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and will give you some control over the options you include if you wish to customize.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages.
- sudo dnf update -y
- sudo dnf install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel gcc autoconf -y
After you have installed the necessary dependencies, create a temporary directory and move into it. This is where we will download our Git tarball.
- mkdir tmp
- cd tmp
From the Git project website, we can navigate to the Red Hat Linux distribution tarball list available at https://mirrors.edge.kernel.org/pub/software/scm/git/ and download the version you would like. At the time of writing, the most recent version is 2.26.0, so we will download that for demonstration purposes. We’ll use curl and output the file we download to git.tar.gz
.
- curl -o git.tar.gz https://mirrors.edge.kernel.org/pub/software/scm/git/git-2.26.0.tar.gz
Unpack the compressed tarball file:
- tar -zxf git.tar.gz
Next, move into the new Git directory:
- cd git-*
Now, you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
With this complete, you can confirm that your installation was successful by checking the version.
- git --version
Outputgit version 2.26.0
With Git successfully installed, you can now complete your setup.
Now that you have Git installed, you should configure it so that the generated commit messages will contain your correct information.
This can be achieved by using the git config
command. Specifically, we need to provide our name and email address because Git embeds this information into each commit we make. We can go ahead and add this information by typing:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
We can display all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Your Name
user.email=youremail@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor like this:
- vi ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Press ESC
then :q
to exit the text editor.
There are many other options you can set, but these two are essential. If you skip this step, you will likely see warnings when you commit to Git. This creates more work for you, because you will then have to amend the commits you have made with the corrected information.
You should now have Git installed and ready to use on your system.
To learn more about how to use Git, check out these articles and series:
Shipit is a universal automation and deployment tool for Node.js developers. It features a task flow based on the popular package Orchestrator, login and interactive SSH commands through OpenSSH, and an extensible API. Developers can use Shipit to automate build and deployment workflows for a wide range of Node.js applications.
The Shipit workflow lets developers not only configure tasks, but also specify the order in which they are executed, whether they run synchronously or asynchronously, and in which environment.
In this tutorial you will install and configure Shipit to deploy a Node.js application from your local development environment to your production environment. You will use Shipit to deploy your application and configure the remote server as follows:
(rsync, git, and ssh).
Before starting this tutorial, you will need the following:
rsync and git installed.
A hosted git service account. This tutorial will use GitHub.
hospedados. Este tutorial usará o GitHub.Nota: os usuários do Windows terão que instalar o Subsistema Windows para o Linux para executar os comandos neste guia.
Shipit requires a Git repository to synchronize between the local development machine and the remote server. In this step, you will create a remote repository on Github.com. While each provider is slightly different, the commands are somewhat transferable.
To create a repository, open Github.com in your web browser and log in. You will notice that in the upper corner of every page there is a + (plus) symbol. Click the +, then click New repository.
Enter a short, memorable name for your repository, for example, hello-world. Note that whatever name you choose here will be replicated as the project folder that you will work from on your local machine.
Optionally, add a description of your repository.
Set your repository's visibility to your preference, either public or private.
Make sure the repository is initialized with a .gitignore by selecting Node from the Add .gitignore dropdown list. This step is important to prevent unnecessary files (like the node_modules folder) from being added to your repository.
Click the Create repository button.
Now the repository needs to be cloned from Github.com to your local machine.
Open your terminal and navigate to the location where you want to store all of your Node.js project files. Note that this process will create a subfolder within the current directory. To clone the repository to your local machine, run the following command:
- git clone https://github.com/your-github-username/your-github-repository-name.git
You will need to replace your-github-username and your-github-repository-name to reflect your Github username and the previously supplied repository name.
Note: If you have enabled two-factor authentication (2FA) on Github.com, you must use a personal access token or SSH key instead of your password when accessing Github on the command line. The Github help page related to 2FA provides more information.
You will see output similar to this:
OutputCloning into 'your-github-repository-name'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Unpacking objects: 100% (3/3), done.
Move into the repository by running the following command:
- cd your-github-repository-name
Inside the repository there is a single file and folder, both of which are used by Git to manage the repository. You can verify this with:
- ls -la
You will see output similar to this:
Outputtotal 8
0 drwxr-xr-x 4 asciant staff 128 22 Apr 07:16 .
0 drwxr-xr-x 5 asciant staff 160 22 Apr 07:16 ..
0 drwxr-xr-x 13 asciant staff 416 22 Apr 07:16 .git
8 -rw-r--r-- 1 asciant staff 914 22 Apr 07:16 .gitignore
Now that you have a working git repository set up, you will create the shipitfile.js file that manages your deployment process.
In this step, you will create an example Node.js project and then add the Shipit packages. This tutorial provides an example app: a Node.js web server that accepts HTTP requests and responds with Hello World in plain text. To create the application, run the following command:
- nano hello.js
Add the following example application code to hello.js (updating the APP_PRIVATE_IP_ADDRESS variable with your app server's private network IP address):
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(8080, 'APP_PRIVATE_IP_ADDRESS');
console.log('Server running at http://APP_PRIVATE_IP_ADDRESS:8080/');
Now create the package.json file for your application:
- npm init -y
This command creates a package.json file, which you will use to configure your Node.js application. In the next step, you will add dependencies to this file with the npm command-line interface.
OutputWrote to ~/hello-world/package.json:
{
"name": "hello-world",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
Next, install the required npm packages with the following command:
- npm install --save-dev shipit-cli shipit-deploy shipit-shared
Here you use the --save-dev flag, since the Shipit packages are only required on your local machine. You will see output similar to this:
Output+ shipit-shared@4.4.2
+ shipit-cli@4.2.0
+ shipit-deploy@4.1.4
updated 4 packages and audited 21356 packages in 11.671s
found 62 low severity vulnerabilities run `npm audit fix` to fix them, or `npm audit` for details
This also added the three packages to your package.json file as development dependencies:
. . .
"devDependencies": {
"shipit-cli": "^4.2.0",
"shipit-deploy": "^4.1.4",
"shipit-shared": "^4.4.2"
},
. . .
With your local environment configured, you can now move on to preparing the remote app server for Shipit-based deployments.
In this step, you will use ssh to connect to your app server and install the remote dependency rsync. Rsync is a utility for efficiently transferring and synchronizing files between local computer drives and networked computers by comparing modification times and file sizes.
Shipit uses rsync to transfer and synchronize files between your local computer and the remote app server. You won't issue any commands directly to rsync; Shipit will do it for you.
Note: the How To Set Up a Node.js Application for Production on CentOS 7 tutorial left you with two servers, app and web. These commands should only be run on app.
Connect to your remote app server via ssh:
- ssh deployer@your_app_server_ip
Install rsync on your server by running the following command:
- sudo yum install rsync
Confirm your installation with:
- rsync --version
You will see a similar line in this command's output:
Outputrsync version 3.1.2 protocol version 31
. . .
You can end your ssh session by typing exit.
With rsync installed and available on the command line, you can move on to deployment tasks and their relationship to events.
Both events and tasks are key components of Shipit deployments, and it is important to understand how they complement the deployment of your application. The events triggered by Shipit represent specific points in the deployment lifecycle. Your tasks will execute in response to these events, based on the sequence of the Shipit lifecycle.
A common example of where this task/event system is useful in a Node.js application is the installation of your app's dependencies (node_modules) on the remote server. Later in this step, you will have Shipit listen for the updated event (which is emitted after the application's files are transferred) and run a task to install the application's dependencies (npm install) on the remote server.
To listen for events and execute tasks, Shipit needs a configuration file that holds information about your remote server (the app server) and registers event listeners and the commands to be executed by these tasks. This file lives on your local development computer, in your Node.js application's directory.
To get started, create this file, including information about your remote server, the event listeners you want to subscribe to, and some definitions of your tasks. Create shipitfile.js in your application's root directory on your local machine by running the following command:
- nano shipitfile.js
Now that you've created a file, it needs to be populated with the initial environment information that Shipit needs. Primarily this is the location of your remote Git repository and, importantly, your app server's public IP address and SSH user account.
Add this initial configuration and update the highlighted lines to match your environment:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/sammy/your-domain',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'sammy@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
};
Updating the variables in your shipit.initConfig method provides Shipit with configuration specific to your deployment. These variables represent the following to Shipit:
deployTo: is the directory where Shipit will deploy your application's code to on the remote server. Here you use the /home/ folder for a non-root user with sudo privileges (/home/sammy), since it is secure and will avoid permission issues. The /your-domain component is a naming convention to distinguish the folder from others in the user's home folder.
repositoryUrl: is the URL to the full Git repository. Shipit will use this URL to ensure the project files are in sync prior to deployment.
keepReleases: is the number of releases to keep on the remote server. A release is a date-stamped folder containing your application's files at the time of release. These can be useful for rolling back a deployment.
shared: is configuration that corresponds with keepReleases and allows directories to be shared between releases. In this instance, there is a single node_modules folder that is shared between all releases.
production: represents a remote server to deploy your application to. In this instance, you have a single server (the app server) that you name production, with the servers: configuration matching your SSH user and public ip address. The production name corresponds with the Shipit deploy command used at the end of this tutorial (npx shipit server name deploy or, in your case, npx shipit production deploy).
Further information on the Shipit Deploy Configuration object can be found in the Shipit Github repository.
Before you continue updating your shipitfile.js, review the following example code snippet to understand Shipit tasks:
Example event listenershipit.on('deploy', () => {
shipit.start('say-hello');
});
shipit.blTask('say-hello', async () => {
shipit.local('echo "hello from your local computer"')
});
This is an example task that uses the shipit.on method to subscribe to the deploy event. This task will wait for the deploy event to be emitted by the Shipit lifecycle. Once the event is received, the task executes the shipit.start method, which tells Shipit to start the say-hello task.
The shipit.on method takes two parameters: the name of the event to listen for and the callback function to execute when the event is received.
Under the shipit.on method declaration, the task is defined with the shipit.blTask method. This creates a Shipit task that will block other tasks during its execution (it is a synchronous task). The shipit.blTask method also takes two parameters: the name of the task it is defining and a callback function to execute when the task is triggered by shipit.start.
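The blocking behavior can be modeled with a short, self-contained sketch. This is not Shipit's implementation, just an illustration of why one blocking task finishes before the next one starts; the task names are borrowed from later in this tutorial:

```javascript
// Rough model of blocking task execution (NOT Shipit's internals):
// each registered task must finish before the next one starts.
const tasks = [];
const order = [];

function blTask(name, fn) {
  tasks.push({ name, fn });
}

async function runAll() {
  for (const { name, fn } of tasks) {
    await fn(); // a blocking task holds up the queue until it resolves
    order.push(name);
  }
}

blTask('npm-install', async () => { /* e.g. install dependencies */ });
blTask('copy-config', async () => { /* e.g. copy the ecosystem file */ });

runAll().then(() => console.log(order.join(' -> '))); // npm-install -> copy-config
```

Because each `await` resolves before the loop advances, the tasks always complete in registration order, which is the guarantee a blocking task gives you during a deployment.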
Within the callback function of this example task (say-hello), the shipit.local method executes a command on the local machine. The local command echoes "hello from your local computer" into the terminal output.
If you wanted to execute a command on the remote server, you would use the shipit.remote method. The two methods, shipit.local and shipit.remote, provide an API to issue commands either locally or remotely as part of a deployment.
Now update shipitfile.js to include event listeners that subscribe to the Shipit lifecycle with shipit.on. Add the event listeners to your shipitfile.js, inserting them after the initial configuration's comment placeholder // Our listeners and tasks will go here:
. . .
shipit.on('updated', () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', () => {
shipit.start('pm2-server');
});
These two methods are listening for the updated and published events, which are emitted as part of the Shipit deployment lifecycle. When each event is received, it will initiate tasks using the shipit.start method, just like the example task.
Now that you've scheduled the listeners, you will add the corresponding tasks. Add the following task to your shipitfile.js, inserting it after your event listeners:
. . .
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
fs.writeFileSync('ecosystem.config.js', ecosystem, function(err) {
if (err) throw err;
console.log('File created successfully.');
});
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
First, you declare a task named copy-config. This task creates a local file named ecosystem.config.js and then copies that file to your remote app server. PM2 uses this file to manage your Node.js application. It provides PM2 with the necessary file path information to ensure it is running your most recently deployed files. Later in the build process, you will create a task that runs PM2 with ecosystem.config.js as its configuration.
If your application needs environment variables (like a database connection string), you can declare them either locally in env: or on the remote server in env_production: in the same way you set the NODE_ENV variable in these objects.
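For instance, a hypothetical database connection string could sit alongside NODE_ENV in both blocks. This is a sketch only: DB_URI and its values are invented for illustration and are not part of this tutorial's app:

```javascript
// Illustrative fragment of a PM2 app config with an extra environment
// variable. DB_URI and its values are hypothetical examples.
const appConfig = {
  name: 'hello',
  script: 'hello.js',
  env: {
    NODE_ENV: 'development',
    DB_URI: 'mongodb://localhost/hello-dev'
  },
  env_production: {
    NODE_ENV: 'production',
    DB_URI: 'mongodb://localhost/hello'
  }
};

// When PM2 starts the app with --env production, the values from the
// env_production block are applied instead of those in env.
console.log(appConfig.env_production.NODE_ENV); // "production"
```

Keeping secrets like connection strings out of Git is one more reason ecosystem.config.js belongs in your .gitignore.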
Add the next task to your shipitfile.js after the copy-config task:
. . .
shipit.blTask('npm-install', async () => {
shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
Next, you declare a task named npm-install. This task uses a remote bash terminal (via shipit.remote) to install the app's dependencies (npm packages).
Add the final task to your shipitfile.js after the npm-install task:
. . .
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
Finally, you declare a task named pm2-server. This task also uses a remote bash shell, first to stop PM2 from managing your previous deployment via the delete command (the appended || : is a shell no-op that keeps the command from failing when there is no previous process to delete), and then to start a new instance of your Node.js server, passing the ecosystem.config.js file as an argument. You also tell PM2 to use the environment variables from the production block of your initial configuration, and you ask PM2 to watch the application, restarting it if it crashes.
The complete shipitfile.js file:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/deployer/example.com',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'deployer@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
shipit.on('updated', async () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', async () => {
shipit.start('pm2-server');
});
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
// fs.writeFileSync is synchronous and takes no callback; it throws on failure.
fs.writeFileSync('ecosystem.config.js', ecosystem);
console.log('File created successfully.');
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
shipit.blTask('npm-install', async () => {
await shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
};
Save and exit the file when you are ready.
With your shipitfile.js configured and your event listeners and associated tasks finalized, you can move on to deploying to the app server.
In this step, you will deploy your application remotely and test that the deployment makes your application available on the internet.
Because Shipit clones the project files from the remote Git repository, you need to push your local Node.js application files from your local machine to GitHub. Navigate to your Node.js project's application directory (where hello.js and shipitfile.js are located) and run the following command:
- git status
The git status command shows the state of the working directory and the staging area. It lets you see which changes have been staged, which have not, and which files Git is not tracking. Your files are untracked and appear in red in the output:
OutputOn branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
hello.js
package-lock.json
package.json
shipitfile.js
nothing added to commit but untracked files present (use "git add" to track)
You can add these files to your repository with the following command:
- git add --all
This command produces no output, although if you were to run git status again, the files would appear in green with a note that there are changes to be committed.
You can create a commit by running the following command:
- git commit -m "Our first commit"
The output of this command provides some Git-specific information about the files.
Output[master c64ea03] Our first commit
4 files changed, 1948 insertions(+)
create mode 100644 hello.js
create mode 100644 package-lock.json
create mode 100644 package.json
create mode 100644 shipitfile.js
Now all that remains is to push your commit to the remote repository so that Shipit can clone it to your app server during deployment. Run the following command:
- git push origin master
The output includes information about the synchronization with the remote repository:
OutputEnumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 15.27 KiB | 7.64 MiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To github.com:Asciant/hello-world.git
e274312..c64ea03 master -> master
To deploy your application, run the following command:
- npx shipit production deploy
The output of this command (too long to include in full) provides detail on the tasks being executed and the result of each function. The following output for the pm2-server task shows that the Node.js app has been started:
OutputRunning 'deploy:init' task...
Finished 'deploy:init' after 432 μs
. . .
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4177 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.27 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.81 s
Running 'deploy:finish' task...
Finished 'deploy:finish' after 222 μs
Finished 'deploy' [ deploy:init, deploy:fetch, deploy:update, deploy:publish, deploy:clean, deploy:finish ]
To view your application as a user would, you can enter your site's URL, your-domain, in your browser to reach the web server. This will serve the Node.js application, via reverse proxy, from the app server where your files were deployed.
You will see a Hello World greeting.
Note: after the first deployment, your Git repository will track a newly created file named ecosystem.config.js. Because this file is rebuilt on every deployment, and may contain compiled application secrets, it should be added to the .gitignore file in the application's root directory on your local machine before your next git commit.
. . .
# ecosystem.config
ecosystem.config.js
You have deployed your Node.js application to your app server, which now serves your new deployment. With everything up and running, you can move on to monitoring your application processes.
PM2 is a great tool for managing your remote processes, and it also provides features for monitoring the performance of these application processes.
Connect to your remote app server via SSH with this command:
- ssh deployer@your_app_server_ip
For specific information related to your PM2-managed processes, run the following:
- pm2 list
You will see output similar to this:
Output┌─────────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬──────┬───────────┬──────────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├─────────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼──────┼───────────┼──────────┼──────────┤
│ hello │ 0 │ 0.0.1 │ fork │ 3212 │ online │ 0 │ 62m │ 0.3% │ 45.2 MB │ deployer │ enabled │
└─────────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴──────┴───────────┴──────────┴──────────┘
You will see a summary of the information PM2 has collected. To see detailed information, run:
- pm2 show hello
The output expands on the summary information provided by the pm2 list command. It also provides information on a number of auxiliary commands and gives the locations of the log files:
Output Describing process with id 0 - name hello
┌───────────────────┬─────────────────────────────────────────────────────────────┐
│ status │ online │
│ name │ hello │
│ version │ 1.0.0 │
│ restarts │ 0 │
│ uptime │ 82s │
│ script path │ /home/deployer/example.com/releases/20190531213027/hello.js │
│ script args │ N/A │
│ error log path │ /home/deployer/.pm2/logs/hello-error.log │
│ out log path │ /home/deployer/.pm2/logs/hello-out.log │
│ pid path │ /home/deployer/.pm2/pids/hello-0.pid │
│ interpreter │ node │
│ interpreter args │ N/A │
│ script id │ 0 │
│ exec cwd │ /home/deployer │
│ exec mode │ fork_mode │
│ node.js version │ 4.2.3 │
│ node env │ production │
│ watch & reload │ ✔ │
│ unstable restarts │ 0 │
│ created at │ 2019-05-31T21:30:48.334Z │
└───────────────────┴─────────────────────────────────────────────────────────────┘
Revision control metadata
┌──────────────────┬────────────────────────────────────────────────────┐
│ revision control │ git │
│ remote url │ N/A │
│ repository root │ /home/deployer/example.com/releases/20190531213027 │
│ last update │ 2019-05-31T21:30:48.559Z │
│ revision │ 62fba7c8c61c7769022484d0bfa46e756fac8099 │
│ comment │ Our first commit │
│ branch │ master │
└──────────────────┴────────────────────────────────────────────────────┘
Divergent env variables from local env
┌───────────────────────────┬───────────────────────────────────────┐
│ XDG_SESSION_ID │ 15 │
│ HOSTNAME │ N/A │
│ SELINUX_ROLE_REQUESTED │ │
│ TERM │ N/A │
│ HISTSIZE │ N/A │
│ SSH_CLIENT │ 44.222.77.111 58545 22 │
│ SELINUX_USE_CURRENT_RANGE │ │
│ SSH_TTY │ N/A │
│ LS_COLORS │ N/A │
│ MAIL │ /var/mail/deployer │
│ PATH │ /usr/local/bin:/usr/bin │
│ SELINUX_LEVEL_REQUESTED │ │
│ HISTCONTROL │ N/A │
│ SSH_CONNECTION │ 44.222.77.111 58545 209.97.167.252 22 │
└───────────────────────────┴───────────────────────────────────────┘
. . .
PM2 also provides an in-terminal monitoring tool, which can be accessed with:
- pm2 monit
The output of this command is an interactive dashboard, where pm2 provides real-time process information, logs, metrics, and metadata. This dashboard can assist with monitoring resources and error logs:
Output┌─ Process list ────────────────┐┌─ Global Logs ─────────────────────────────────────────────────────────────┐
│[ 0] hello Mem: 22 MB ││ │
│ ││ │
│ ││ │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
┌─ Custom metrics (http://bit.l─┐┌─ Metadata ────────────────────────────────────────────────────────────────┐
│ Heap Size 10.73 ││ App Name hello │
│ Heap Usage 66.14 ││ Version N/A │
│ Used Heap Size 7.10 ││ Restarts 0 │
│ Active requests 0 ││ Uptime 55s │
│ Active handles 4 ││ Script path /home/asciant/hello.js │
│ Event Loop Latency 0.70 ││ Script args N/A │
│ Event Loop Latency p95 ││ Interpreter node │
│ ││ Interpreter args N/A │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
With an understanding of how you can monitor your processes with PM2, you can move on to how Shipit can assist with rolling back to a previous working deployment.
End your ssh session on your app server by running exit.
Sometimes a deployment reveals unforeseen bugs, or issues that cause your site to fail. The developers and maintainers of Shipit anticipated this and have provided the ability to roll back to a previous (working) deployment of your application.
To ensure your PM2 configuration persists, add another event listener to shipitfile.js for the rollback event:
. . .
shipit.on('rollback', () => {
shipit.start('npm-install', 'copy-config');
});
You add a listener to the rollback event to run your npm-install and copy-config tasks. This is needed because, unlike the published event, the updated event is not run by the Shipit lifecycle when rolling back a deployment. Adding this event listener ensures that your PM2 process manager points to the most recent deployment, even in the event of a rollback.
This process is similar to deploying, with a minor change in command. To try rolling back to a previous deployment, run the following:
- npx shipit production rollback
As with the deploy command, rollback provides details about the rollback process and the tasks being executed:
OutputRunning 'rollback:init' task...
Get current release dirname.
Running "if [ -h /home/deployer/example.com/current ]; then readlink /home/deployer/example.com/current; fi" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com releases/20190531213719
Current release dirname : 20190531213719.
Getting dist releases.
Running "ls -r1 /home/deployer/example.com/releases" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com 20190531213719
@centos-ap-app.asciant.com 20190531213519
@centos-ap-app.asciant.com 20190531213027
Dist releases : ["20190531213719","20190531213519","20190531213027"].
Will rollback to 20190531213519.
Finished 'rollback:init' after 3.96 s
Running 'deploy:publish' task...
Publishing release "/home/deployer/example.com/releases/20190531213519"
Running "cd /home/deployer/example.com && if [ -d current ] && [ ! -L current ]; then echo "ERR: could not make symlink"; else ln -nfs releases/20190531213519 current_tmp && mv -fT current_tmp current; fi" on host "centos-ap-app.asciant.com".
Release published.
Finished 'deploy:publish' after 1.8 s
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4289 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.55 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.82 s
Running 'rollback:finish' task...
Finished 'rollback:finish' after 615 μs
Finished 'rollback' [ rollback:init, deploy:publish, deploy:clean, rollback:finish ]
You configured Shipit to keep 5 releases through the keepReleases: 5 setting in shipitfile.js. Shipit tracks these releases internally to ensure it can roll back when required. Shipit also provides a convenient way to identify the releases by creating a directory named with a timestamp (YYYYMMDDHHmmss, for example: /home/deployer/your-domain/releases/20190420210548).
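As an illustration of that naming scheme (a sketch, not Shipit's actual implementation), a YYYYMMDDHHmmss directory name can be derived from a date like this:

```javascript
// Sketch: derive a YYYYMMDDHHmmss release directory name from a date.
// Shipit's real implementation may differ in detail.
function releaseDirname(date) {
  const pad = n => String(n).padStart(2, '0');
  return (
    date.getUTCFullYear().toString() +
    pad(date.getUTCMonth() + 1) +
    pad(date.getUTCDate()) +
    pad(date.getUTCHours()) +
    pad(date.getUTCMinutes()) +
    pad(date.getUTCSeconds())
  );
}

// 2019-04-20 21:05:48 UTC -> '20190420210548'
console.log(releaseDirname(new Date(Date.UTC(2019, 3, 20, 21, 5, 48))));
```

Because the names sort lexicographically in chronological order, commands like the `ls -rd … | head -n 5` seen in the deploy:clean output can keep the newest releases with plain string sorting.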
If you wanted to customize the rollback process further, you can listen for events specific to the rollback operation. You can then use these events to execute tasks that complement your rollback. You can refer to the event list provided in the Shipit lifecycle breakdown and configure the tasks/listeners within your shipitfile.js.
The ability to roll back means that you can always serve a functioning version of your application to your users, even if a deployment introduces unexpected bugs or issues.
In this tutorial, you configured a workflow that lets you create a highly customizable alternative to a Platform as a Service, all from a couple of servers. This workflow allows for custom deployments and configuration, process monitoring with PM2, and the ability to scale and add services, or additional servers or environments, to the deployment when required.
If you are interested in continuing to develop your Node.js skills, check out DigitalOcean's Node.js content as well as the How To Code in Node.js series.
liz@ubuntu-server:/var/www$ sudo git clone git@github.com:lizyorick/markdown-portfolio.git
[sudo] password for liz:
Cloning into 'markdown-portfolio'...
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
liz@ubuntu-server:/var/www$ git clone git@github.com:lizyorick/markdown-portfolio.git
fatal: could not create work tree dir 'markdown-portfolio': Permission denied
liz@ubuntu-server:/var/www$ ls -la
total 12
drwxr-xr-x  3 root root 4096 Mar 31 17:04 .
drwxr-xr-x 14 root root 4096 Mar 31 16:00 ..
drwxr-xr-x  2 root root 4096 Mar 31 16:00 html
Shouldn't my username be in place of root when I ls? How do I fix this? Please help!!
Liz
Shipit is a universal automation and deployment tool for Node.js developers. It features a task flow based on the popular Orchestrator package, login and interactive SSH commands through OpenSSH, and an extensible API. Developers can use Shipit to automate build and deployment workflows for a wide range of Node.js applications.
The Shipit workflow lets developers not only configure tasks, but also specify the order in which they execute, whether they run synchronously or asynchronously, and in which environment.
In this tutorial you will install and configure Shipit to deploy a Node.js application from your local development environment to your production environment. You will use Shipit to deploy your application and configure the remote server by:
rsync
, git
and ssh
);
Before beginning this tutorial, you will need the following:
rsync and git installed.
To install git
on Linux distributions, follow the How To Install Git tutorial.
. En este tutorial se usará GitHub.Nota: Los usuarios de Windows deberán instalar el subsistema de Windows para Linux para ejecutar los comandos de esta guía.
Shipit requires a Git repository to synchronize between the local development machine and the remote server. In this step, you will create a remote repository on Github.com. While each provider is slightly different, the commands are largely transferable.
To create a repository, open Github.com in your web browser and log in. You will see a + symbol in the upper-right corner of any page. Click +, then New repository.
Enter a short, memorable name for your repository, for example, hello-world. Note that whatever name you choose here will be replicated as the project folder that you will work from on your local machine.
Optionally, you can add a description of your repository.
Set your repository's visibility to public or private, according to your preference.
Make sure the repository is initialized with a .gitignore; select Node from the Add .gitignore dropdown list. This step is important to keep unnecessary files (like the node_modules folder) out of your repository.
Click the Create repository button.
The repository now needs to be cloned from Github.com to your local machine.
Open your terminal and navigate to the location where you want to store all of your Node.js project files. Note that this process will create a sub-folder within the current directory. To clone the repository to your local machine, run the following command:
- git clone https://github.com/your-github-username/your-github-repository-name.git
You will need to substitute your-github-username and your-github-repository-name to reflect your Github username and the repository name you supplied earlier.
Note: If you have enabled two-factor authentication (2FA) on Github.com, you must use a personal access token or SSH key instead of your password when accessing Github on the command line. The Github help page related to 2FA provides more information.
You will see output similar to the following:
OutputCloning into 'your-github-repository-name'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Unpacking objects: 100% (3/3), done.
Navigate to the repository by running the following command:
- cd your-github-repository-name
Inside the repository there is a single file and folder, both used by Git to manage the repository. You can verify this with:
- ls -la
You will see output similar to the following:
Outputtotal 8
0 drwxr-xr-x 4 asciant staff 128 22 Apr 07:16 .
0 drwxr-xr-x 5 asciant staff 160 22 Apr 07:16 ..
0 drwxr-xr-x 13 asciant staff 416 22 Apr 07:16 .git
8 -rw-r--r-- 1 asciant staff 914 22 Apr 07:16 .gitignore
Now that you have a working git repository set up, you will create the shipitfile.js file that manages your deployment process.
In this step, you will create an example Node.js project and then add the Shipit packages. This tutorial uses an example application: a Node.js web server that accepts HTTP requests and responds with Hello World in plain text. To create the application, run the following command:
- nano hello.js
Add the following example application code to hello.js (updating the APP_PRIVATE_IP_ADDRESS variable to your app server's private IP address):
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(8080, 'APP_PRIVATE_IP_ADDRESS');
console.log('Server running at http://APP_PRIVATE_IP_ADDRESS:8080/');
Next, create the package.json file for your application:
- npm init -y
This command creates a package.json file, which you will use to configure your Node.js application. In the next step, you will add dependencies to this file with the npm command-line interface.
OutputWrote to ~/hello-world/package.json:
{
"name": "hello-world",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
Next, install the necessary npm packages with the following command:
- npm install --save-dev shipit-cli shipit-deploy shipit-shared
You use the --save-dev flag here because the Shipit packages are only required on your local machine. You will see output similar to the following:
Output+ shipit-shared@4.4.2
+ shipit-cli@4.2.0
+ shipit-deploy@4.1.4
updated 4 packages and audited 21356 packages in 11.671s
found 62 low severity vulnerabilities run `npm audit fix` to fix them, or `npm audit` for details
This also added the three packages to your package.json file as development dependencies:
. . .
"devDependencies": {
"shipit-cli": "^4.2.0",
"shipit-deploy": "^4.1.4",
"shipit-shared": "^4.4.2"
},
. . .
Now that your local environment is configured, you can move on to preparing the remote app server for Shipit-based deployments.
In this step, you will use ssh to connect to your app server and install the remote dependency, rsync. Rsync is a utility for efficiently transferring and synchronizing files between local computer drives and across networked computers by comparing the modification times and sizes of files.
Shipit uses rsync to transfer and synchronize files between your local computer and the remote app server. You won't issue any commands to rsync directly; Shipit handles it for you.
Note: In How To Set Up a Node.js Application for Production on CentOS 7 you set up two servers, app and web. These commands should be executed on app only.
Connect to your remote app server via ssh:
- ssh deployer@your_app_server_ip
Install rsync on your server by running the following command:
- sudo yum install rsync
Confirm the installation with:
- rsync --version
You will see a similar line within this command's output:
Outputrsync version 3.1.2 protocol version 31
. . .
You can end your ssh session by typing exit.
With rsync installed and available on the command line, you can move on to the deployment tasks and how they relate to events.
Both events and tasks are key components of Shipit deployments, and it is important to understand how they complement the deployment of your application. The events triggered by Shipit represent specific points in the deployment lifecycle. Your tasks execute in response to these events, based on the sequence of the Shipit lifecycle.
A common example of where this task-and-event system is useful in a Node.js application is installing the application's dependencies (node_modules) on the remote server. Later in this step, you will have Shipit listen for the updated event (emitted after the application's files are transferred) and run a task that installs the application's dependencies (npm install) on the remote server.
To listen for events and execute tasks, Shipit needs a configuration file that holds information about your remote (app) server and registers the event listeners and the commands your tasks will execute. This file lives on your local development machine, inside your Node.js application's directory.
To get started, create this file, including information about your remote server, the event listeners you want to subscribe to, and some definitions of your tasks. Create shipitfile.js within your application's root directory on your local machine by running the following command:
- nano shipitfile.js
Now that you've created a file, it needs to be populated with the initial environment information Shipit requires. This is primarily the location of your remote Git repository and, importantly, your app server's public IP address and SSH user account.
Add this initial configuration and update the highlighted lines to match your environment:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/sammy/your-domain',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'sammy@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
};
Updating the variables in your shipit.initConfig method provides Shipit with configuration specific to your deployment. These represent the following to Shipit:
deployTo: is the directory where Shipit will deploy your application's code on the remote server. Here you use the /home/ folder for a non-root user with sudo privileges (/home/sammy), since it is secure and avoids permission issues. The /your-domain component is a naming convention to distinguish the folder from others within the user's home folder.
repositoryUrl: is the URL to the full Git repository; Shipit uses this URL to ensure the project files are in sync prior to deployment.
keepReleases: is the number of releases to keep on the remote server. A release is a date-stamped folder containing your application's files at the time of that release. It can be useful for a rollback of a deployment.
shared: is configuration that corresponds with keepReleases, allowing directories to be shared between releases. In this instance, there is a single node_modules folder shared by all releases.
production: represents a remote server to deploy your application to. In this instance, you have a single server (app) that you have named production, with the servers: configuration matching your SSH user and public ip address. The production name corresponds with the Shipit deploy command used at the end of this tutorial (npx shipit server name deploy, or in your case npx shipit production deploy).
Further information on the Shipit deploy configuration object can be found in the Shipit Github repository.
Before continuing to update your shipitfile.js, let's review the following example code snippet to understand Shipit tasks:
Example event listener
shipit.on('deploy', () => {
shipit.start('say-hello');
});
shipit.blTask('say-hello', async () => {
shipit.local('echo "hello from your local computer"')
});
This is an example task that uses the shipit.on method to subscribe to the deploy event. This task waits for the deploy event to be emitted by the Shipit lifecycle; when the event is received, the task executes the shipit.start method, which tells Shipit to start the say-hello task.
The shipit.on method takes two parameters: the name of the event to listen for, and the callback function to execute when the event is received.
After the shipit.on method declaration, the task is defined with the shipit.blTask method. This creates a new Shipit task that blocks other tasks during its execution (it is a synchronous task). The shipit.blTask method also takes two parameters: the name of the task it defines, and a callback function to execute when the task is triggered by shipit.start.
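As a loose analogy (this models the idea, not Shipit's internals), a blocking task behaves like an awaited async function: each one finishes before the next starts.

```javascript
// Analogy only: blocking tasks run to completion in sequence,
// like promises awaited one after another.
const order = [];

const install = async () => { order.push('npm-install'); };
const copyConfig = async () => { order.push('copy-config'); };

async function runInSequence(tasks) {
  for (const task of tasks) {
    await task(); // each "blocking" task finishes before the next begins
  }
  return order.join(' -> ');
}

// Prints: npm-install -> copy-config
runInSequence([install, copyConfig]).then(result => console.log(result));
```

A non-blocking task, by contrast, would be started without awaiting it, allowing other tasks to run concurrently.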
Within this example task's callback (say-hello), the shipit.local method executes a command on the local machine. The local command echoes "hello from your local computer" to the terminal output.
Si quisiera ejecutar un comando en el servidor remoto, usaría el método shipit.remote
. Los dos métodos, shipit.local
y shipit.remote
, proporcionan una API para emitir comandos de forma local o remota como parte de una implementación.
Now update your shipitfile.js to include event listeners that subscribe to the Shipit lifecycle with shipit.on. Add the event listeners to your shipitfile.js, inserting them after the comment placeholder from the initial configuration, // Our tasks will go here:
. . .
shipit.on('updated', () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', () => {
shipit.start('pm2-server');
});
These two methods listen for the updated and published events that are emitted as part of the Shipit deployment lifecycle. When each event is received, it starts tasks using the shipit.start method, just like the example task did.
Now that you have scheduled the listeners, add the corresponding tasks. Add the following task to your shipitfile.js, inserting it after your event listeners:
. . .
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
fs.writeFileSync('ecosystem.config.js', ecosystem);
console.log('File created successfully.');
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
First you declare a task called copy-config. This task creates a local file called ecosystem.config.js and then copies that file to your remote app server. PM2 uses this file to manage your Node.js application. It provides PM2 with the file path information it needs to ensure that it is running your most recently deployed files. Later in the build process, you will create a task that runs PM2 with ecosystem.config.js as its configuration.
If your application needs environment variables (like a database connection string), you can declare them either locally under env: or on the remote server under env_production:, in the same way that you set the NODE_ENV variable in these objects.
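To illustrate how the env: and env_production: blocks take effect, here is a small standalone sketch of how an application typically reads NODE_ENV at runtime. PM2 injects the variables from the matching block into process.env before your script starts; the helper below only models that lookup:

```javascript
// PM2 copies the variables from env: (the default) or env_production:
// (when started with --env production) into process.env.
// An application then usually reads them with a fallback:
function currentEnv(env) {
  return env.NODE_ENV || 'development';
}

console.log(currentEnv({}));                         // development
console.log(currentEnv({ NODE_ENV: 'production' })); // production
```

In your real application you would call currentEnv(process.env); the object arguments here stand in for the two ecosystem blocks.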
Add the following task to your shipitfile.js after the copy-config task:
. . .
shipit.blTask('npm-install', async () => {
await shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
Next, you declare a task called npm-install. This task uses a remote bash terminal (via shipit.remote) to install the application's dependencies (its npm packages).
Add the last task to your shipitfile.js after the npm-install task:
. . .
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
Finally, you declare a task called pm2-server. This task also uses a remote bash terminal, first to stop PM2 from managing your previous deployment via the delete command, and then to start a new instance of your Node.js server by providing the ecosystem.config.js file as a variable. You also let PM2 know that it should use the environment variables from the production block in its initial configuration, and you ask PM2 to watch the application, restarting it if it crashes.
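One detail worth noting in this task: pm2 delete -s ${appName} || : relies on a common shell idiom. If the delete fails (for example, on the very first deploy, when no hello process exists yet to be deleted), || : falls through to the no-op builtin :, so the task does not abort. A quick standalone demonstration:

```shell
# A failing command normally reports a non-zero exit status...
false; echo "without the idiom: $?"

# ...but '|| :' falls through to the no-op builtin ':', which succeeds,
# so the overall exit status is 0 and a deployment task would continue.
false || :; echo "with the idiom: $?"
```

The same effect is often written as `|| true`; `:` is simply the shorter POSIX spelling.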
The completed shipitfile.js file:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/deployer/example.com',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'deployer@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
shipit.on('updated', async () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', async () => {
shipit.start('pm2-server');
});
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
fs.writeFileSync('ecosystem.config.js', ecosystem);
console.log('File created successfully.');
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
shipit.blTask('npm-install', async () => {
await shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
};
Save and close the file when you're ready.
With your shipitfile.js configured and your event listeners and associated tasks finalized, you can move on to deploying to the app server.
In this step, you will deploy your application remotely and verify that the deployment made your application available to the internet.
Because Shipit clones the project files from the remote Git repository, you need to push your local Node.js application files from your local machine to GitHub. Navigate to the application directory of your Node.js project (where your hello.js and shipitfile.js are located) and run the following command:
- git status
The git status command displays the state of the working directory and the staging area. It lets you see which changes have been staged, which haven't, and which files aren't being tracked by Git. Your files are currently untracked and appear red in the output:
OutputOn branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
hello.js
package-lock.json
package.json
shipitfile.js
nothing added to commit but untracked files present (use "git add" to track)
You can add these files to your repository with the following command:
- git add --all
This command produces no output, but if you were to run git status again, the files would appear green with a note that there are changes to be committed.
You can create a commit by running the following command:
- git commit -m "Our first commit"
The output of this command provides some Git-specific information about the files.
Output[master c64ea03] Our first commit
4 files changed, 1948 insertions(+)
create mode 100644 hello.js
create mode 100644 package-lock.json
create mode 100644 package.json
create mode 100644 shipitfile.js
All that's left now is to push your commit to the remote repository so Shipit can clone it to your app server during deployment. Run the following command:
- git push origin master
The output includes information about the synchronization with the remote repository:
OutputEnumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 15.27 KiB | 7.64 MiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To github.com:Asciant/hello-world.git
e274312..c64ea03 master -> master
To deploy your application, run the following command:
- npx shipit production deploy
The output of this command (which is too long to include in full) provides detail about the tasks being executed and the result of each specific function. The output that follows for the pm2-server task shows that the Node.js application has been started:
OutputRunning 'deploy:init' task...
Finished 'deploy:init' after 432 μs
. . .
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4177 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.27 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.81 s
Running 'deploy:finish' task...
Finished 'deploy:finish' after 222 μs
Finished 'deploy' [ deploy:init, deploy:fetch, deploy:update, deploy:publish, deploy:clean, deploy:finish ]
To view your application as a user would, you can enter your website URL, your-domain, in your browser to reach your web server. This will serve the Node.js application, via the reverse proxy, from the app server where your files were deployed.
You will see a Hello World greeting.
Note: After the first deployment, your Git repository will be tracking a newly created file named ecosystem.config.js. Because this file is rebuilt on every deploy and may contain compiled application secrets, it should be added to the .gitignore file in the application's root directory on your local machine before your next git commit:
. . .
# ecosystem.config
ecosystem.config.js
You have deployed your Node.js application to the app server, which now references your new deployment. With everything up and running, you can move on to monitoring your application processes.
PM2 is a great tool for managing your remote processes, but it also provides features for monitoring the performance of these application processes.
Connect to your remote app server via SSH with this command:
- ssh deployer@your_app_server_ip
To obtain specific information about your PM2-managed processes, run the following:
- pm2 list
You will see output similar to this:
Output┌─────────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬──────┬───────────┬──────────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├─────────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼──────┼───────────┼──────────┼──────────┤
│ hello │ 0 │ 0.0.1 │ fork │ 3212 │ online │ 0 │ 62m │ 0.3% │ 45.2 MB │ deployer │ enabled │
└─────────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴──────┴───────────┴──────────┴──────────┘
You will see a summary of the information PM2 has collected. To see detailed information, you can run:
- pm2 show hello
The output expands on the summary information provided by the pm2 list command. It also provides information on a number of ancillary commands and the locations of log files:
Output Describing process with id 0 - name hello
┌───────────────────┬─────────────────────────────────────────────────────────────┐
│ status │ online │
│ name │ hello │
│ version │ 1.0.0 │
│ restarts │ 0 │
│ uptime │ 82s │
│ script path │ /home/deployer/example.com/releases/20190531213027/hello.js │
│ script args │ N/A │
│ error log path │ /home/deployer/.pm2/logs/hello-error.log │
│ out log path │ /home/deployer/.pm2/logs/hello-out.log │
│ pid path │ /home/deployer/.pm2/pids/hello-0.pid │
│ interpreter │ node │
│ interpreter args │ N/A │
│ script id │ 0 │
│ exec cwd │ /home/deployer │
│ exec mode │ fork_mode │
│ node.js version │ 4.2.3 │
│ node env │ production │
│ watch & reload │ ✔ │
│ unstable restarts │ 0 │
│ created at │ 2019-05-31T21:30:48.334Z │
└───────────────────┴─────────────────────────────────────────────────────────────┘
Revision control metadata
┌──────────────────┬────────────────────────────────────────────────────┐
│ revision control │ git │
│ remote url │ N/A │
│ repository root │ /home/deployer/example.com/releases/20190531213027 │
│ last update │ 2019-05-31T21:30:48.559Z │
│ revision │ 62fba7c8c61c7769022484d0bfa46e756fac8099 │
│ comment │ Our first commit │
│ branch │ master │
└──────────────────┴────────────────────────────────────────────────────┘
Divergent env variables from local env
┌───────────────────────────┬───────────────────────────────────────┐
│ XDG_SESSION_ID │ 15 │
│ HOSTNAME │ N/A │
│ SELINUX_ROLE_REQUESTED │ │
│ TERM │ N/A │
│ HISTSIZE │ N/A │
│ SSH_CLIENT │ 44.222.77.111 58545 22 │
│ SELINUX_USE_CURRENT_RANGE │ │
│ SSH_TTY │ N/A │
│ LS_COLORS │ N/A │
│ MAIL │ /var/mail/deployer │
│ PATH │ /usr/local/bin:/usr/bin │
│ SELINUX_LEVEL_REQUESTED │ │
│ HISTCONTROL │ N/A │
│ SSH_CONNECTION │ 44.222.77.111 58545 209.97.167.252 22 │
└───────────────────────────┴───────────────────────────────────────┘
. . .
PM2 also provides an in-terminal monitoring tool, which you can access with:
- pm2 monit
The result of this command is an interactive dashboard where pm2 provides real-time process information, logs, metrics, and metadata. This dashboard can help with monitoring resources and error logs:
Output┌─ Process list ────────────────┐┌─ Global Logs ─────────────────────────────────────────────────────────────┐
│[ 0] hello Mem: 22 MB ││ │
│ ││ │
│ ││ │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
┌─ Custom metrics (http://bit.l─┐┌─ Metadata ────────────────────────────────────────────────────────────────┐
│ Heap Size 10.73 ││ App Name hello │
│ Heap Usage 66.14 ││ Version N/A │
│ Used Heap Size 7.10 ││ Restarts 0 │
│ Active requests 0 ││ Uptime 55s │
│ Active handles 4 ││ Script path /home/asciant/hello.js │
│ Event Loop Latency 0.70 ││ Script args N/A │
│ Event Loop Latency p95 ││ Interpreter node │
│ ││ Interpreter args N/A │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
Now that you understand how to monitor your processes with PM2, you can look at how Shipit can help you roll back to an earlier working deployment.
End your ssh session on your app server by running exit.
Deployments occasionally expose unforeseen bugs or issues that can cause your site to fail. The developers and maintainers of Shipit have anticipated this and provided you with the ability to roll back to the previous (working) deployment of your application.
To ensure your PM2 configuration persists, add another event listener to shipitfile.js for the rollback event:
. . .
shipit.on('rollback', () => {
shipit.start('npm-install', 'copy-config');
});
You add a listener to the rollback event to run your npm-install and copy-config tasks. This is necessary because, unlike the published event, the updated event is not run through the Shipit lifecycle when rolling back a deployment. Adding this listener ensures that your PM2 process manager points to your most recent deployment, even in the event of a rollback.
This process is similar to deploying, with a minor change in command. To try rolling back to a previous deployment, you can run the following:
- npx shipit production rollback
Like the deploy command, rollback provides details about the rollback process and the tasks being executed:
OutputRunning 'rollback:init' task...
Get current release dirname.
Running "if [ -h /home/deployer/example.com/current ]; then readlink /home/deployer/example.com/current; fi" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com releases/20190531213719
Current release dirname : 20190531213719.
Getting dist releases.
Running "ls -r1 /home/deployer/example.com/releases" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com 20190531213719
@centos-ap-app.asciant.com 20190531213519
@centos-ap-app.asciant.com 20190531213027
Dist releases : ["20190531213719","20190531213519","20190531213027"].
Will rollback to 20190531213519.
Finished 'rollback:init' after 3.96 s
Running 'deploy:publish' task...
Publishing release "/home/deployer/example.com/releases/20190531213519"
Running "cd /home/deployer/example.com && if [ -d current ] && [ ! -L current ]; then echo "ERR: could not make symlink"; else ln -nfs releases/20190531213519 current_tmp && mv -fT current_tmp current; fi" on host "centos-ap-app.asciant.com".
Release published.
Finished 'deploy:publish' after 1.8 s
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4289 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.55 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.82 s
Running 'rollback:finish' task...
Finished 'rollback:finish' after 615 μs
Finished 'rollback' [ rollback:init, deploy:publish, deploy:clean, rollback:finish ]
You configured Shipit to keep 5 releases through the keepReleases: 5 setting in shipitfile.js. Shipit tracks these releases internally to ensure that it can roll back when required. Shipit also provides a handy way to identify the releases by creating a directory named as a timestamp (YYYYMMDDHHmmss; for example, /home/deployer/your-domain/releases/20190420210548).
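For illustration only, a release name in that YYYYMMDDHHmmss shape can be produced from a JavaScript Date like this (a sketch of the naming scheme, not Shipit's actual implementation):

```javascript
// Format a Date as YYYYMMDDHHmmss, the shape Shipit uses for release folders.
function releaseName(date) {
  const pad = n => String(n).padStart(2, '0');
  return (
    date.getUTCFullYear() +
    pad(date.getUTCMonth() + 1) +
    pad(date.getUTCDate()) +
    pad(date.getUTCHours()) +
    pad(date.getUTCMinutes()) +
    pad(date.getUTCSeconds())
  );
}

console.log(releaseName(new Date(Date.UTC(2019, 3, 20, 21, 5, 48)))); // 20190420210548
```

Because the components run from coarsest (year) to finest (second), plain lexicographic sorting of the directory names also sorts the releases chronologically, which is what makes cleanup commands like ls -rd ... | head -n 5 work.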
If you wanted to customize the rollback process further, you can listen for events specific to the rollback operation. You can then use these events to run tasks that complement your rollback. You can refer to the event list provided in the breakdown of the Shipit lifecycle and configure the tasks and listeners within your shipitfile.js.
The ability to roll back means that you can always serve a functioning version of your application to your users, even if a deployment introduces unexpected bugs or issues.
Throughout this tutorial, you configured a workflow that lets you create a highly customizable alternative to Platform as a Service, all from a couple of servers. This workflow allows for custom deployments and configuration, process monitoring with PM2, and the option to scale and add services, servers, or environments to the deployment when required.
If you are interested in continuing to develop your Node.js skills, check out DigitalOcean's Node.js content and the How To Code in Node.js series.
Shipit is a universal automation and deployment tool for Node.js developers. It features a task flow based on the popular package Orchestrator, login and interactive SSH commands through OpenSSH, and an extensible API. Developers can use Shipit to automate build and deployment workflows for a wide range of Node.js applications.
Shipit workflows allow developers not only to configure tasks, but also to specify the order in which they are executed, whether they run synchronously or asynchronously, and the environment in which they run.
In this tutorial you will install and configure Shipit to deploy a Node.js application from your local development environment to your production environment. You will use Shipit to deploy your application and configure the remote server by:
transferring your Node.js application's files (using rsync, git, and ssh).
Before you begin this tutorial, you will need the following:
rsync and git installed on your local development machine. Instructions for installing git on Linux distributions can be found in the How To Install Git tutorial.
An account with a git hosting provider. This tutorial will use GitHub.
Note: Windows users will need to install the Windows Subsystem for Linux to execute the commands in this guide.
Shipit requires a Git repository to synchronize between the local development machine and the remote server. In this step, you'll create a remote repository on Github.com. While each provider is slightly different, the commands are somewhat similar.
To create a repository, open Github.com in your web browser and log in. There is a + symbol in the upper-right corner of every page. Click +, then click New repository.
Choose a short, memorable name for your repository, for example, hello-world. Whatever name you choose here will be replicated as the project folder that you'll work from on your local machine.
Optionally, add a description of your repository.
Set your repository's visibility to either public or private, based on your preference.
Make sure the repository is initialized with a .gitignore, and select Node from the Add .gitignore dropdown list. This step is important to avoid having unnecessary files (like the node_modules folder) added to your repository.
Click the Create repository button.
The repository now needs to be cloned from Github.com to your local machine.
Open your terminal and navigate to the location where you want to store all of your Node.js project files. This process will create a sub-folder in the current directory. To clone the repository to your local machine, run the following command:
- git clone https://github.com/your-github-username/your-github-repository-name.git
You will need to replace your-github-username and your-github-repository-name with your own GitHub username and the repository name you chose earlier.
Note: If you have enabled two-factor authentication (2FA) on Github.com, you must use a personal access token or SSH key instead of your password when accessing GitHub on the command line. The GitHub Help page on 2FA provides further information.
You'll see output similar to:
OutputCloning into 'your-github-repository-name'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Unpacking objects: 100% (3/3), done.
Run the following command to move into the repository:
- cd your-github-repository-name
Inside the repository there is a single file and a single folder, both of which are used by Git to manage the repository. You can verify this with:
- ls -la
You'll see output similar to:
Outputtotal 8
0 drwxr-xr-x 4 asciant staff 128 22 Apr 07:16 .
0 drwxr-xr-x 5 asciant staff 160 22 Apr 07:16 ..
0 drwxr-xr-x 13 asciant staff 416 22 Apr 07:16 .git
8 -rw-r--r-- 1 asciant staff 914 22 Apr 07:16 .gitignore
With a working git repository configured, you can now create the shipitfile.js file that will manage your deployment process.
In this step, you'll create a sample Node.js project and then add the Shipit packages. This tutorial uses an example app: a Node.js web server that accepts HTTP requests and responds with Hello World in plain text. To create the application, run the following command:
- nano hello.js
Add the following example application code to hello.js (updating the APP_PRIVATE_IP_ADDRESS variable to your app server's private network IP address):
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(8080, 'APP_PRIVATE_IP_ADDRESS');
console.log('Server running at http://APP_PRIVATE_IP_ADDRESS:8080/');
Create the package.json file for your application:
- npm init -y
This command creates a package.json file, which you'll use to configure your Node.js application. In the next step, you'll add dependencies to this file with the npm command-line interface.
OutputWrote to ~/hello-world/package.json:
{
"name": "hello-world",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
Next, install the required npm packages with the following command:
- npm install --save-dev shipit-cli shipit-deploy shipit-shared
You use the --save-dev flag here because the Shipit packages are only required on your local machine. You'll see output similar to:
Output+ shipit-shared@4.4.2
+ shipit-cli@4.2.0
+ shipit-deploy@4.1.4
updated 4 packages and audited 21356 packages in 11.671s
found 62 low severity vulnerabilities run `npm audit fix` to fix them, or `npm audit` for details
This command also added the three packages to your package.json file as development dependencies:
. . .
"devDependencies": {
"shipit-cli": "^4.2.0",
"shipit-deploy": "^4.1.4",
"shipit-shared": "^4.4.2"
},
. . .
With your local environment configured, you can now move on to preparing the remote app server for Shipit-based deployments.
In this step, you'll use ssh to connect to your app server and install the remote dependency rsync. Rsync is a utility for efficiently transferring and synchronizing files between local computer drives and across networked computers by comparing the modification times and sizes of files.
Shipit uses rsync to transfer and synchronize files between your local computer and the remote app server. You won't issue any rsync commands directly; Shipit handles it for you.
Note: How To Set Up a Node.js Application for Production on CentOS 7 left you with two servers, app and web. These commands should be executed on the app server only.
Connect to your remote app server via ssh:
- ssh deployer@your_app_server_ip
Install rsync on the server by running the following command:
- sudo yum install rsync
Confirm the installation with:
- rsync --version
You'll see a line similar to the following in the command's output:
Outputrsync version 3.1.2 protocol version 31
. . .
You can end your ssh session by typing exit.
With rsync installed and available on the command line, you can move on to deployment tasks and their relationship with events.
Events and tasks are key components of Shipit deployments, and it is important to understand how they complement deploying your application. The events triggered by Shipit represent specific points in the deployment lifecycle. Your tasks will execute in response to these events, based on the sequence of the Shipit lifecycle.
A common example of where this task/event system is useful in a Node.js application is the installation of the app's dependencies (node_modules) on the remote server. Later in this step, you'll have Shipit listen for the updated event (which occurs after the application's files are transferred) and run a task to install the application's dependencies (npm install) on the remote server.
To listen for events and execute tasks, Shipit needs a configuration file that holds information about your remote server (the app server) and registers event listeners and the commands to be executed by these tasks. This file lives on your local development machine, inside your Node.js application's directory.
To get started, you'll create this file and add the information about your remote server, the event listeners you want to subscribe to, and some definitions of your tasks. Create shipitfile.js in your application's root directory on your local machine by running the following command:
- nano shipitfile.js
With the file created, it needs to be populated with the initial environment information that Shipit requires. This is primarily the location of your remote Git repository and, importantly, your app server's public IP address and SSH user account.
Add this initial configuration, updating the highlighted lines to match your environment:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/sammy/your-domain',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'sammy@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
};
The variables in the shipit.initConfig method provide Shipit with configuration information for your deployment. These represent the following to Shipit:
deployTo: the directory where Shipit will deploy your application's code to on the remote server. Here you use the /home/ folder for your non-root user with sudo privileges (/home/sammy), as it is secure and will avoid permission issues. The /your-domain component helps distinguish this folder from others in the user's home directory.
repositoryUrl: the URL to the full Git repository. Shipit uses this URL to ensure the project files are in sync prior to deployment.
keepReleases: the number of releases to keep on the remote server. A release is a date-stamped folder containing your application's files at the time of that release. These can be useful for rolling back a deployment.
shared: configuration that corresponds with keepReleases, allowing directories to be shared between releases. In this instance, a single node_modules folder is shared by all releases.
production: represents a remote server to deploy your application to. In this instance, that is a single server (your app server) that you name production, with the servers: configuration matching your SSH user and public IP address. The name production corresponds with the Shipit deploy command used toward the end of this tutorial (npx shipit server name deploy, or in this case, npx shipit production deploy).
Further information on the Shipit Deploy Configuration object can be found in the Shipit GitHub repository.
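To make the keepReleases behavior concrete, the cleanup rule can be sketched in a few lines of JavaScript. This is a hypothetical model of what Shipit's deploy:clean step does on the server, not Shipit's actual code:

```javascript
// Hypothetical sketch of the keepReleases cleanup rule: given timestamped
// release directory names (YYYYMMDDHHmmss), keep only the newest `keep`
// releases and report the rest for removal. Not Shipit's actual code.
function releasesToRemove(releases, keep) {
  // Timestamped names sort chronologically as plain strings.
  const newestFirst = [...releases].sort().reverse();
  return newestFirst.slice(keep);
}

const releases = ['20190531213027', '20190531213519', '20190531213719'];
// With a hypothetical keepReleases of 2, only the oldest release is pruned.
console.log(releasesToRemove(releases, 2)); // → [ '20190531213027' ]
```

Shipit performs the equivalent cleanup remotely with a shell pipeline, as shown in the deploy:clean output later in this tutorial.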
Before continuing to update your shipitfile.js, review the following example code snippet to understand how Shipit tasks work:
Example event listener
shipit.on('deploy', () => {
shipit.start('say-hello');
});
shipit.blTask('say-hello', async () => {
shipit.local('echo "hello from your local computer"')
});
This is an example task that uses the shipit.on method to subscribe to the deploy event. This task will wait for the deploy event to be emitted by the Shipit lifecycle, and when the event is received, the task will execute the shipit.start method, which tells Shipit to start the say-hello task.
The shipit.on method takes two parameters: the name of the event to listen for and the callback function to execute when the event is received.
Under the shipit.on method declaration, the task is defined with the shipit.blTask method. This creates a new Shipit task that will block other tasks during its execution (it is a synchronous task). The shipit.blTask method also takes two parameters: the name of the task it defines and the callback function to execute when the task is triggered by shipit.start.
Within this example task's callback (say-hello), the shipit.local method executes a command on the local machine. The local command echoes "hello from your local computer" to the terminal output.
If you wanted to run a command on the remote server, you would use the shipit.remote method. The two methods, shipit.local and shipit.remote, provide an API to issue commands either locally or remotely as part of a deployment.
Update your shipitfile.js to include event listeners that subscribe to the Shipit lifecycle with shipit.on. Add the event listeners to your shipitfile.js, inserting them following the comment placeholder from the initial configuration, // Our listeners and tasks will go here:
. . .
shipit.on('updated', () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', () => {
shipit.start('pm2-server');
});
These two methods listen for the updated and published events that are emitted as part of the Shipit deployment lifecycle. When each event is received, a method will initiate tasks using the shipit.start method, just like the example task.
Now that you've scheduled the listeners, you can add the corresponding tasks. Add the following task to your shipitfile.js, inserting it after your event listeners:
. . .
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
// writeFileSync is synchronous and throws on error; it takes no callback
fs.writeFileSync('ecosystem.config.js', ecosystem);
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
First, you declare a task named copy-config. This task creates a local file called ecosystem.config.js and then copies it to your remote app server. PM2 uses this file to manage your Node.js application. It provides PM2 with the necessary file path information so it can run the most recently deployed files. Later in the build process, you'll create a task that runs PM2 with ecosystem.config.js as its configuration.
If your application needs environment variables (like a database connection string), you can declare them either locally in env: or on the remote server in env_production: in the same manner that the NODE_ENV variable is set in these objects.
Add the following task to your shipitfile.js following the copy-config task:
. . .
shipit.blTask('npm-install', async () => {
shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
Next, you declare a task named npm-install. This task uses a remote bash terminal (via shipit.remote) to install your app's dependencies (npm packages).
Add the last task to your shipitfile.js following the npm-install task:
. . .
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
Finally, you declare the pm2-server task. This task also uses a remote bash terminal to first stop PM2 from managing your previous deployment via the delete command, and then to start a new instance of your Node.js server, providing the ecosystem.config.js file as a variable. You also let PM2 know that it should use the environment variables from the production block in your initial configuration, and you ask PM2 to watch your application, restarting it if it crashes.
Here is the complete shipitfile.js file:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/deployer/example.com',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'deployer@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
shipit.on('updated', async () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', async () => {
shipit.start('pm2-server');
});
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
// writeFileSync is synchronous and throws on error; it takes no callback
fs.writeFileSync('ecosystem.config.js', ecosystem);
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
shipit.blTask('npm-install', async () => {
shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
};
Save and close the file when you are ready.
With your shipitfile.js configured, event listeners in place, and the associated tasks finalized, you can move on to deploying to your app server.
In this step, you will deploy your application remotely and test that the deployment made your application available to the internet.
Because Shipit clones your project files from the remote Git repository, you need to push your local Node.js application files from your local machine to GitHub. Navigate to your Node.js project's application directory (where your hello.js and shipitfile.js files are located) and run the following command:
- git status
The git status command displays the state of the working directory and the staging area. It lets you see which changes have been staged, which haven't, and which files aren't being tracked by Git. Your files are untracked and appear red in the output:
OutputOn branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
hello.js
package-lock.json
package.json
shipitfile.js
nothing added to commit but untracked files present (use "git add" to track)
You can add these files to your repository with the following command:
- git add --all
This command does not produce any output, although if you run git status again, the files will appear green with a note that there are changes to be committed.
To create a commit, run the following command:
- git commit -m "Our first commit"
The output of the commit command provides some Git-specific information about the files.
Output[master c64ea03] Our first commit
4 files changed, 1948 insertions(+)
create mode 100644 hello.js
create mode 100644 package-lock.json
create mode 100644 package.json
create mode 100644 shipitfile.js
All that's left to do is push your commit to the remote repository so Shipit can clone the files to your app server during deployment. Run the following command:
- git push origin master
The output includes information about the synchronization with the remote repository:
OutputEnumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 15.27 KiB | 7.64 MiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To github.com:Asciant/hello-world.git
e274312..c64ea03 master -> master
To deploy your application, run the following command:
- npx shipit production deploy
This command outputs information about the tasks as they are executed (too verbose to reproduce in full here), along with the result of each function. The output following the pm2-server task shows that the Node.js app has been launched:
OutputRunning 'deploy:init' task...
Finished 'deploy:init' after 432 μs
. . .
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4177 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.27 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.81 s
Running 'deploy:finish' task...
Finished 'deploy:finish' after 222 μs
Finished 'deploy' [ deploy:init, deploy:fetch, deploy:update, deploy:publish, deploy:clean, deploy:finish ]
To view your application as a user would, you can enter your website URL, your-domain, in your browser to access your web server. This will serve the Node.js application, via reverse proxy, from the app server where your files were deployed.
You'll see a Hello World greeting.
Note: After the first deployment, your Git repository will be tracking a newly created file named ecosystem.config.js. Because this file will be rebuilt on each deploy, and may contain compiled application secrets, it should be added to the .gitignore file in the application root directory on your local machine before your next git commit:
. . .
# ecosystem.config
ecosystem.config.js
You've deployed your Node.js application to your app server. With everything up and running, you can move on to monitoring your application processes.
PM2 is a great tool for managing your remote processes, and it also provides features for monitoring the performance of those processes.
Connect to your remote app server via SSH with the following command:
- ssh deployer@your_app_server_ip
To obtain specific information related to your PM2-managed processes, run the following:
- pm2 list
You'll see output similar to:
Output┌─────────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬──────┬───────────┬──────────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├─────────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼──────┼───────────┼──────────┼──────────┤
│ hello │ 0 │ 0.0.1 │ fork │ 3212 │ online │ 0 │ 62m │ 0.3% │ 45.2 MB │ deployer │ enabled │
└─────────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴──────┴───────────┴──────────┴──────────┘
You'll see a summary of the information PM2 has collected. To see detailed information, you can run:
- pm2 show hello
The output expands on the summary provided by the pm2 list command. It also provides information on a number of ancillary commands and the locations of log files:
Output Describing process with id 0 - name hello
┌───────────────────┬─────────────────────────────────────────────────────────────┐
│ status │ online │
│ name │ hello │
│ version │ 1.0.0 │
│ restarts │ 0 │
│ uptime │ 82s │
│ script path │ /home/deployer/example.com/releases/20190531213027/hello.js │
│ script args │ N/A │
│ error log path │ /home/deployer/.pm2/logs/hello-error.log │
│ out log path │ /home/deployer/.pm2/logs/hello-out.log │
│ pid path │ /home/deployer/.pm2/pids/hello-0.pid │
│ interpreter │ node │
│ interpreter args │ N/A │
│ script id │ 0 │
│ exec cwd │ /home/deployer │
│ exec mode │ fork_mode │
│ node.js version │ 4.2.3 │
│ node env │ production │
│ watch & reload │ ✔ │
│ unstable restarts │ 0 │
│ created at │ 2019-05-31T21:30:48.334Z │
└───────────────────┴─────────────────────────────────────────────────────────────┘
Revision control metadata
┌──────────────────┬────────────────────────────────────────────────────┐
│ revision control │ git │
│ remote url │ N/A │
│ repository root │ /home/deployer/example.com/releases/20190531213027 │
│ last update │ 2019-05-31T21:30:48.559Z │
│ revision │ 62fba7c8c61c7769022484d0bfa46e756fac8099 │
│ comment │ Our first commit │
│ branch │ master │
└──────────────────┴────────────────────────────────────────────────────┘
Divergent env variables from local env
┌───────────────────────────┬───────────────────────────────────────┐
│ XDG_SESSION_ID │ 15 │
│ HOSTNAME │ N/A │
│ SELINUX_ROLE_REQUESTED │ │
│ TERM │ N/A │
│ HISTSIZE │ N/A │
│ SSH_CLIENT │ 44.222.77.111 58545 22 │
│ SELINUX_USE_CURRENT_RANGE │ │
│ SSH_TTY │ N/A │
│ LS_COLORS │ N/A │
│ MAIL │ /var/mail/deployer │
│ PATH │ /usr/local/bin:/usr/bin │
│ SELINUX_LEVEL_REQUESTED │ │
│ HISTCONTROL │ N/A │
│ SSH_CONNECTION │ 44.222.77.111 58545 209.97.167.252 22 │
└───────────────────────────┴───────────────────────────────────────┘
. . .
PM2 also provides a terminal-based monitoring tool:
- pm2 monit
This command produces an interactive dashboard where pm2 displays real-time process information, logs, metrics, and metadata. This dashboard may assist in monitoring resources and error logs:
Output┌─ Process list ────────────────┐┌─ Global Logs ─────────────────────────────────────────────────────────────┐
│[ 0] hello Mem: 22 MB ││ │
│ ││ │
│ ││ │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
┌─ Custom metrics (http://bit.l─┐┌─ Metadata ────────────────────────────────────────────────────────────────┐
│ Heap Size 10.73 ││ App Name hello │
│ Heap Usage 66.14 ││ Version N/A │
│ Used Heap Size 7.10 ││ Restarts 0 │
│ Active requests 0 ││ Uptime 55s │
│ Active handles 4 ││ Script path /home/asciant/hello.js │
│ Event Loop Latency 0.70 ││ Script args N/A │
│ Event Loop Latency p95 ││ Interpreter node │
│ ││ Interpreter args N/A │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
Now that you have an overview of monitoring your processes with PM2, you can move on to how Shipit can assist in rolling back to a previous working deployment.
End your ssh session on your app server by running exit.
Deployments occasionally expose unforeseen bugs or issues that cause your site to fail. The developers and maintainers of Shipit have anticipated this and provide the ability for you to roll back to the previous (working) deployment of your application.
To ensure your PM2 configuration persists, add another event listener to your shipitfile.js for the rollback event:
. . .
shipit.on('rollback', () => {
shipit.start('npm-install', 'copy-config');
});
You add a listener to the rollback event to run your npm-install and copy-config tasks. This is needed because, unlike the published event, the updated event does not run within the Shipit lifecycle when a deployment is rolled back. Adding this event listener ensures your PM2 process manager points to the most recent deployment, even in the event of a rollback.
This process is similar to deploying, with only a minor change in command syntax. To try rolling back to a previous deployment, run the following:
- npx shipit production rollback
Like the deploy command, rollback provides details about the rollback process and the tasks being executed:
OutputRunning 'rollback:init' task...
Get current release dirname.
Running "if [ -h /home/deployer/example.com/current ]; then readlink /home/deployer/example.com/current; fi" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com releases/20190531213719
Current release dirname : 20190531213719.
Getting dist releases.
Running "ls -r1 /home/deployer/example.com/releases" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com 20190531213719
@centos-ap-app.asciant.com 20190531213519
@centos-ap-app.asciant.com 20190531213027
Dist releases : ["20190531213719","20190531213519","20190531213027"].
Will rollback to 20190531213519.
Finished 'rollback:init' after 3.96 s
Running 'deploy:publish' task...
Publishing release "/home/deployer/example.com/releases/20190531213519"
Running "cd /home/deployer/example.com && if [ -d current ] && [ ! -L current ]; then echo "ERR: could not make symlink"; else ln -nfs releases/20190531213519 current_tmp && mv -fT current_tmp current; fi" on host "centos-ap-app.asciant.com".
Release published.
Finished 'deploy:publish' after 1.8 s
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4289 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.55 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.82 s
Running 'rollback:finish' task...
Finished 'rollback:finish' after 615 μs
Finished 'rollback' [ rollback:init, deploy:publish, deploy:clean, rollback:finish ]
You configured Shipit to keep 5 releases through the keepReleases: 5 setting in shipitfile.js. Shipit tracks these releases internally to ensure it is able to roll back when required. Shipit also provides a handy way to identify the releases by creating a directory named as a timestamp (YYYYMMDDHHmmss, for example: /home/deployer/your-domain/releases/20190420210548).
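The timestamp format can be reproduced with a short helper. This is an illustration of the naming pattern only, not code that Shipit itself ships:

```javascript
// Derive a YYYYMMDDHHmmss release directory name from a Date, matching the
// naming pattern shown above (illustrative; not Shipit's implementation).
function releaseDirname(date) {
  const pad = n => String(n).padStart(2, '0');
  return (
    date.getUTCFullYear() +
    pad(date.getUTCMonth() + 1) + // months are zero-based in JavaScript
    pad(date.getUTCDate()) +
    pad(date.getUTCHours()) +
    pad(date.getUTCMinutes()) +
    pad(date.getUTCSeconds())
  );
}

console.log(releaseDirname(new Date(Date.UTC(2019, 3, 20, 21, 5, 48))));
// → '20190420210548'
```

Because the name encodes year, month, day, and time in descending order of significance, release directories sort chronologically as plain strings.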
If you wanted to customize the rollback process further, you can listen for events specific to the rollback operation and use them to execute tasks that complement your rollback. You can refer to the event list provided in the breakdown of the Shipit lifecycle and configure the tasks and listeners within your shipitfile.js.
The ability to roll back means that you can always serve a functioning version of your application to your users, even if a deployment introduces unexpected bugs or issues.
In this tutorial, you configured a workflow that lets you create a highly customizable alternative to Platform as a Service, all from a couple of servers. This workflow allows for custom deployment and configuration, process monitoring with PM2, and the potential to scale and add services, or additional servers or environments, to the deployment when required.
If you are interested in continuing to develop your Node.js skills, check out the DigitalOcean Node.js content as well as the How To Code in Node.js series.
Version control systems help you share and collaborate on software development projects. Git is one of the most popular version control systems available today.
This guide walks you through installing and configuring Git on an Ubuntu 18.04 server. For a more detailed version of this tutorial, with better explanations of each step, see How To Install Git on Ubuntu 18.04.
Logged in to your Ubuntu 18.04 server as a non-root user with sudo privileges, first update your default packages.
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running this command and receiving output similar to the following:
- git --version
Outputgit version 2.17.1
Now that you have Git installed, and to prevent warnings, you should configure it with your information.
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
If you need to edit this file, you can use a text editor such as nano:
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Here are links to more detailed tutorials that are related to this guide:
Shipit is a universal automation and deployment tool for Node.js developers. It features a task flow based on the popular Orchestrator package, login and interactive SSH commands through OpenSSH, and an extensible API. Developers can use Shipit to automate build and deployment workflows for a wide range of Node.js applications.
The Shipit workflow allows developers to not only configure tasks, but also to specify the order in which they are executed, whether they should run synchronously or asynchronously, and on which environment.
In this tutorial you will install and configure Shipit to deploy a Node.js application from your local development environment to your production environment. You’ll use Shipit to deploy your application and configure the remote server by:
rsync
, git
, and ssh
).Before you begin this tutorial you’ll need the following:
rsync
and git
installed.
git
on Linux distributions, follow the How To Install Git tutorial.git
service provider. This tutorial will use GitHub.Note: Windows users will need to install the Windows Subsystem for Linux to execute the commands in this guide.
Shipit requires a Git repository to synchronize between the local development machine and the remote server. In this step you’ll create a remote repository on Github.com
. While each provider is slightly different, the commands are somewhat transferable.
To create a repository, open Github.com
in your web browser and log in. You will notice that in the upper-right corner of any page there is a + symbol. Click +, and then click New repository.
Type a short, memorable name for your repository, for example, hello-world
. Note that whatever name you choose here will be replicated as the project folder that you’ll work from on your local machine.
Optionally, add a description of your repository.
Set your repository’s visibility to your preference, either public or private.
Make sure the repository is initialized with a .gitignore by selecting Node from the Add .gitignore dropdown list. This step is important to avoid unnecessary files (like the node_modules folder) being added to your repository.
Click the Create repository button.
The repository now needs to be cloned from Github.com
to your local machine.
Open your terminal and navigate to the location where you want to store all your Node.js project files. Note that this process will create a sub-folder within the current directory. To clone the repository to your local machine, run the following command:
- git clone https://github.com/your-github-username/your-github-repository-name.git
You will need to replace your-github-username
and your-github-repository-name
to reflect your Github username and the previously supplied repository name.
Note: If you have enabled two-factor authentication (2FA) on Github.com
, you must use a personal access token or SSH key instead of your password when accessing Github on the command line. The Github Help page related to 2FA provides further information.
You’ll see output similar to:
OutputCloning into 'your-github-repository-name'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Unpacking objects: 100% (3/3), done.
Move to the repository by running the following command:
- cd your-github-repository-name
Inside the repository are a single file and a folder, both of which are used by Git to manage the repository. You can verify this with:
- ls -la
You’ll see output similar to the following:
Outputtotal 8
0 drwxr-xr-x 4 asciant staff 128 22 Apr 07:16 .
0 drwxr-xr-x 5 asciant staff 160 22 Apr 07:16 ..
0 drwxr-xr-x 13 asciant staff 416 22 Apr 07:16 .git
8 -rw-r--r-- 1 asciant staff 914 22 Apr 07:16 .gitignore
Now that you have configured a working git
repository, you’ll create the shipit.js
file that manages your deployment process.
In this step, you’ll create an example Node.js project and then add the Shipit packages. This tutorial provides an example app—the Node.js web server that accepts HTTP requests and responds with Hello World
in plain text. To create the application, run the following command:
- nano hello.js
Add the following example application code to hello.js
(updating the APP_PRIVATE_IP_ADDRESS
variable to your app server’s private network IP address):
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(8080, 'APP_PRIVATE_IP_ADDRESS');
console.log('Server running at http://APP_PRIVATE_IP_ADDRESS:8080/');
Now create your package.json
file for your application:
- npm init -y
This command creates a package.json
file, which you’ll use to configure your Node.js application. In the next step, you’ll add dependencies to this file with the npm
command line interface.
OutputWrote to ~/hello-world/package.json:
{
"name": "hello-world",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC"
}
Next, install the necessary npm
packages with the following command:
- npm install --save-dev shipit-cli shipit-deploy shipit-shared
You use the --save-dev
flag here as the Shipit packages are only required on your local machine. You’ll see output similar to the following:
Output+ shipit-shared@4.4.2
+ shipit-cli@4.2.0
+ shipit-deploy@4.1.4
updated 4 packages and audited 21356 packages in 11.671s
found 62 low severity vulnerabilities
  run `npm audit fix` to fix them, or `npm audit` for details
This also added the three packages to your package.json
file as development dependencies:
. . .
"devDependencies": {
"shipit-cli": "^4.2.0",
"shipit-deploy": "^4.1.4",
"shipit-shared": "^4.4.2"
},
. . .
With your local environment configured, you can now move on to preparing the remote app server for Shipit-based deployments.
In this step, you’ll use ssh
to connect to your app server and install your remote dependency rsync
. Rsync is a utility for efficiently transferring and synchronizing files between local computer drives and across networked computers by comparing the modification times and sizes of files.
Shipit uses rsync
to transfer and synchronize files between your local computer and the remote app server. You won’t be issuing any commands to rsync
directly; Shipit will handle it for you.
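The "quick check" rsync performs can be illustrated with a small predicate: a file is considered unchanged when both its size and modification time match the copy on the other side. This is a toy model of the heuristic only, not rsync's actual delta-transfer algorithm:

```javascript
// Toy model of rsync's quick-check heuristic: a file needs transferring only
// when no remote copy exists or when its size/mtime differ. Illustrative only.
function needsTransfer(localFile, remoteFile) {
  if (!remoteFile) return true; // no copy on the other side yet
  return localFile.size !== remoteFile.size ||
         localFile.mtime !== remoteFile.mtime;
}

const local = { size: 914, mtime: 1556000000 };
console.log(needsTransfer(local, { size: 914, mtime: 1556000000 })); // → false
console.log(needsTransfer(local, { size: 914, mtime: 1556009999 })); // → true
```

When the quick check flags a file, rsync goes further and transfers only the differing portions of its contents, which is what makes repeated deployments fast.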
Note: How To Set Up a Node.js Application for Production on CentOS 7 left you with two servers: app and web. These commands should be executed on app only.
Connect to your remote app server via ssh
:
- ssh deployer@your_app_server_ip
Install rsync
on your server by running the following command:
- sudo yum install rsync
Confirm the installation with:
- rsync --version
You’ll see a similar line within the output of this command:
Outputrsync version 3.1.2 protocol version 31
. . .
You can end your ssh
session by typing exit
.
With rsync
installed and available on the command line, you can move on to deployment tasks and their relationship with events.
Both events and tasks are key components of Shipit deployments and it is important to understand how they complement the deployment of your application. The events triggered by Shipit represent specific points in the deployment lifecycle. Your tasks will execute in response to these events, based on the sequence of the Shipit lifecycle.
A common example of where this task/event system is useful in a Node.js application is the installation of the app’s dependencies (node_modules
) on the remote server. Later in this step you’ll have Shipit listen for the updated
event (which is issued after the application’s files are transferred) and run a task to install the application’s dependencies (npm install
) on the remote server.
To listen to events and execute tasks, Shipit needs a configuration file that holds information about your remote server (the app server) and registers event listeners and the commands to be executed by these tasks. This file lives on your local development computer, inside your Node.js application’s directory.
To get started, create this file, including information about your remote server, the event listeners you want to subscribe to, and some definitions of your tasks. Create shipitfile.js
within your application root directory on your local machine by running the following command:
- nano shipitfile.js
Now that you’ve created a file, it needs to be populated with the initial environment information that Shipit needs. This is primarily the location of your remote Git
repository and, importantly, your app server’s public IP address and SSH user account.
Add this initial configuration and update the highlighted lines to match your environment:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/sammy/your-domain',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'sammy@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
};
Updating the variables in your shipit.initConfig
method provides Shipit with configuration specific to your deployment. These represent the following to Shipit:
deployTo:
is the directory on the remote server where Shipit will deploy your application’s code. Here you use the /home/
folder of a non-root user with sudo
privileges (/home/sammy
), as it is secure and avoids permission issues. The /your-domain
component is a naming convention to distinguish this folder from others in the user’s home directory.
repositoryUrl:
is the URL of the full Git repository. Shipit uses this URL to ensure that the project files are in sync prior to deployment.
keepReleases:
is the number of releases to keep on the remote server. A release
is a date-stamped folder containing your application’s files at the time of release, which can be useful for rolling back a deployment.
shared:
is configuration that corresponds with keepReleases
, allowing directories to be shared
between releases. In this instance, a single node_modules
folder is shared between all releases.
production:
represents a remote server to deploy your application to. In this instance, you have a single server (the app server) that you name production
, with the servers:
configuration matching your SSH user
and public IP address
. The name production
corresponds with the Shipit deploy command used toward the end of this tutorial (npx shipit server name deploy
, or in your case npx shipit production deploy
). Further information on the Shipit Deploy Configuration object can be found in the Shipit GitHub repository.
Before continuing to update your shipitfile.js
, let’s review the following example code snippet to understand Shipit tasks:
Example event listener
shipit.on('deploy', () => {
shipit.start('say-hello');
});
shipit.blTask('say-hello', async () => {
shipit.local('echo "hello from your local computer"')
});
This is an example task that uses the shipit.on
method to subscribe to the deploy
event. This task will wait for the deploy
event to be emitted by the Shipit lifecycle, then when the event is received, the task executes the shipit.start
method that tells Shipit to start
the say-hello
task.
The shipit.on
method takes two parameters, the name of the event to listen for and the callback function to execute when the event is received.
Under the shipit.on
method declaration, the task is defined with the shipit.blTask
method. This creates a new Shipit task that will block other tasks during its execution (it is a synchronous task). The shipit.blTask
method also takes two parameters, the name of the task it is defining and a callback function to execute when the task is triggered by shipit.start
.
Within the callback function of this example task (say-hello
), the shipit.local
method executes a command on the local machine. The local command echoes "hello from your local computer"
into the terminal output.
If you wanted to execute a command on the remote server, you would use the shipit.remote
method. The two methods, shipit.local
and shipit.remote
, provide an API to issue commands either locally, or remotely as part of a deployment.
Now update the shipitfile.js
to include event listeners to subscribe to the Shipit lifecycle with shipit.on
. Add the event listeners to your shipitfile.js
, inserting them following the comment placeholder from the initial configuration // Our listeners and tasks will go here
:
. . .
shipit.on('updated', () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', () => {
shipit.start('pm2-server');
});
These two methods are listening for the updated
and the published
events that are emitted as part of the Shipit deployment lifecycle. When the event is received, they will each initiate tasks using the shipit.start
method, similarly to the example task.
Now that you’ve scheduled the listeners, you’ll add the corresponding task. Add the following task to your shipitfile.js
, inserting them after your event listeners:
. . .
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
fs.writeFileSync('ecosystem.config.js', ecosystem);
console.log('File created successfully.');
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
You first declare a task called copy-config
. This task creates a local file called ecosystem.config.js
and then copies that file to your remote app server. PM2
uses this file to manage your Node.js application. It provides the necessary file path information to PM2
to ensure that it is running your latest deployed files. Later in the build process, you’ll create a task that runs PM2
with ecosystem.config.js
as configuration.
If your application needs environment variables (like a database connection string) you can declare them either locally in env:
or on the remote server in env_production:
in the same manner that you set the NODE_ENV
variable in these objects.
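On the application side, those variables arrive through process.env. The sketch below shows how a Node.js app might read them; the DB_URL name and its fallback value are hypothetical, purely for illustration:

```javascript
// Sketch: how a Node.js app reads the variables PM2 injects from the
// env / env_production blocks of ecosystem.config.js. The DB_URL name
// and its default are hypothetical examples.
process.env.NODE_ENV = process.env.NODE_ENV || 'development';

function getConfig() {
  return {
    env: process.env.NODE_ENV,
    // Fall back to a local default when the variable is not set.
    dbUrl: process.env.DB_URL || 'postgres://localhost/dev',
  };
}

console.log(getConfig());
```

Because PM2 selects env_production when started with --env production, the same application code picks up the right values in each environment without changes.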
Add the next task to your shipitfile.js
following the copy-config
task:
. . .
shipit.blTask('npm-install', async () => {
await shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
Next, you declare a task called npm-install
. This task uses a remote bash terminal (via shipit.remote
) to install the app’s dependencies (npm
packages).
Add the last task to your shipitfile.js
following the npm-install
task:
. . .
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
Finally, you declare a task called pm2-server
. This task also uses a remote bash terminal to first stop PM2
from managing your previous deployment through the delete
command and then start a new instance of your Node.js server providing the ecosystem.config.js
file as a variable. You also let PM2
know that it should be using environment variables from the production
block in your initial configuration and you ask PM2
to watch the application, restarting it if it crashes.
The complete shipitfile.js
file:
module.exports = shipit => {
require('shipit-deploy')(shipit);
require('shipit-shared')(shipit);
const appName = 'hello';
shipit.initConfig({
default: {
deployTo: '/home/deployer/example.com',
repositoryUrl: 'https://git-provider.tld/YOUR_GIT_USERNAME/YOUR_GIT_REPO_NAME.git',
keepReleases: 5,
shared: {
overwrite: true,
dirs: ['node_modules']
}
},
production: {
servers: 'deployer@YOUR_APP_SERVER_PUBLIC_IP'
}
});
const path = require('path');
const ecosystemFilePath = path.join(
shipit.config.deployTo,
'shared',
'ecosystem.config.js'
);
// Our listeners and tasks will go here
shipit.on('updated', async () => {
shipit.start('npm-install', 'copy-config');
});
shipit.on('published', async () => {
shipit.start('pm2-server');
});
shipit.blTask('copy-config', async () => {
const fs = require('fs');
const ecosystem = `
module.exports = {
apps: [
{
name: '${appName}',
script: '${shipit.releasePath}/hello.js',
watch: true,
autorestart: true,
restart_delay: 1000,
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production'
}
}
]
};`;
fs.writeFileSync('ecosystem.config.js', ecosystem);
console.log('File created successfully.');
await shipit.copyToRemote('ecosystem.config.js', ecosystemFilePath);
});
shipit.blTask('npm-install', async () => {
await shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
});
shipit.blTask('pm2-server', async () => {
await shipit.remote(`pm2 delete -s ${appName} || :`);
await shipit.remote(
`pm2 start ${ecosystemFilePath} --env production --watch true`
);
});
};
Save and exit the file when you’re ready.
With your shipitfile.js
configuration, event listeners, and associated tasks finalized, you can move on to deploying to the app server.
In this step, you will deploy your application remotely and test that the deployment made your application available to the internet.
Because Shipit clones the project files from the remote Git repository, you need to push your local Node.js application files from your local machine to GitHub. Navigate to your Node.js project’s application directory (where your hello.js
and shipitfile.js
are located) and run the following command:
- git status
The git status
command displays the state of the working directory and the staging area. It lets you see which changes have been staged, which haven’t, and which files aren’t being tracked by Git. Your files are untracked and appear red in the output:
OutputOn branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
hello.js
package-lock.json
package.json
shipitfile.js
nothing added to commit but untracked files present (use "git add" to track)
You can add these files to your repository with the following command:
- git add --all
This command does not produce any output, although if you were to run git status
again, the files would appear green with a note that there are changes to be committed.
You can create a commit running the following command:
- git commit -m "Our first commit"
The output of this command provides some Git-specific information about the files.
Output[master c64ea03] Our first commit
4 files changed, 1948 insertions(+)
create mode 100644 hello.js
create mode 100644 package-lock.json
create mode 100644 package.json
create mode 100644 shipitfile.js
All that is left now is to push your commit to the remote repository for Shipit to clone to your app server during deployment. Run the following command:
- git push origin master
The output includes information about the synchronization with the remote repository:
OutputEnumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 15.27 KiB | 7.64 MiB/s, done.
Total 6 (delta 0), reused 0 (delta 0)
To github.com:Asciant/hello-world.git
e274312..c64ea03 master -> master
To deploy your application, run the following command:
- npx shipit production deploy
The output of this command (which is too large to include in its entirety) provides detail on the tasks being executed and the result of the specific function. The output following for the pm2-server
task shows the Node.js app has been launched:
OutputRunning 'deploy:init' task...
Finished 'deploy:init' after 432 μs
. . .
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4177 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.27 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.81 s
Running 'deploy:finish' task...
Finished 'deploy:finish' after 222 μs
Finished 'deploy' [ deploy:init, deploy:fetch, deploy:update, deploy:publish, deploy:clean, deploy:finish ]
To view your application as a user would, you can enter your website URL your-domain
in your browser to access your web server. This will serve the Node.js application, via reverse proxy, on the app server where your files were deployed.
You’ll see a Hello World greeting.
Note: After the first deployment, your Git repository will be tracking a newly created file named ecosystem.config.js
. As this file will be rebuilt on each deploy and may contain compiled application secrets, it should be added to the .gitignore
file in the application root directory on your local machine prior to your next git
commit.
. . .
# ecosystem.config
ecosystem.config.js
You’ve deployed your Node.js application to your app server, and your web server now serves the new deployment. With everything up and running, you can move on to monitoring your application processes.
PM2 is a great tool for managing your remote processes, but it also provides features to monitor the performance of these application processes.
Connect to your remote app server via SSH with this command:
- ssh deployer@your_app_server_ip
To obtain specific information related to your PM2 managed processes, run the following:
- pm2 list
You’ll see output similar to:
Output┌─────────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬──────┬───────────┬──────────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├─────────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼──────┼───────────┼──────────┼──────────┤
│ hello │ 0 │ 0.0.1 │ fork │ 3212 │ online │ 0 │ 62m │ 0.3% │ 45.2 MB │ deployer │ enabled │
└─────────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴──────┴───────────┴──────────┴──────────┘
You’ll see a summary of the information PM2 has collected. To see detailed information, you can run:
- pm2 show hello
The output expands on the summary information provided by the pm2 list
command. It also provides information on a number of ancillary commands and provides log file locations:
Output Describing process with id 0 - name hello
┌───────────────────┬─────────────────────────────────────────────────────────────┐
│ status │ online │
│ name │ hello │
│ version │ 1.0.0 │
│ restarts │ 0 │
│ uptime │ 82s │
│ script path │ /home/deployer/example.com/releases/20190531213027/hello.js │
│ script args │ N/A │
│ error log path │ /home/deployer/.pm2/logs/hello-error.log │
│ out log path │ /home/deployer/.pm2/logs/hello-out.log │
│ pid path │ /home/deployer/.pm2/pids/hello-0.pid │
│ interpreter │ node │
│ interpreter args │ N/A │
│ script id │ 0 │
│ exec cwd │ /home/deployer │
│ exec mode │ fork_mode │
│ node.js version │ 4.2.3 │
│ node env │ production │
│ watch & reload │ ✔ │
│ unstable restarts │ 0 │
│ created at │ 2019-05-31T21:30:48.334Z │
└───────────────────┴─────────────────────────────────────────────────────────────┘
Revision control metadata
┌──────────────────┬────────────────────────────────────────────────────┐
│ revision control │ git │
│ remote url │ N/A │
│ repository root │ /home/deployer/example.com/releases/20190531213027 │
│ last update │ 2019-05-31T21:30:48.559Z │
│ revision │ 62fba7c8c61c7769022484d0bfa46e756fac8099 │
│ comment │ Our first commit │
│ branch │ master │
└──────────────────┴────────────────────────────────────────────────────┘
Divergent env variables from local env
┌───────────────────────────┬───────────────────────────────────────┐
│ XDG_SESSION_ID │ 15 │
│ HOSTNAME │ N/A │
│ SELINUX_ROLE_REQUESTED │ │
│ TERM │ N/A │
│ HISTSIZE │ N/A │
│ SSH_CLIENT │ 44.222.77.111 58545 22 │
│ SELINUX_USE_CURRENT_RANGE │ │
│ SSH_TTY │ N/A │
│ LS_COLORS │ N/A │
│ MAIL │ /var/mail/deployer │
│ PATH │ /usr/local/bin:/usr/bin │
│ SELINUX_LEVEL_REQUESTED │ │
│ HISTCONTROL │ N/A │
│ SSH_CONNECTION │ 44.222.77.111 58545 209.97.167.252 22 │
└───────────────────────────┴───────────────────────────────────────┘
. . .
PM2 also provides an in-terminal monitoring tool, accessible with:
- pm2 monit
The output of this command is an interactive dashboard, where pm2
provides realtime process information, logs, metrics, and metadata. This dashboard may assist in monitoring resources and error logs:
Output┌─ Process list ────────────────┐┌─ Global Logs ─────────────────────────────────────────────────────────────┐
│[ 0] hello Mem: 22 MB ││ │
│ ││ │
│ ││ │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
┌─ Custom metrics (http://bit.l─┐┌─ Metadata ────────────────────────────────────────────────────────────────┐
│ Heap Size 10.73 ││ App Name hello │
│ Heap Usage 66.14 ││ Version N/A │
│ Used Heap Size 7.10 ││ Restarts 0 │
│ Active requests 0 ││ Uptime 55s │
│ Active handles 4 ││ Script path /home/asciant/hello.js │
│ Event Loop Latency 0.70 ││ Script args N/A │
│ Event Loop Latency p95 ││ Interpreter node │
│ ││ Interpreter args N/A │
└───────────────────────────────┘└───────────────────────────────────────────────────────────────────────────┘
With an understanding of how you can monitor your processes with PM2, you can move on to how Shipit can assist in rolling back to a previous working deployment.
End your ssh
session on your app server by running exit
.
Deployments occasionally expose unforeseen bugs, or issues that cause your site to fail. The developers and maintainers of Shipit have anticipated this and have provided the ability for you to roll back to the previous (working) deployment of your application.
To ensure your PM2
configuration persists, add another event listener to shipitfile.js
on the rollback
event:
. . .
shipit.on('rollback', () => {
shipit.start('npm-install', 'copy-config');
});
You add a listener to the rollback
event to run your npm-install
and copy-config
tasks. This is needed because unlike the published
event, the updated
event is not run by the Shipit lifecycle when rolling back a deployment. Adding this event listener ensures your PM2
process manager points to the most recent deployment, even in the event of a rollback.
This process is similar to deploying, with a minor change in command. To try rolling back to a previous deployment, you can execute the following:
- npx shipit production rollback
Like the deploy
command, rollback
provides details on the roll back process and the tasks being executed:
OutputRunning 'rollback:init' task...
Get current release dirname.
Running "if [ -h /home/deployer/example.com/current ]; then readlink /home/deployer/example.com/current; fi" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com releases/20190531213719
Current release dirname : 20190531213719.
Getting dist releases.
Running "ls -r1 /home/deployer/example.com/releases" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com 20190531213719
@centos-ap-app.asciant.com 20190531213519
@centos-ap-app.asciant.com 20190531213027
Dist releases : ["20190531213719","20190531213519","20190531213027"].
Will rollback to 20190531213519.
Finished 'rollback:init' after 3.96 s
Running 'deploy:publish' task...
Publishing release "/home/deployer/example.com/releases/20190531213519"
Running "cd /home/deployer/example.com && if [ -d current ] && [ ! -L current ]; then echo "ERR: could not make symlink"; else ln -nfs releases/20190531213519 current_tmp && mv -fT current_tmp current; fi" on host "centos-ap-app.asciant.com".
Release published.
Finished 'deploy:publish' after 1.8 s
Running 'pm2-server' task...
Running "pm2 delete -s hello || :" on host "centos-ap-app.asciant.com".
Running "pm2 start /home/deployer/example.com/shared/ecosystem.config.js --env production --watch true" on host "centos-ap-app.asciant.com".
@centos-ap-app.asciant.com [PM2][WARN] Node 4 is deprecated, please upgrade to use pm2 to have all features
@centos-ap-app.asciant.com [PM2][WARN] Applications hello not running, starting...
@centos-ap-app.asciant.com [PM2] App [hello] launched (1 instances)
@centos-ap-app.asciant.com ┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────────┬──────────┐
@centos-ap-app.asciant.com │ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
@centos-ap-app.asciant.com ├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────────┼──────────┤
@centos-ap-app.asciant.com │ hello │ 0 │ 1.0.0 │ fork │ 4289 │ online │ 0 │ 0s │ 0% │ 4.5 MB │ deployer │ enabled │
@centos-ap-app.asciant.com └──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────────┴──────────┘
@centos-ap-app.asciant.com Use `pm2 show <id|name>` to get more details about an app
Finished 'pm2-server' after 5.55 s
Running 'deploy:clean' task...
Keeping "5" last releases, cleaning others
Running "(ls -rd /home/deployer/example.com/releases/*|head -n 5;ls -d /home/deployer/example.com/releases/*)|sort|uniq -u|xargs rm -rf" on host "centos-ap-app.asciant.com".
Finished 'deploy:clean' after 1.82 s
Running 'rollback:finish' task...
Finished 'rollback:finish' after 615 μs
Finished 'rollback' [ rollback:init, deploy:publish, deploy:clean, rollback:finish ]
You have configured Shipit to keep 5 releases through the keepReleases: 5
configuration in shipitfile.js
. Shipit keeps track of these releases internally to ensure it is able to roll back when required. Shipit also provides a handy way to identify the releases by creating a directory named as a timestamp (YYYYMMDDHHmmss - Example: /home/deployer/your-domain/releases/20190420210548
).
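The timestamped folder name is straightforward to reproduce. A small standalone sketch (not Shipit’s own code, just the idea) of generating a YYYYMMDDHHmmss release name:

```javascript
// Sketch: generating a YYYYMMDDHHmmss release folder name like the ones
// Shipit creates under releases/. Illustration only, not Shipit's code.
function releaseName(date) {
  const pad = n => String(n).padStart(2, '0');
  return (
    date.getFullYear().toString() +
    pad(date.getMonth() + 1) + // months are zero-indexed in JavaScript
    pad(date.getDate()) +
    pad(date.getHours()) +
    pad(date.getMinutes()) +
    pad(date.getSeconds())
  );
}

console.log(releaseName(new Date(2019, 3, 20, 21, 5, 48)));
// → 20190420210548
```

Because the names sort lexicographically in chronological order, commands like `ls -r1` can list releases newest-first, which is exactly what the rollback output above relies on.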
If you wanted to further customize the roll back process, you can listen for events specific to the roll back operation. You can then use these events to execute tasks that will complement your roll back. You can refer to the event list provided in the breakdown of the Shipit lifecycle and configure the tasks/listeners within your shipitfile.js
.
The ability to roll back means that you can always serve a functioning version of your application to your users even if a deployment introduces unexpected bugs/issues.
In this tutorial, you configured a workflow that allows you to create a highly customizable alternative to Platform as a Service, all from a couple of servers. This workflow allows for customized deployment and configuration, process monitoring with PM2, the potential to scale and add services, or additional servers or environments to the deployment when required.
If you are interested in continuing to develop your Node.js skills, check out the DigitalOcean Node.js content as well as the How To Code in Node.js series.
DNSControl is an infrastructure-as-code tool that allows you to deploy and manage your DNS (Domain Name System) zones using standard software development principles, including version control, testing, and automated deployment. DNSControl was created by Stack Exchange and is written in Go.
DNSControl eliminates many of the pitfalls of manual DNS management, as zone files are stored in a programmable format. This lets you deploy zones to multiple DNS providers simultaneously, identify syntax errors, and push your DNS configuration automatically, reducing the risk of human error. Another common use of DNSControl is to quickly migrate your DNS to a different provider, for example in the event of a DDoS attack or a system outage.
In this tutorial, you’ll install and configure DNSControl, create a basic DNS configuration, and begin deploying DNS records to a live provider. As part of this tutorial, we will use DigitalOcean as the example DNS provider. If you wish to use a different provider, the setup is very similar. When you’re finished, you’ll be able to manage and test your DNS configuration in a safe, offline environment, and then deploy it to production.
Before you begin this guide, you’ll need the following:
your-server-ipv4-address
refers to the IP address of the server hosting your website or domain. your-server-ipv6-address
refers to the IPv6 address of the server hosting your website or domain. This tutorial will use your_domain
throughout, and DigitalOcean as the service provider. Once you have everything ready, log in to your server as your non-root user to begin.
DNSControl is written in Go, so you’ll start this step by installing Go on your server and configuring your GOPATH
.
Go is available within Debian’s default software repositories, making it possible to install it using conventional package management tools.
You’ll also need to install Git, as it is required to allow Go to download and install the DNSControl software from its repository on GitHub.
Begin by updating the local package index to reflect any new upstream changes:
- sudo apt update
Then, install the golang-go
and git
packages:
- sudo apt install golang-go git
Once you have confirmed the installation, apt
will download and install Go and Git, as well as all of their dependencies.
Next, you’ll configure the path environment variables that Go requires. If you’d like to know more about this, please read the tutorial Understanding the GOPATH. Start by editing the ~/.profile
file:
- nano ~/.profile
Add the following lines to the end of your file:
...
export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"
Once you have added these lines to the end of the file, save and close it. Then reload your profile, either by logging out and logging back in, or by sourcing the file again:
- source ~/.profile
Now that you’ve installed and configured Go, you can install DNSControl.
The go get
command can be used to fetch a copy of the code, automatically compile it, and install it into your Go directory:
- go get github.com/StackExchange/dnscontrol
Once complete, check the installed version to make sure that everything is working:
- dnscontrol version
Your output will look similar to the following:
Outputdnscontrol 2.9-dev
If you see a dnscontrol: command not found
error, double-check your Go path setup.
Now that you’ve installed DNSControl, you can create a configuration directory and connect DNSControl to your DNS provider in order to allow it to make changes to your DNS records.
In this step, you’ll create the configuration directories required for DNSControl and connect it to your DNS provider so that it can begin to make live changes to your DNS records.
First, create a new directory in which to store your DNSControl configuration, and then move into it:
- mkdir ~/dnscontrol
- cd ~/dnscontrol
Note: This tutorial will focus on the initial setup of DNSControl; however, for production use it is recommended to store your DNSControl configuration in a version control system (VCS) such as Git. The advantages of this include full version control, integration with CI/CD for testing, seamlessly rolling back deployments, and so on.
If you plan to use DNSControl to write BIND zone files, you should also create the zones
directory:
- mkdir ~/dnscontrol/zones
BIND zone files are a raw, standardized method for storing DNS zones/records in plain text format. They were originally used for the BIND DNS server software, but are now widely adopted as the standard method for storing DNS zones. BIND zone files produced by DNSControl are useful if you want to import them into a custom or self-hosted DNS server, or for auditing purposes.
However, if you only want to use DNSControl to push DNS changes to a managed provider, the zones
directory will not be needed.
Next, you need to configure the creds.json
file, which is what will allow DNSControl to authenticate to your DNS provider and make changes. The format of creds.json
differs slightly depending on the DNS provider that you are using. To find the configuration for your own provider, please see the Service Providers list in the official DNSControl documentation.
Create the creds.json
file in the ~/dnscontrol
directory:
- cd ~/dnscontrol
- nano creds.json
Add the sample creds.json
configuration for your DNS provider to the file. If you’re using DigitalOcean as your DNS provider, you can use the following:
{
"digitalocean": {
"token": "your-digitalocean-oauth-token"
}
}
This file tells DNSControl which DNS providers you want it to connect to.
You'll need to provide some form of authentication for your DNS provider. This is usually an API key or an OAuth token, but some providers require extra information, as documented in the Service Providers list in the official DNSControl documentation.
Warning: This token will grant access to your DNS provider's account, so you should protect it as you would a password. Additionally, if you're using a version control system, make sure that the file containing the token is excluded (for example, using .gitignore), or otherwise securely encrypted.
If you're using DigitalOcean as your DNS provider, you can use the required OAuth token from your DigitalOcean account settings that you generated as part of the prerequisites.
If you have multiple different DNS providers, for example for multiple domain names or delegated DNS zones, you can define them all in the same creds.json file.
You've now set up the initial DNSControl configuration directories and configured creds.json so that DNSControl can authenticate to your DNS provider and make changes. Next, you'll create the configuration for your DNS zones.
In this step, you'll create an initial DNS configuration file, which will contain the DNS records for your domain name or delegated DNS zone.
dnsconfig.js is the main DNS configuration file for DNSControl. In this file, DNS zones and their corresponding records are defined using JavaScript syntax. This is known as a DSL, or Domain Specific Language. The JavaScript DSL page in the official DNSControl documentation provides further details.
To begin, create the DNS configuration file in the ~/dnscontrol directory:
- cd ~/dnscontrol
- nano dnsconfig.js
Then, add the following sample configuration to the file:
// Providers:
var REG_NONE = NewRegistrar('none', 'NONE');
var DNS_DIGITALOCEAN = NewDnsProvider('digitalocean', 'DIGITALOCEAN');
// Domains:
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
A('@', 'your-server-ipv4-address')
);
This sample file defines a domain name or DNS zone at a particular provider, which in this case is your_domain, hosted by DigitalOcean. An example A record is also defined for the zone root (@), pointing to the IPv4 address of the server that you're hosting your domain/website from.
There are three main functions that make up a basic DNSControl configuration file:
NewRegistrar(name, type, metadata): defines the domain registrar for your domain name. DNSControl can use this to make required changes, such as modifying the authoritative nameservers. If you only want to use DNSControl to manage your DNS zones, this can generally be left as NONE.
NewDnsProvider(name, type, metadata): defines a DNS service provider for your domain name or delegated zone. This is where DNSControl will push the DNS changes that you make.
D(name, registrar, modifiers): defines a domain name or delegated DNS zone for DNSControl to manage, as well as the DNS records present in the zone.
You should configure NewRegistrar(), NewDnsProvider(), and D() as appropriate using the Service Providers list in the official DNSControl documentation.
If you're using DigitalOcean as your DNS provider and only need to be able to make DNS changes (rather than also managing authoritative nameservers), the example in the preceding code block is already correct.
Once you're done, save and close the file.
In this step, you set up a DNS configuration file for DNSControl, with the relevant providers defined. Next, you'll populate the file with some useful DNS records.
Next, you can populate the DNS configuration file with useful DNS records for your website or service, using DNSControl syntax.
Unlike traditional BIND zone files, where DNS records are written in a raw, line-by-line format, DNS records within DNSControl are defined as function parameters (domain modifiers) to the D() function, as shown briefly in Step 3.
A domain modifier exists for each of the standard DNS record types, including A, AAAA, MX, TXT, NS, CAA, and so on. A full list of the available record types is available in the Domain Modifiers section of the DNSControl documentation.
Modifiers for individual records are also available (record modifiers). Currently these are mainly used for setting the TTL (time to live) of individual records. A full list of the available record modifiers is available in the Record Modifiers section of the DNSControl documentation. Record modifiers are optional, and in most basic use cases can be left out.
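To illustrate how the two kinds of modifiers interact, here is a sketch of TTL handling in a D() definition, reusing the providers defined in Step 3 (DefaultTTL and TTL are both documented in the DNSControl modifier references):

```javascript
// Sketch: TTL handling in a D() definition (providers as defined in Step 3)
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
    DefaultTTL(300),                                  // domain modifier: default TTL for every record in this zone
    A('@', 'your-server-ipv4-address'),               // inherits the 300-second default
    A('www', 'your-server-ipv4-address', TTL(60))     // record modifier: overrides the TTL for this record only
);
```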
The syntax for setting DNS records varies slightly for each record type. The following are some examples for the most common record types:
A records:
A('name', 'address', optional record modifiers)
A('@', 'your-server-ipv4-address', TTL(30))
AAAA records:
AAAA('name', 'address', optional record modifiers)
AAAA('@', 'your-server-ipv6-address') (record modifier left out, so the default TTL will be used)
CNAME records:
CNAME('name', 'target', optional record modifiers)
CNAME('subdomain1', 'example.org.') (note that a trailing full stop (.) must be included if there are any dots in the value)
MX records:
MX('name', 'priority', 'target', optional record modifiers)
MX('@', 10, 'mail.example.net') (note that a trailing full stop (.) must be included if there are any dots in the value)
TXT records:
TXT('name', 'content', optional record modifiers)
TXT('@', 'This is a TXT record.')
CAA records:
CAA('name', 'tag', 'value', optional record modifiers)
CAA('@', 'issue', 'letsencrypt.org')
To begin adding DNS records to your domain or delegated DNS zone, edit your DNS configuration file:
- nano dnsconfig.js
Next, you can begin populating the parameters of the existing D() function, using the syntax described in the previous list, as well as the Domain Modifiers section of the official DNSControl documentation. A comma (,) must be used between each record.
For reference, the following code block contains a full sample configuration for a basic, initial DNS setup:
...
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
A('@', 'your-server-ipv4-address'),
A('www', 'your-server-ipv4-address'),
A('mail', 'your-server-ipv4-address'),
AAAA('@', 'your-server-ipv6-address'),
AAAA('www', 'your-server-ipv6-address'),
AAAA('mail', 'your-server-ipv6-address'),
MX('@', 10, 'mail.your_domain.'),
TXT('@', 'v=spf1 -all'),
TXT('_dmarc', 'v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;')
);
Once you've completed your initial DNS configuration, save and close the file.
In this step, you set up the initial DNS configuration file containing your DNS records. Next, you will test the configuration and deploy it.
In this step, you will run a local syntax check of your DNS configuration, and then deploy the changes to the live DNS server/provider.
First, move into your dnscontrol directory:
- cd ~/dnscontrol
Next, use DNSControl's preview function to check the syntax of your file and output the changes that it will make (without actually making them):
- dnscontrol preview
If the syntax of your DNS configuration file is correct, DNSControl will output an overview of the changes that it will make. This should look similar to the following:
Output
******************** Domain: your_domain
----- Getting nameservers from: digitalocean
----- DNS Provider: digitalocean...8 corrections
#1: CREATE A your_domain your-server-ipv4-address ttl=300
#2: CREATE A www.your_domain your-server-ipv4-address ttl=300
#3: CREATE A mail.your_domain your-server-ipv4-address ttl=300
#4: CREATE AAAA your_domain your-server-ipv6-address ttl=300
#5: CREATE TXT _dmarc.your_domain "v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;" ttl=300
#6: CREATE AAAA www.your_domain your-server-ipv6-address ttl=300
#7: CREATE AAAA mail.your_domain your-server-ipv6-address ttl=300
#8: CREATE MX your_domain 10 mail.your_domain. ttl=300
----- Registrar: none...0 corrections
Done. 8 corrections.
If you see an error warning in your output, DNSControl will provide details on what the error is and where it is located within your file.
Warning: The next command will make live changes to your DNS records and possibly other settings. Please make sure that you are prepared for this, including taking a backup of your existing DNS configuration and ensuring that you have the means to roll back if needed.
Finally, you can push the changes out to your live DNS provider:
- dnscontrol push
You'll see output similar to the following:
Output
******************** Domain: your_domain
----- Getting nameservers from: digitalocean
----- DNS Provider: digitalocean...8 corrections
#1: CREATE TXT _dmarc.your_domain "v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;" ttl=300
SUCCESS!
#2: CREATE A your_domain your-server-ipv4-address ttl=300
SUCCESS!
#3: CREATE AAAA your_domain your-server-ipv6-address ttl=300
SUCCESS!
#4: CREATE AAAA www.your_domain your-server-ipv6-address ttl=300
SUCCESS!
#5: CREATE AAAA mail.your_domain your-server-ipv6-address ttl=300
SUCCESS!
#6: CREATE A www.your_domain your-server-ipv4-address ttl=300
SUCCESS!
#7: CREATE A mail.your_domain your-server-ipv4-address ttl=300
SUCCESS!
#8: CREATE MX your_domain 10 mail.your_domain. ttl=300
SUCCESS!
----- Registrar: none...0 corrections
Done. 8 corrections.
Now, if you check the DNS settings for your domain in the DigitalOcean control panel, you'll see the changes.
You can also check that the records were created by running a DNS query against your domain or delegated zone using dig.
If you don't have dig installed, you'll need to install the dnsutils package:
- sudo apt install dnsutils
Once you've installed dig, use it to make a DNS lookup against your domain. You'll see that the records have been updated accordingly:
- dig +short your_domain
You'll see output showing your server's IP address, confirming that the relevant DNS record from your zone was deployed using DNSControl. DNS records can take some time to propagate, so you may need to wait and run this command again.
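You can also point dig at the other record types deployed earlier, or query DigitalOcean's authoritative nameservers directly to bypass your resolver's cache while waiting for propagation. A sketch, with your_domain as a placeholder as before:

```shell
# Query specific record types from the zone you just deployed
dig +short MX your_domain
dig +short TXT _dmarc.your_domain
dig +short AAAA www.your_domain

# Ask an authoritative nameserver directly, skipping any cached answers
dig +short A your_domain @ns1.digitalocean.com
```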
In this final step, you ran a local syntax check of the DNS configuration file, deployed it to your live DNS provider, and tested that the changes were made successfully.
In this article, you set up DNSControl and deployed a DNS configuration to a live provider. Now you can manage and test your DNS configuration changes in a safe, offline environment before deploying them to production.
If you wish to explore this subject further, DNSControl is designed to integrate into your CI/CD (Continuous Integration/Continuous Delivery) pipeline, allowing you to run in-depth tests and have more control over your deployment to production. You could also look into integrating DNSControl into your infrastructure build/deployment processes, which would allow you to deploy servers and add them to DNS completely automatically.
If you wish to go further with DNSControl, the following DigitalOcean articles provide some interesting next steps to help integrate DNSControl into your change management and infrastructure deployment workflows:
DNSControl is an infrastructure-as-code tool that allows you to deploy and manage your DNS zones using standard software development principles, including version control, testing, and automated deployment. DNSControl was created by Stack Exchange and is written in Go.
Using DNSControl eliminates many of the pain points of manual DNS management, as zone files are stored in a programmable format. This allows you to deploy zones to multiple DNS providers simultaneously, identify syntax errors, and push out your DNS configuration automatically, reducing the risk of human error. Another common use of DNSControl is to quickly migrate your DNS to a different provider, for example in the event of a DDoS attack or a system outage.
In this tutorial, you'll install and configure DNSControl, create a basic DNS configuration, and begin deploying DNS records to a live provider. As part of this tutorial, we will use DigitalOcean as the example DNS provider. If you wish to use a different provider, the setup is very similar. When you're finished, you'll be able to manage and test your DNS configuration in a safe, offline environment, and then automatically deploy it to production.
To complete this guide, you will need:
your-server-ipv4-address refers to the IP address of the server that you're hosting your website or domain from. your-server-ipv6-address refers to the IPv6 address of the server that you're hosting your website or domain from. Throughout this tutorial, your_domain will be used as the example domain name, with DigitalOcean as the service provider.
Once you have all of this ready, log in to your server as your non-root user to begin.
DNSControl is written in Go, so you'll start this step by installing Go on your server and setting your GOPATH.
Go is available within Debian's default software repositories, making it possible to install it using conventional package management tools.
You'll also need to install Git, as this is required to allow Go to download and install the DNSControl software from its repository on GitHub.
Begin by updating the local package index to reflect any new upstream changes:
- sudo apt update
Then, install the golang-go and git packages:
- sudo apt install golang-go git
Once the installation is confirmed, apt will download and install Go and Git, as well as all of their required dependencies.
Next, you'll set the path environment variables required by Go. To learn more about this, you can read the Understanding the GOPATH tutorial. Begin by editing the ~/.profile file:
- nano ~/.profile
Add the following lines to the end of your file:
...
export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"
Once you've added these lines to the end of the file, save and close it. Then reload your profile by either logging out and logging back in, or by sourcing the file again:
- source ~/.profile
Now that you've installed and configured Go, you can install DNSControl.
The go get command can be used to fetch a copy of the DNSControl code, compile it automatically, and install it into your Go directory:
- go get github.com/StackExchange/dnscontrol
Once this is complete, you can check the installed version to make sure that everything is working:
- dnscontrol version
Your output should look similar to the following:
Output
dnscontrol 2.9-dev
If you see a dnscontrol: command not found error, double-check your Go path setup.
Now that you've installed DNSControl, you can create a configuration directory and connect DNSControl to your DNS provider in order to allow it to start making changes to your DNS records.
In this step, you'll create the required configuration directories for DNSControl and connect it to your DNS provider so that it can start making live changes to your DNS records.
First, create a new directory in which to store your DNSControl configuration, and then move into it:
- mkdir ~/dnscontrol
- cd ~/dnscontrol
Note: This tutorial focuses on the initial setup of DNSControl; however, for production use it is recommended to store your DNSControl configuration in a version control system (VCS) such as Git. The advantages of this include full version control, integration with CI/CD for testing, and seamless rollback deployments.
If you plan to use DNSControl to write BIND zone files, you'll also need to create the zones directory:
- mkdir ~/dnscontrol/zones
]]>DNSControl is an infrastructure-as-code tool that allows you to deploy and manage your DNS zones using the standard principles of software development, including version control, testing, and automated deployment. DNSControl was created by Stack Exchange and is written in Go.
Using DNSControl eliminates many of the pitfalls of manual DNS management, as zone files are stored in a programmable format. It lets you deploy zones to multiple DNS providers simultaneously, catch syntax errors, and push out your DNS configuration automatically, reducing the risk of human error. DNSControl is also commonly used to quickly migrate DNS to a different provider, for example in the event of a DDoS attack or a system outage.
In this tutorial, you'll install and configure DNSControl, create a basic DNS configuration, and begin deploying DNS records to a live provider. This tutorial uses DigitalOcean as the example DNS provider; if you wish to use a different provider, the setup is very similar. When you're finished, you'll be able to manage and test your DNS configuration in a safe, offline environment, and then automatically deploy it to production.
To complete this tutorial, you will need:
your-server-ipv4-address
represents the IP address of the server hosting your website or domain. your-server-ipv6-address
represents the IPv6 address of the server hosting your website or domain.your_domain
and DigitalOcean as the service provider.Once you have these ready, log in to your server as your non-root user to begin.
DNSControl is written in Go, so to start you'll need to install Go on your server and set your GOPATH
.
Go is available in Debian's default software repositories, making it possible to install it using conventional package management tools.
You will also need to install Git, as it is required to allow Go to download and install the DNSControl software from its repository on GitHub.
Begin by updating the local package index to reflect any new upstream changes:
- sudo apt update
Then install the golang-go
and git
packages:
- sudo apt install golang-go git
Once you've confirmed the installation, apt
will download and install Go and Git, as well as all of their required dependencies.
Next, you'll set up the required path environment variables for Go. If you'd like to learn more, you can read our tutorial on the GOPATH. Start by editing the ~/.profile
file:
- nano ~/.profile
Add the following lines to the end of the file:
...
export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"
Once you've added these lines to the end of the file, save and close it. Then reload your profile by either logging out and logging back in, or by sourcing the file again:
- source ~/.profile
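If you ever script this part of the setup, a hypothetical guard keeps repeated runs from appending duplicate lines. In this sketch, profile.demo stands in for ~/.profile:

```shell
# Sketch: append the Go variables to a profile file only once.
# "profile.demo" is a stand-in for ~/.profile in this example.
profile=profile.demo
touch "$profile"

append_go_vars() {
  # Do nothing if the variables are already present.
  if ! grep -q 'GOPATH' "$profile"; then
    printf '%s\n' 'export GOPATH="$HOME/go"' 'export PATH="$PATH:$GOPATH/bin"' >> "$profile"
  fi
}

append_go_vars   # first run appends the two lines
append_go_vars   # second run is a no-op
grep -c 'GOPATH' "$profile"   # prints 2
```

Running the snippet any number of times leaves exactly one copy of the two export lines in the file.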
Now that you've installed and configured Go, you can install DNSControl.
The go get
command can be used to fetch a copy of the DNSControl code, automatically compile it, and install it into your Go directory:
- go get github.com/StackExchange/dnscontrol
Once this is complete, you can check the installed version to make sure that everything is working:
- dnscontrol version
Your output will look similar to the following:
Outputdnscontrol 2.9-dev
If you see a dnscontrol: command not found
error, double-check your Go path setup.
Now that DNSControl is installed, you can create a configuration directory and connect DNSControl to your DNS provider in order to allow it to make changes to your DNS records.
In this step, you'll create the required configuration directories for DNSControl and connect it to your DNS provider so that it can begin making live changes to your DNS records.
First, create a new directory in which to store your DNSControl configuration, then move into it:
- mkdir ~/dnscontrol
- cd ~/dnscontrol
Note: This tutorial covers the initial setup of DNSControl; however, for production use it is recommended to store your DNSControl configuration in a version control system (VCS) such as Git. The advantages of this include full version control, integration with CI/CD for testing, seamless rollback of deployments, and so on.
If you plan to use DNSControl to write BIND zone files, you should also create the zones
directory:
- mkdir ~/dnscontrol/zones
BIND zone files are a raw, standardized method for storing DNS zones and records in plain text format. They were originally used for the BIND DNS server software, but are now widely adopted as the standard method for storing DNS zones. BIND zone files produced by DNSControl are very useful if you want to import them into a custom or self-hosted DNS server, as well as for auditing purposes.
If you just want to use DNSControl to push DNS changes to a managed provider, the zones
directory is not required.
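For reference, the files DNSControl writes into zones/ follow the standard BIND zone format. A simplified, hypothetical example for your_domain (using this guide's placeholder values) might look like:

```text
; Simplified sketch of a BIND zone file (placeholder values).
$TTL 300
@      IN  SOA  ns1.digitalocean.com. hostmaster.your_domain. (
                  1          ; serial
                  3600       ; refresh
                  600        ; retry
                  604800     ; expire
                  1800 )     ; negative-caching TTL
@      IN  A    your-server-ipv4-address
www    IN  A    your-server-ipv4-address
```

A real file would contain your actual record values and a serial number maintained by the tooling.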
Next, you must configure the creds.json
file, which is what will allow DNSControl to authenticate to your DNS provider and make changes. The format of creds.json
differs slightly depending on the DNS provider that you are using. Please see the Service Providers list in the official DNSControl documentation to find the configuration for your own provider.
Create the creds.json
file in the ~/dnscontrol
directory:
- cd ~/dnscontrol
- nano creds.json
Add the sample creds.json
configuration for your DNS provider to the file. If you're using DigitalOcean as your DNS provider, you can use the following:
{
"digitalocean": {
"token": "your-digitalocean-oauth-token"
}
}
This file tells DNSControl which DNS providers you want it to connect to.
You'll need to provide some form of authentication for your DNS provider. This is usually an API key or OAuth token, but some providers require extra information, as documented in the Service Providers list in the official DNSControl documentation.
Warning: This token will grant access to your DNS provider account, so you should protect it as you would a password. Also, if you're using a version control system, make sure that the file containing the token is either excluded (e.g. using .gitignore
), or securely encrypted.
If you're using DigitalOcean as your DNS provider, you can use the OAuth token from your DigitalOcean account settings that you generated as part of the prerequisites.
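Because a malformed creds.json will cause DNSControl to fail before it can authenticate, one quick sanity check is to run the file through a JSON parser before use. This sketch assumes python3 is available; the token value is a placeholder:

```shell
# Write a sample creds.json (placeholder token) and validate its syntax.
cat > creds.json <<'EOF'
{
  "digitalocean": {
    "token": "your-digitalocean-oauth-token"
  }
}
EOF
# python3 -m json.tool exits non-zero on malformed JSON.
python3 -m json.tool creds.json > /dev/null && echo "creds.json: valid JSON"
```

This catches missing commas or quotes early, with a clearer error message than a failed API call.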
If you have multiple DNS providers, for example for multiple domain names or delegated DNS zones, you can define them all in the same creds.json
file.
You've set up the initial DNSControl configuration directories and configured creds.json
so that DNSControl can authenticate to your DNS provider and make changes. Next you'll create the configuration for your DNS zones.
In this step, you'll create an initial DNS configuration file, which will contain the DNS records for your domain name or delegated DNS zone.
dnsconfig.js
is the main DNS configuration file for DNSControl. In this file, DNS zones and their corresponding records are defined using JavaScript syntax. This is known as a DSL, or Domain Specific Language. The JavaScript DSL page in the official DNSControl documentation provides further details.
To begin, create the DNS configuration file in the ~/dnscontrol
directory:
- cd ~/dnscontrol
- nano dnsconfig.js
Then add the following sample configuration to the file:
// Providers:
var REG_NONE = NewRegistrar('none', 'NONE');
var DNS_DIGITALOCEAN = NewDnsProvider('digitalocean', 'DIGITALOCEAN');
// Domains:
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
A('@', 'your-server-ipv4-address')
);
This sample file defines a domain name or DNS zone at a particular provider, which in this case is your_domain
hosted by DigitalOcean. An example A
record is also defined for the zone root (@
), pointing to the IPv4 address of the server hosting your domain or website.
There are three main functions that make up a basic DNSControl configuration file:
NewRegistrar(name, type, metadata)
: defines the domain registrar for your domain name. DNSControl can use this to make required changes, such as modifying the authoritative nameservers. If you only want to use DNSControl to manage your DNS zones, this can generally be left as NONE
.
NewDnsProvider(name, type, metadata)
: defines a DNS service provider for your domain name or delegated zone. This is where DNSControl will push the DNS changes that you make.
D(name, registrar, modifiers)
: defines a domain name or delegated DNS zone for DNSControl to manage, as well as the DNS records present in the zone.
You should configure NewRegistrar()
, NewDnsProvider()
, and D()
accordingly using the Service Providers list in the official DNSControl documentation.
If you're using DigitalOcean as your DNS provider and only need to be able to make DNS changes (rather than to the authoritative nameservers as well), the sample in the previous code block is already correct.
Once you're done, save and close the file.
In this step, you set up a DNS configuration file for DNSControl with the relevant providers defined. Next, you'll populate the file with some useful DNS records.
Now you can populate your DNS configuration file with some useful DNS records for your website or service, using the DNSControl syntax.
Unlike traditional BIND zone files, where DNS records are written in a raw, line-by-line format, DNS records within DNSControl are defined as an additional parameter to the D()
function (the domain modifier), as shown briefly in Step 3.
A domain modifier exists for each of the standard DNS record types, including A
, AAAA
, MX
, TXT
, NS
, CAA
and so on. A full list of available record types is in the Domain Modifiers section of the DNSControl documentation.
Modifiers for individual records (record modifiers) are also available. These are primarily used for setting the TTL (time to live) of individual records. A full list of available record modifiers is in the Record Modifiers section of the DNSControl documentation. Record modifiers are optional, and in most basic use cases can be left out.
The syntax for setting DNS records varies slightly for each record type. Following are some examples for the most common record types:
A
records:
A('name', 'address', optional record modifiers)
A('@', 'your-server-ipv4-address', TTL(30))
AAAA
records:
AAAA('name', 'address', optional record modifiers)
AAAA('@', 'your-server-ipv6-address')
(record modifier omitted, so the default TTL is used)CNAME
records:
CNAME('name', 'target', optional record modifiers)
CNAME('subdomain1', 'example.org.')
(note that if the value contains a dot, the trailing .
character must be present at the end)MX
records:
MX('name', 'priority', 'target', optional record modifiers)
MX('@', 10, 'mail.example.net')
(note that if the value contains a dot, the trailing .
character must be present at the end)TXT
records:
TXT('name', 'content', optional record modifiers)
TXT('@', 'This is a TXT record.')
CAA
records:
CAA('name', 'tag', 'value', optional record modifiers)
CAA('@', 'issue', 'letsencrypt.org')
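Record modifiers combine with these record types. For example, DNSControl's DefaultTTL domain modifier sets a zone-wide default that per-record TTL modifiers can override; a sketch using this guide's placeholder values:

```javascript
// Sketch: zone-wide default TTL with a per-record override.
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
    DefaultTTL(3600),                                   // default for the zone
    A('@', 'your-server-ipv4-address'),                 // inherits the 3600s default
    A('www', 'your-server-ipv4-address', TTL(300))      // overridden to 300s
);
```

This keeps most records on one sensible default while letting frequently changing records expire faster.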
To begin adding DNS records for your domain or delegated DNS zone, edit your DNS configuration file:
- nano dnsconfig.js
Next, you can begin populating the parameters for the existing D()
function, using the syntax described in the previous list, as well as the Domain Modifiers section of the official DNSControl documentation. A comma (,
) must be used between each record.
For reference, the following code block contains a full sample configuration for a basic, initial DNS setup:
...
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
A('@', 'your-server-ipv4-address'),
A('www', 'your-server-ipv4-address'),
A('mail', 'your-server-ipv4-address'),
AAAA('@', 'your-server-ipv6-address'),
AAAA('www', 'your-server-ipv6-address'),
AAAA('mail', 'your-server-ipv6-address'),
MX('@', 10, 'mail.your_domain.'),
TXT('@', 'v=spf1 -all'),
TXT('_dmarc', 'v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;')
);
Once you've completed your initial DNS configuration, save and close the file.
In this step, you set up the initial DNS configuration file containing your DNS records. Next, you will test the configuration and deploy it.
In this step, you'll run a local syntax check of your DNS configuration, and then deploy the changes to the live DNS server/provider.
First, move into your dnscontrol
directory:
- cd ~/dnscontrol
Next, use the preview
function in DNSControl to check the syntax of your file and output the changes it will make (without actually making them):
- dnscontrol preview
If the syntax of your DNS configuration file is correct, DNSControl will output an overview of the changes it will make. The output should look similar to this:
Output******************** Domain: your_domain
----- Getting nameservers from: digitalocean
----- DNS Provider: digitalocean...8 corrections
#1: CREATE A your_domain your-server-ipv4-address ttl=300
#2: CREATE A www.your_domain your-server-ipv4-address ttl=300
#3: CREATE A mail.your_domain your-server-ipv4-address ttl=300
#4: CREATE AAAA your_domain your-server-ipv6-address ttl=300
#5: CREATE TXT _dmarc.your_domain "v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;" ttl=300
#6: CREATE AAAA www.your_domain your-server-ipv6-address ttl=300
#7: CREATE AAAA mail.your_domain your-server-ipv6-address ttl=300
#8: CREATE MX your_domain 10 mail.your_domain. ttl=300
----- Registrar: none...0 corrections
Done. 8 corrections.
If you see an error warning in your output, DNSControl will provide further details on what the error is and where in your file it occurred.
Warning: The next command will make live changes to your DNS records and possibly other settings. Make sure you are prepared for this, including taking a backup of your existing DNS configuration in case you need to roll back.
Finally, you can push the changes to your live DNS provider:
- dnscontrol push
The output will look similar to this:
Output******************** Domain: your_domain
----- Getting nameservers from: digitalocean
----- DNS Provider: digitalocean...8 corrections
#1: CREATE TXT _dmarc.your_domain "v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;" ttl=300
SUCCESS!
#2: CREATE A your_domain your-server-ipv4-address ttl=300
SUCCESS!
#3: CREATE AAAA your_domain your-server-ipv6-address ttl=300
SUCCESS!
#4: CREATE AAAA www.your_domain your-server-ipv6-address ttl=300
SUCCESS!
#5: CREATE AAAA mail.your_domain your-server-ipv6-address ttl=300
SUCCESS!
#6: CREATE A www.your_domain your-server-ipv4-address ttl=300
SUCCESS!
#7: CREATE A mail.your_domain your-server-ipv4-address ttl=300
SUCCESS!
#8: CREATE MX your_domain 10 mail.your_domain. ttl=300
SUCCESS!
----- Registrar: none...0 corrections
Done. 8 corrections.
Now, if you check the DNS settings for your domain in the DigitalOcean control panel, you'll see the changes.
You can also check that the records were created by running a DNS query for your domain or delegated zone using the dig
command.
If you don't have dig
installed, you'll need to install the dnsutils
package:
- sudo apt install dnsutils
Once you've installed dig
, you can use it to run a DNS lookup for your domain. You'll see that the records have been updated accordingly:
- dig +short your_domain
The output will show the IP address and the relevant DNS record from your zone that was deployed using DNSControl. DNS records can take some time to propagate, so you may need to wait and run this command again later.
In this final step, you ran a local syntax check of the DNS configuration file, deployed it to your live DNS provider, and verified that the changes were made successfully.
In this tutorial, you set up DNSControl and deployed a DNS configuration to a live provider. Now you can manage and test your DNS configuration changes in a safe, offline environment before deploying them to production.
If you wish to explore this subject further, DNSControl is designed to be integrated into your CI/CD pipeline, allowing you to run in-depth tests and maintain greater control over your deployment to production. You could also look into integrating DNSControl into your infrastructure build/deployment processes, allowing you to deploy servers and add them to DNS completely automatically.
If you'd like to learn more about DNSControl, the following DigitalOcean articles cover some interesting next steps for integrating DNSControl into your change management and infrastructure deployment workflows:
]]>Software version control systems allow you to keep track of your software at the code level. With versioning tools, you can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Git is one of the most popular version control systems currently available. Many projects' files are maintained in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this tutorial, we'll install and configure Git on a Debian 9 server. We will cover how to install the software in two different ways, each of which has its own benefits depending on your specific needs.
To complete this tutorial, you need a non-root user with sudo
privileges on a Debian 9 server. To learn how to achieve this setup, follow our Debian 9 initial server setup guide.
With your server and user set up, you are ready to begin.
The default Debian repositories provide a fast method for installing Git. Note that the version you install via these repositories may be older than the newest version currently available. If you need the latest release, consider moving to the next section of this tutorial to learn how to install and compile Git from source.
First, use the apt package management tools to update your local package index. With the update complete, you can download and install Git:
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command:
- git --version
Outputgit version 2.11.0
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and give you control over the options you include if you wish to customize.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages.
- sudo apt update
- sudo apt install make libssl-dev libghc-zlib-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
After installing the necessary dependencies, you can go ahead and get the version of Git you want by visiting the Git project's mirror on GitHub, available at the following URL:
https://github.com/git/git
From here, be sure that you are on the master
branch. Click the Tags link and select your desired Git version. Unless you have a reason for downloading a release candidate version (marked rc), try to avoid these, as they may be unstable.
Next, on the right side of the page, click the Clone or download button, then click the Download ZIP button and copy the link address that ends in .zip
.
Back on your Debian 9 server, move into the tmp
directory to download temporary files.
- cd /tmp
From there, you can use the wget
command to download the copied zip file link. We'll give the file a new name: git.zip
.
- wget https://github.com/git/git/archive/v2.18.0.zip -O git.zip
Unzip the file that you downloaded and move into the resulting directory by typing:
- unzip git.zip
- cd git-*
Now, you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
To ensure that the install was successful, type git --version
and you should receive relevant output that specifies the currently installed version of Git.
Now that you have Git installed, if you ever want to upgrade to a newer version, you can clone the repository, and then build and install. To find the URL to use for the clone operation, navigate to the branch or tag that you want on the project's GitHub page and then copy the clone URL on the right side:
At the time of this writing, the relevant URL is:
https://github.com/git/git.git
Change into your home directory, and use git clone
on the URL you just copied:
- cd ~
- git clone https://github.com/git/git.git
This will create a new directory within your current directory where you can rebuild the package and reinstall the newer version, just as you did above. This will overwrite your older version with the new one:
- cd git
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
When this is finished, you can be sure that your version of Git is up to date.
Now that you have Git installed, you will need to configure it so that the generated commit messages contain your correct information.
This can be achieved by using the git config
command. Specifically, we need to provide our name and email address because Git embeds this information into each commit we make. We can go ahead and add this information by typing:
- git config --global user.name "Sammy"
- git config --global user.email "sammy@domain.com"
We can see all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Sammy
user.email=sammy@domain.com
...
The information you entered is stored in your Git configuration file, which you can optionally edit by hand with a text editor like this:
- nano ~/.gitconfig
[user]
name = Sammy
email = sammy@domain.com
There are many other options that you can set, but these two are essential. If you skip this step, you'll likely see warnings when you commit with Git. This makes more work for you, because you will then have to revise the commits you have made with the corrected information.
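If you did already commit with the wrong details, one way to repair the most recent commit is git commit --amend with --reset-author, which rewrites it using your corrected identity. A sketch in a throwaway repository:

```shell
# Demonstrate fixing the author of the last commit (throwaway repo).
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.name "Wrong Name"
git config user.email "wrong@example.com"
echo "hello" > file.txt
git add file.txt
git commit -qm "initial commit"             # recorded with the wrong identity
git config user.name "Sammy"
git config user.email "sammy@domain.com"
git commit --amend --no-edit --reset-author -q
git log -1 --format='%an <%ae>'             # prints: Sammy <sammy@domain.com>
```

Note that amending rewrites history, so only do this on commits you have not yet pushed.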
You should now have Git installed and ready to use on your system.
To learn more about how to use Git, check out these articles and series:
]]>Software version control systems allow you to control your software at the source level. With version control tools, you can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Git is one of the most popular version control systems currently available. Many projects' files are kept in a Git repository, and sites like GitHub, GitLab, and Bitbucket make it easy to share and collaborate on software development projects.
In this tutorial, we will install and configure Git on a Debian 9 server. We will cover how to install the software in two different ways, each with its own benefits depending on your specific needs.
To complete this tutorial, you must have a non-root user with sudo
privileges on a Debian 9 server. To learn how to set this up, follow our Debian 9 initial server setup guide.
With your server and user configured, you are ready to begin.
The default Debian repositories provide a fast method for installing Git. Note that the version you install via these repositories may be older than the newest version currently available. If you need the latest release, consider moving to the next section of this tutorial to learn how to install and compile Git from source.
First, use the apt package management tools to update your local package index. With the update complete, you can download and install Git:
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command:
- git --version
Outputgit version 2.11.0
Once Git is successfully installed, you can move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and give you some control over the options you include if you wish to customize.
Before you begin, you need to install the software that Git depends on. It is available in the default repositories, so we can update our local package index and then install the packages.
- sudo apt update
- sudo apt install make libssl-dev libghc-zlib-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
After installing the necessary dependencies, you can get the version of Git you want by visiting the Git project's mirror on GitHub, available at the following URL:
https://github.com/git/git
From here, be sure that you are on the master
branch. Click the Tags link and select your desired Git version. Unless you have a reason to download a release candidate version (marked rc), try to avoid these, as they may be unstable.
Then, on the right side of the page, left-click the Clone or download button, right-click Download ZIP, and copy the link address that ends in .zip
.
On your Debian 9 server, move into the tmp
directory to download temporary files.
- cd /tmp
From there, you can use the wget
command to download the copied zip file link. We'll specify a new name for the file: git.zip
.
- wget https://github.com/git/git/archive/v2.18.0.zip -O git.zip
Unzip the file that you downloaded and move into the resulting directory by typing:
- unzip git.zip
- cd git-*
Now, you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
To ensure that the install was successful, you can type git --version
and you should receive relevant output that specifies the currently installed version of Git.
Now that you have Git installed, if you want to upgrade to a later version, you can clone the repository and then build and install. To find the URL to use for the clone operation, navigate to the branch or tag that you want on the project's GitHub page, and then copy the clone URL on the right side:
At the time of this writing, the relevant URL is:
https://github.com/git/git.git
Change into your home directory, and use git clone
on the URL you just copied:
- cd ~
- git clone https://github.com/git/git.git
This will create a new directory within your current directory in which you can rebuild the package and reinstall the newer version, as before. This will overwrite your older version with the new version:
- cd git
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Once this is complete, you can be sure that your version of Git is up to date.
Now that you have Git installed, you should configure it so that the generated commit messages contain your correct information.
This is possible using the git config
command. Specifically, we need to provide our name and email address because Git embeds this information into each commit we make. We can add this information by typing:
- git config --global user.name "Sammy"
- git config --global user.email "sammy@domain.com"
We can see all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Sammy
user.email=sammy@domain.com
...
The information you enter is stored in your Git configuration file. You can optionally edit it with a text editor as follows:
- nano ~/.gitconfig
[user]
name = Sammy
email = sammy@domain.com
There are many other options you can set, but these are the two essential ones. If you skip this step, you'll likely see warnings when you commit with Git. This makes more work for you, since you will then have to revise the commits you have made with the corrected information.
With this, you should have Git installed and ready to use on your system.
For more information on how to use Git, check out the following articles and series:
]]>I have run all the SSH steps in the terminal of my droplet: ssh-keygen, ssh-add, go to GitHub, add a new SSH key, copy and paste the SSH public key, then ssh -T git@github.com
and got the message saying I had successfully authenticated.
I also ran: git config --global user.email xxxx and git config --global user.name xxx
]]>I did not run git push
so the changes were only committed to my local Git repository.
So I decided to share here with the community how I reverted the last commit, in case someone else ends up in the same situation.
So rather than wiping out your whole local repo and cloning a fresh copy, you could do the following:
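As an illustration of one common approach (not necessarily the exact commands from the original post), git reset --soft HEAD~1 removes the last commit while keeping its changes staged, demonstrated here in a throwaway repository:

```shell
# Undo the last local commit but keep its changes (throwaway repo).
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.name "Demo"
git config user.email "demo@example.com"
echo "one" > notes.txt
git add notes.txt && git commit -qm "first commit"
echo "two" >> notes.txt
git add notes.txt && git commit -qm "second commit (the mistake)"
# Move HEAD back one commit; the file changes stay staged.
git reset --soft HEAD~1
git rev-list --count HEAD    # prints: 1
```

Because the commit was never pushed, no one else's history is affected.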
]]>Software version control systems help you track your software at the source code level. With version control tools, you can track changes, revert to previous versions, and branch to create alternate versions of files and directories.
Git is one of the most popular version control systems available today. Many projects' files are kept in a Git repository, and sites like GitHub, GitLab, and Bitbucket make it easier to share and collaborate on software development projects.
In this tutorial, we'll install and configure Git on a Debian 9 server. We will cover how to install the software in two different ways, each of which has its own benefits depending on your specific needs.
To complete this tutorial, you should have a non-root user with sudo
privileges on a Debian 9 server. This setup is described in our Debian 9 initial server setup guide.
Once your server and user are set up, you can continue.
One of the fastest ways to install Git is by using Debian's default repositories. Note that the version you install via these repositories may differ from the newest version currently available. If you need the latest release, move on to the next section of this tutorial to learn how to install and compile Git from source.
First, use the apt package management tools to update your local package index. Once the update is complete, you can download and install Git:
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command:
- git --version
Outputgit version 2.11.0
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This method takes more time and the result will not be maintained through your package manager, but it allows you to download the latest release and gives you some control over the options you include if you wish to customize.
Before you begin, you need to install the software that Git depends on. It can be found in the default repositories, so we can update our local package index and then install the packages.
- sudo apt update
- sudo apt install make libssl-dev libghc-zlib-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
After you have installed the necessary dependencies, you can go ahead and get the version of Git you want by visiting the Git project's mirror on GitHub, available at the following URL:
https://github.com/git/git
Once there, be sure that you are on the master
branch. Click the Tags link and select your desired Git version. Unless you have a good reason to download a release candidate version (marked rc), try to avoid these, as they may be unstable.
Next, on the right side of the page, click the Clone or download button, then right-click Download ZIP and copy the link address that ends in .zip
.
Back on your Debian 9 server, move into the tmp
directory to download temporary files.
- cd /tmp
From there, you can use the wget
command to download the copied link to the zip file. We'll specify a new name for the file: git.zip
.
- wget https://github.com/git/git/archive/v2.18.0.zip -O git.zip
Unzip the downloaded file and move into the resulting directory:
- unzip git.zip
- cd git-*
Now you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
To ensure that the installation was successful, you can type git --version and you should receive output that specifies the currently installed version of Git.
Now that you have Git installed, if you ever want to upgrade to a later version, you can clone the repository and then build and install. To find the URL to use for the clone operation, navigate to the branch or tag that you want on the project's GitHub page and then copy the clone URL on the right side:
At the time of writing, the relevant URL is:
https://github.com/git/git.git
Change to your home directory and use git clone on the URL you just copied:
- cd ~
- git clone https://github.com/git/git.git
This will create a new directory within your current directory where you can rebuild the package and reinstall the newer version, just as you did above. This will overwrite your older version with the new one:
- cd git
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
Now that you've finished these steps, you can be sure that your version of Git is up to date.
Now that you have Git installed, you need to configure it so that the generated commit messages will contain your correct information.
This can be achieved by using the git config command. Specifically, we need to provide our name and email address, because Git embeds this information into each commit we make. We can add this information by typing:
- git config --global user.name "Sammy"
- git config --global user.email "sammy@domain.com"
We can review all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Sammy
user.email=sammy@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor:
- nano ~/.gitconfig
[user]
name = Sammy
email = sammy@domain.com
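The flat key=value lines printed by git config --list map one-to-one onto these INI-style sections. As a minimal sketch, this pipeline converts sample flat output (the values here are assumed, not read from a live configuration) into the section format used by ~/.gitconfig:

```shell
# Sketch: convert flat `git config --list` style output into the INI-style
# sections used by ~/.gitconfig. The sample key=value lines are assumed.
printf 'user.name=Sammy\nuser.email=sammy@domain.com\n' | awk '{
  eq  = index($0, "=")            # split on the first "=" only
  key = substr($0, 1, eq - 1)
  val = substr($0, eq + 1)
  dot  = index(key, ".")          # split the key on the first "." only
  sect = substr(key, 1, dot - 1)
  name = substr(key, dot + 1)
  if (sect != prev) { print "[" sect "]"; prev = sect }
  print "\t" name " = " val
}'
```

Splitting only on the first `=` and first `.` matters: values such as email addresses contain dots of their own and must not be broken apart.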
There are many other options that you can set, but these are the two essential ones. If you skip this step, you will likely see warnings when you commit to Git. This makes more work for you, because you will then have to revise the commits you have made with the corrected information.
You now have Git installed and are ready to use it on your system.
To learn more about how to use Git, check out these articles and series:
]]>GitLab CE (Community Edition) is an open-source application primarily used to host Git repositories, with additional development-related features like issue tracking. It is designed to be hosted using your own infrastructure, and provides flexibility in deploying as an internal repository store for your development team, a public way to interface with users, or a means for contributors to host their own projects.
The GitLab project makes it relatively straightforward to set up a GitLab instance on your own hardware with an easy installation mechanism. In this guide, we will cover how to install and configure GitLab on an Ubuntu 18.04 server.
For this tutorial, you will need:
The published GitLab hardware requirements recommend using a server with the following specifications:
Although you may be able to get by with substituting some swap space for RAM, it is not recommended. For this guide, we will assume that your system meets at least the requirements above.
Before we can install GitLab, it is important to install the software that it leverages during installation and on an ongoing basis. Fortunately, all of the required software can be installed from Ubuntu's default package repositories.
Since this is our first time using apt
during this session, we can refresh the local package index and then install the dependencies by typing:
- sudo apt update
- sudo apt install ca-certificates curl openssh-server postfix
You will likely have some of this software installed already. For the postfix
installation, select Internet Site when prompted. On the next screen, enter your server's domain name to configure how the system will send mail.
With the dependencies in place, we can install GitLab itself. This is a straightforward process that leverages an installation script to configure your system with the GitLab repositories.
Move into the /tmp
directory and then download the installation script:
- cd /tmp
- curl -LO https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh
Feel free to examine the downloaded script to ensure that you are comfortable with the actions it will take. You can also view a hosted version of the script here:
- less /tmp/script.deb.sh
Once you are satisfied with the safety of the script, run the installer:
- sudo bash /tmp/script.deb.sh
The script will set up your server to use the GitLab-maintained repositories. This lets you manage GitLab with the same package management tools you use for your other system packages. Once this is complete, you can install the actual GitLab application with apt
:
- sudo apt install gitlab-ce
This will install the necessary components on your system.
Before you can configure GitLab, you will need to ensure that your firewall rules are permissive enough to allow web traffic. If you followed the guide linked in the prerequisites, you will have a ufw
firewall enabled.
View the current status of your active firewall by typing:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
As you can see, the current rules allow SSH traffic through, but access to other services is restricted. Since GitLab is a web application, we should allow HTTP access in. Because we will be taking advantage of GitLab's ability to request and enable a free TLS/SSL certificate from Let's Encrypt, we should also allow HTTPS access.
The protocol-to-port mapping for HTTP and HTTPS is available in the /etc/services
file, so we can allow that traffic in by name. If you did not already have OpenSSH traffic enabled, you should allow that traffic now as well:
- sudo ufw allow http
- sudo ufw allow https
- sudo ufw allow OpenSSH
Check the ufw
status again; this time, it should show access configured for at least these two additional services:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
80/tcp ALLOW Anywhere
443/tcp ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
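If you provision servers with scripts, you can check saved ufw status text for the rules this guide requires. This sketch scans a hard-coded sample of the output above rather than querying a live firewall:

```shell
# Sketch: scan captured `ufw status` text for the rules this guide requires.
# The sample status below is hard-coded for illustration, not read from ufw.
status='OpenSSH                    ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere'

for rule in OpenSSH 80/tcp 443/tcp; do
  if printf '%s\n' "$status" | grep -q "^$rule "; then
    echo "$rule: allowed"
  else
    echo "$rule: missing"
  fi
done
```

The anchored pattern (`^$rule `) avoids matching the IPv6 variants of each rule twice.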
The output above indicates that the GitLab web interface will be accessible once we configure the application.
Before you can use the application, you need to update the configuration file and run a reconfiguration command. First, open GitLab's configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Near the top is the external_url
configuration line. Update it to match your own domain. Change http
to https
so that GitLab will automatically redirect users to the site protected by the Let's Encrypt certificate:
##! For more details on configuring external_url see:
##! https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the-external-url-for-gitlab
external_url 'https://example.com'
Next, take a look at the letsencrypt['contact_emails']
setting. This setting defines a list of email addresses that the Let's Encrypt project can use to contact you if there are problems with your domain. It is a good idea to uncomment and fill out this setting so that you will be notified of any issues:
letsencrypt['contact_emails'] = ['sammy@example.com']
Save and close the file. Run the following command to reconfigure GitLab:
- sudo gitlab-ctl reconfigure
This command will initialize GitLab using the information it can find about your server. This is a completely automated process, so you will not have to answer any prompts. The process will also configure a Let's Encrypt certificate for your domain.
With GitLab running and access permitted, we can perform some initial configuration of the application through the web interface.
Visit the domain name of your GitLab server in your web browser:
https://example.com
On your first visit, you will see an initial prompt to set a password for the administrative account:
In the initial password prompt, supply and confirm a secure password for the administrative account. Click the Change your password button when you are finished.
You will then be redirected to the conventional GitLab login page:
Here, you can log in with the password you just set. The credentials are:
Enter these values into the fields for existing users and click the Sign in button. You will be signed into the application and taken to a landing page that prompts you to begin adding projects:
You can now make some simple changes to get GitLab set up the way you'd like.
One of the first things you should do after installation is update your profile. GitLab chooses some reasonable defaults, but these usually need adjusting once you start using the software.
To make the necessary modifications, click the user icon in the upper-right corner of the interface. In the drop-down menu that appears, select Settings:
This will take you to the Profile section of your settings:
Adjust the Name and Email address from Administrator and admin@example.com to something more accurate. The name you select will be displayed to other users, while the email address will be used for default avatar detection, notifications, Git actions through the interface, and so on.
Click the Update Profile settings button when you are done:
A confirmation email will be sent to the address you provided. Follow the instructions in the email to confirm your account so that you can begin using it with GitLab.
Next, click the Account item in the left-hand menu bar:
Here, you can find your private API token or configure two-factor authentication. However, the functionality we are interested in for the moment is the Change username section.
By default, the first administrative account is given the name root. Since this is a well-known account name, it is more secure to change it to a different name. You will still have administrative privileges; only the name will change. Replace root with your preferred username:
Click the Update username button to make the change:
The next time you log in to GitLab, remember to use your new username.
In most cases, you will want to use SSH keys with Git to interact with your GitLab projects. To do this, you need to add your SSH public key to your GitLab account.
If you have already created an SSH key pair on your local computer, you can view the public key by typing:
- cat ~/.ssh/id_rsa.pub
You should see a large block of text that looks something like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy this text and head back to the Profile Settings page in GitLab's web interface.
If, instead, you get a message that looks like this, you do not yet have an SSH key pair configured on your machine:
Outputcat: /home/sammy/.ssh/id_rsa.pub: No such file or directory
If this is the case, you can create an SSH key pair by typing:
- ssh-keygen
Accept the defaults and, optionally, provide a password to secure the key locally:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/home/sammy/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sammy/.ssh/id_rsa.
Your public key has been saved in /home/sammy/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I8v5/M5xOicZRZq/XRcSBNxTQV2BZszjlWaIHi5chc0 sammy@gitlab.docsthat.work
The key's randomart image is:
+---[RSA 2048]----+
| ..%o==B|
| *.E =.|
| . ++= B |
| ooo.o . |
| . S .o . .|
| . + .. . o|
| + .o.o ..|
| o .++o . |
| oo=+ |
+----[SHA256]-----+
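In a setup script, you might guard the key-generation step so it only suggests ssh-keygen when no key exists yet. This is a minimal sketch; the path is passed as a parameter, and the default RSA location used in this guide is an assumption:

```shell
# Sketch: report whether an SSH public key already exists at a given path,
# so ssh-keygen is only suggested when needed. The path is a parameter to
# keep the check reusable; the default location is an assumption.
check_key() {
  if [ -f "$1" ]; then
    echo "existing key found at $1"
  else
    echo "no key at $1; run ssh-keygen to create one"
  fi
}

check_key "$HOME/.ssh/id_rsa.pub"
```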
Once you have a key pair, you can display your public key as above by typing:
- cat ~/.ssh/id_rsa.pub
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy the block of text that is displayed and head back to your Profile Settings in GitLab's web interface.
Click the SSH Keys item in the left-hand menu:
In the provided space, paste the public key you copied from your local machine. Give it a descriptive title, and click the Add key button:
You should now be able to manage your GitLab projects and repositories from your local machine without having to provide your GitLab account credentials.
You may have noticed from the GitLab landing page that anyone can sign up for an account. This may be what you want if you are looking to host a public project. However, in many cases, more restrictive settings are desirable.
To begin, navigate to the administrative area by clicking the wrench icon in the main menu bar at the top of the page:
On the page that follows, you can see an overview of your GitLab instance as a whole. To adjust the settings, click the Settings item at the bottom of the left-hand menu:
You will be taken to the global settings for your GitLab instance. Here, you can adjust a number of settings that affect whether new users can sign up and what their level of access will be.
If you wish to disable sign-ups completely (you can still create accounts for new users yourself), scroll down to the Sign-up Restrictions section.
Deselect the Sign-up enabled check box:
Scroll down to the bottom of the page and click the Save changes button:
The sign-up section should now be removed from the GitLab landing page.
If you are using GitLab for an organization that provides email addresses in a particular domain, instead of disabling sign-ups entirely, you can restrict them to that domain.
In the Sign-up Restrictions section, select the Send confirmation email on sign-up check box, which will only allow users to log in after they have confirmed their email address.
Next, add your domain or domains to the Whitelisted domains for sign-ups box, one domain per line. You can use the asterisk (*) as a wildcard in domain names:
Scroll down to the bottom of the page and click the Save changes button:
New sign-ups will now be limited to email addresses in your whitelisted domains.
By default, new users can create up to 10 projects. If you wish to allow new users to view and participate in existing projects, but want to restrict their ability to create new ones, you can do so in the Account and Limit Settings section.
Inside, you can change the Default projects limit to 0 to completely prevent new users from creating projects:
New users can still be added to projects manually and will have access to internal or public projects created by other users.
Scroll down to the bottom of the page and click the Save changes button:
New users will now be able to create accounts, but unable to create projects.
By default, GitLab includes a scheduled task to renew Let's Encrypt certificates after midnight every fourth day, with the exact minute based on your external_url
. You can modify these settings in the /etc/gitlab/gitlab.rb
file. For example, if you wanted to renew every 7th day at 12:30, you could configure this as follows:
letsencrypt['auto_renew_hour'] = "12"
letsencrypt['auto_renew_minute'] = "30"
letsencrypt['auto_renew_day_of_month'] = "*/7"
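For comparison with a standard crontab entry, the three settings above can be read as the fields of a single cron schedule. This sketch just prints that equivalent line; the mapping to cron syntax is our illustration, not GitLab's literal implementation:

```shell
# Sketch: print the renewal settings above as a cron-style schedule string
# (minute hour day-of-month month day-of-week). The mapping is illustrative.
auto_renew_minute="30"
auto_renew_hour="12"
auto_renew_day_of_month="*/7"

printf '%s %s %s * *\n' "$auto_renew_minute" "$auto_renew_hour" "$auto_renew_day_of_month"
```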
You can also disable auto-renewal by adding an extra setting to /etc/gitlab/gitlab.rb
:
letsencrypt['auto_renew'] = false
With auto-renewal enabled, you will not need to worry about service interruptions.
You should now have a working GitLab instance hosted on your own server. You can begin to import or create new projects and configure the appropriate level of access for your teams. GitLab regularly adds features and makes updates to its platform, so be sure to check out the project's home page to stay up-to-date on any improvements or important notices.
]]>DNSControl is an infrastructure-as-code tool that allows you to deploy and manage your DNS zones using standard software development principles, including version control, testing, and automated deployment. DNSControl was created by Stack Exchange and is written in Go.
Using DNSControl eliminates many of the pitfalls of manual DNS management, as zone files are stored in a programmable format. This allows you to deploy zones to multiple DNS providers simultaneously, identify syntax errors, and push out your DNS configuration automatically, reducing the risk of human error. Another common usage of DNSControl is to quickly migrate your DNS to a different provider; for example, in the event of a DDoS attack or system outage.
In this tutorial, you’ll install and configure DNSControl, create a basic DNS configuration, and begin deploying DNS records to a live provider. As part of this tutorial, we will use DigitalOcean as the example DNS provider. If you wish to use a different provider, the setup is very similar. When you’re finished, you’ll be able to manage and test your DNS configuration in a safe, offline environment, and then automatically deploy it to production.
Before you begin this guide you’ll need the following:
your-server-ipv4-address
refers to the IP address of the server where you’re hosting your website or domain. your-server-ipv6-address
refers to the IPv6 address of the server where you’re hosting your website or domain.your_domain
throughout and DigitalOcean as the service provider.Once you have these ready, log in to your server as your non-root user to begin.
DNSControl is written in Go, so you’ll start this step by installing Go to your server and setting your GOPATH
.
Go is available within Debian’s default software repositories, making it possible to install using conventional package management tools.
You’ll also need to install Git, as this is required to allow Go to download and install the DNSControl software from its repository on GitHub.
Begin by updating the local package index to reflect any new upstream changes:
- sudo apt update
Then, install the golang-go
and git
packages:
- sudo apt install golang-go git
After confirming the installation, apt
will download and install Go and Git, as well as all of their required dependencies.
Next, you’ll configure the required path environment variables for Go. If you would like to know more about this, you can read this tutorial on Understanding the GOPATH. Start by editing the ~/.profile
file:
- nano ~/.profile
Add the following lines to the very end of your file:
...
export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"
Once you have added these lines to the bottom of the file, save and close it. Then reload your profile by either logging out and back in, or sourcing the file again:
- source ~/.profile
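You can confirm that the new variables took effect with a quick check. This sketch repeats the same exports in-line so that it stands alone:

```shell
# Sketch: verify that $GOPATH/bin ended up on the PATH. The exports are
# repeated here so the check is self-contained.
export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"

case ":$PATH:" in
  *":$GOPATH/bin:"*) echo "GOPATH bin is on PATH" ;;
  *)                 echo "PATH is not configured" ;;
esac
```

Wrapping the path in colons before matching avoids false positives from partial directory names.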
Now that you’ve installed and configured Go, you can install DNSControl.
The go get
command can be used to fetch a copy of the code, automatically compile it, and install it into your Go directory:
- go get github.com/StackExchange/dnscontrol
Once this is complete, you can check the installed version to make sure that everything is working:
- dnscontrol version
Your output will look similar to the following:
Outputdnscontrol 2.9-dev
If you see a dnscontrol: command not found
error, double-check your Go path setup.
Now that you’ve installed DNSControl, you can create a configuration directory and connect DNSControl to your DNS provider in order to allow it to make changes to your DNS records.
In this step, you’ll create the required configuration directories for DNSControl, and connect it to your DNS provider so that it can begin to make live changes to your DNS records.
First, create a new directory in which you can store your DNSControl configuration, and then move into it:
- mkdir ~/dnscontrol
- cd ~/dnscontrol
Note: This tutorial will focus on the initial setup of DNSControl; however, for production use it is recommended to store your DNSControl configuration in a version control system (VCS) such as Git. The advantages of this include full version history, integration with CI/CD for testing, seamless roll-backs of deployments, and so on.
If you plan to use DNSControl to write BIND zone files, you should also create the zones
directory:
- mkdir ~/dnscontrol/zones
BIND zone files are a raw, standardized method for storing DNS zones/records in plain text format. They were originally used for the BIND DNS server software, but are now widely adopted as the standard method for storing DNS zones. BIND zone files produced by DNSControl are useful if you want to import them to a custom or self-hosted DNS server, or for auditing purposes.
However, if you just want to use DNSControl to push DNS changes to a managed provider, the zones
directory will not be needed.
Next, you need to configure the creds.json
file, which is what will allow DNSControl to authenticate to your DNS provider and make changes. The format of creds.json
differs slightly depending on the DNS provider that you are using. Please see the Service Providers list in the official DNSControl documentation to find the configuration for your own provider.
Create the file creds.json
in the ~/dnscontrol
directory:
- cd ~/dnscontrol
- nano creds.json
Add the sample creds.json
configuration for your DNS provider to the file. If you’re using DigitalOcean as your DNS provider, you can use the following:
{
"digitalocean": {
"token": "your-digitalocean-oauth-token"
}
}
This file tells DNSControl which DNS providers you want it to connect to.
You’ll need to provide some form of authentication for your DNS provider. This is usually an API key or OAuth token, but some providers require extra information, as documented in the Service Providers list in the official DNSControl documentation.
Warning: This token will grant access to your DNS provider account, so you should protect it as you would a password. Also, ensure that if you’re using a version control system, either the file containing the token is excluded (e.g. using .gitignore
), or is securely encrypted in some way.
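A malformed creds.json is a common source of confusing provider errors, so it can help to validate the file as plain JSON before running DNSControl. This sketch writes a sample file to a temporary path (the token value is a placeholder) and checks it with python3, which is assumed to be installed:

```shell
# Sketch: validate a creds.json-style file before handing it to dnscontrol.
# The file is written to a temporary path and the token is a placeholder;
# python3 is assumed to be available for the JSON check.
tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
{
  "digitalocean": {
    "token": "your-digitalocean-oauth-token"
  }
}
EOF

if python3 -m json.tool "$tmpfile" > /dev/null 2>&1; then
  echo "creds.json is valid JSON"
else
  echo "creds.json has a syntax error"
fi
rm -f "$tmpfile"
```

Note that this only catches syntax errors; it does not verify that the token itself is valid for your provider.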
If you’re using DigitalOcean as your DNS provider, you can use the OAuth token that you generated in your DigitalOcean account settings as part of the prerequisites.
If you have multiple different DNS providers—for example, for multiple domain names, or delegated DNS zones—you can define these all in the same creds.json
file.
You’ve set up the initial DNSControl configuration directories, and configured creds.json
to allow DNSControl to authenticate to your DNS provider and make changes. Next you’ll create the configuration for your DNS zones.
In this step, you’ll create an initial DNS configuration file, which will contain the DNS records for your domain name or delegated DNS zone.
dnsconfig.js
is the main DNS configuration file for DNSControl. In this file, DNS zones and their corresponding records are defined using JavaScript syntax. This is known as a DSL, or Domain Specific Language. The JavaScript DSL page in the official DNSControl documentation provides further details.
To begin, create the DNS configuration file in the ~/dnscontrol
directory:
- cd ~/dnscontrol
- nano dnsconfig.js
Then, add the following sample configuration to the file:
// Providers:
var REG_NONE = NewRegistrar('none', 'NONE');
var DNS_DIGITALOCEAN = NewDnsProvider('digitalocean', 'DIGITALOCEAN');
// Domains:
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
A('@', 'your-server-ipv4-address')
);
This sample file defines a domain name or DNS zone at a particular provider, which in this case is your_domain
hosted by DigitalOcean. An example A
record is also defined for the zone root (@
), pointing to the IPv4 address of the server that you’re hosting your domain/website on.
There are three main functions that make up a basic DNSControl configuration file:
NewRegistrar(name, type, metadata)
: defines the domain registrar for your domain name. DNSControl can use this to make required changes, such as modifying the authoritative nameservers. If you only want to use DNSControl to manage your DNS zones, this can generally be left as NONE
.
NewDnsProvider(name, type, metadata)
: defines a DNS service provider for your domain name or delegated zone. This is where DNSControl will push the DNS changes that you make.
D(name, registrar, modifiers)
: defines a domain name or delegated DNS zone for DNSControl to manage, as well as the DNS records present in the zone.
You should configure NewRegistrar()
, NewDnsProvider()
, and D()
accordingly using the Service Providers list in the official DNSControl documentation.
If you’re using DigitalOcean as your DNS provider and only need to make DNS changes (rather than also manage authoritative nameservers), the sample in the preceding code block is already correct.
Once complete, save and close the file.
In this step, you set up a DNS configuration file for DNSControl, with the relevant providers defined. Next, you’ll populate the file with some useful DNS records.
Next, you can populate the DNS configuration file with useful DNS records for your website or service, using the DNSControl syntax.
Unlike traditional BIND zone files, where DNS records are written in a raw, line-by-line format, DNS records within DNSControl are defined as a function parameter (domain modifier) to the D()
function, as shown briefly in Step 3.
A domain modifier exists for each of the standard DNS record types, including A
, AAAA
, MX
, TXT
, NS
, CAA
, and so on. A full list of available record types is available in the Domain Modifiers section of the DNSControl documentation.
Modifiers for individual records are also available (record modifiers). Currently these are primarily used for setting the TTL (time to live) of individual records. A full list of available record modifiers is available in the Record Modifiers section of the DNSControl documentation. Record modifiers are optional, and in most basic use cases can be left out.
The syntax for setting DNS records varies slightly for each record type. Following are some examples for the most common record types:
A
records:
A('name', 'address', optional record modifiers)
A('@', 'your-server-ipv4-address', TTL(30))
AAAA
records:
AAAA('name', 'address', optional record modifiers)
AAAA('@', 'your-server-ipv6-address')
(record modifier left out, so default TTL will be used)CNAME
records:
CNAME('name', 'target', optional record modifiers)
CNAME('subdomain1', 'example.org.')
(note that a trailing .
must be included if there are any dots in the value)MX
records:
MX('name', 'priority', 'target', optional record modifiers)
MX('@', 10, 'mail.example.net')
(note that a trailing .
must be included if there are any dots in the value)TXT
records:
TXT('name', 'content', optional record modifiers)
TXT('@', 'This is a TXT record.')
CAA
records:
CAA('name', 'tag', 'value', optional record modifiers)
CAA('@', 'issue', 'letsencrypt.org')
In order to begin adding DNS records for your domain or delegated DNS zone, edit your DNS configuration file:
- nano dnsconfig.js
Next, you can begin populating the parameters for the existing D()
function using the syntax described in the previous list, as well as the Domain Modifiers section of the official DNSControl documentation. A comma (,
) must be used between each record.
For reference, the code block here contains a full sample configuration for a basic, initial DNS setup:
...
D('your_domain', REG_NONE, DnsProvider(DNS_DIGITALOCEAN),
A('@', 'your-server-ipv4-address'),
A('www', 'your-server-ipv4-address'),
A('mail', 'your-server-ipv4-address'),
AAAA('@', 'your-server-ipv6-address'),
AAAA('www', 'your-server-ipv6-address'),
AAAA('mail', 'your-server-ipv6-address'),
MX('@', 10, 'mail.your_domain.'),
TXT('@', 'v=spf1 -all'),
TXT('_dmarc', 'v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;')
);
Once you have completed your initial DNS configuration, save and close the file.
In this step, you set up the initial DNS configuration file, containing your DNS records. Next, you will test the configuration and deploy it.
In this step, you will run a local syntax check on your DNS configuration, and then deploy the changes to the live DNS server/provider.
First, move into your dnscontrol
directory:
- cd ~/dnscontrol
Next, use the preview
function of DNSControl to check the syntax of your file, and output what changes it will make (without actually making them):
- dnscontrol preview
If the syntax of your DNS configuration file is correct, DNSControl will output an overview of the changes that it will make. This should look similar to the following:
Output******************** Domain: your_domain
----- Getting nameservers from: digitalocean
----- DNS Provider: digitalocean...8 corrections
#1: CREATE A your_domain your-server-ipv4-address ttl=300
#2: CREATE A www.your_domain your-server-ipv4-address ttl=300
#3: CREATE A mail.your_domain your-server-ipv4-address ttl=300
#4: CREATE AAAA your_domain your-server-ipv6-address ttl=300
#5: CREATE TXT _dmarc.your_domain "v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;" ttl=300
#6: CREATE AAAA www.your_domain your-server-ipv6-address ttl=300
#7: CREATE AAAA mail.your_domain your-server-ipv6-address ttl=300
#8: CREATE MX your_domain 10 mail.your_domain. ttl=300
----- Registrar: none...0 corrections
Done. 8 corrections.
If you see an error in your output, DNSControl will provide details on what the error is and where it is located within your file.
Warning: The next command will make live changes to your DNS records and possibly other settings. Please ensure that you are prepared for this, including taking a backup of your existing DNS configuration, as well as ensuring that you have the means to roll back if needed.
Finally, you can push out the changes to your live DNS provider:
- dnscontrol push
You’ll see an output similar to the following:
Output******************** Domain: your_domain
----- Getting nameservers from: digitalocean
----- DNS Provider: digitalocean...8 corrections
#1: CREATE TXT _dmarc.your_domain "v=DMARC1; p=reject; rua=mailto:abuse@your_domain; aspf=s; adkim=s;" ttl=300
SUCCESS!
#2: CREATE A your_domain your-server-ipv4-address ttl=300
SUCCESS!
#3: CREATE AAAA your_domain your-server-ipv6-address ttl=300
SUCCESS!
#4: CREATE AAAA www.your_domain your-server-ipv6-address ttl=300
SUCCESS!
#5: CREATE AAAA mail.your_domain your-server-ipv6-address ttl=300
SUCCESS!
#6: CREATE A www.your_domain your-server-ipv4-address ttl=300
SUCCESS!
#7: CREATE A mail.your_domain your-server-ipv4-address ttl=300
SUCCESS!
#8: CREATE MX your_domain 10 mail.your_domain. ttl=300
SUCCESS!
----- Registrar: none...0 corrections
Done. 8 corrections.
Now, if you check the DNS settings for your domain in the DigitalOcean control panel, you’ll see the changes.
You can also check the record creation by running a DNS query for your domain/delegated zone using dig. If you don't have dig installed, you'll need to install the dnsutils package:
- sudo apt install dnsutils
Once you've installed dig, you can use it to make a DNS lookup for your domain. You'll see that the records have been updated accordingly:
- dig +short your_domain
You’ll see output showing the IP address and relevant DNS record from your zone that was deployed using DNSControl. DNS records can take some time to propagate, so you may need to wait and run this command again.
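If you want to confirm every record type in one pass, you can wrap the lookups in a small shell function. This is only a sketch; the function name is hypothetical and it assumes dig is installed:

```shell
# Query each record type deployed above for a domain.
# An alternative lookup command can be passed as the second
# argument (useful for testing without network access).
check_records() {
  local domain="$1"
  local lookup="${2:-dig}"
  local type
  for type in A AAAA MX TXT; do
    printf '%s %s: ' "$type" "$domain"
    "$lookup" +short "$domain" "$type"
  done
}
```

For example, `check_records your_domain` would print the deployed A, AAAA, MX, and TXT values in turn.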
In this final step, you ran a local syntax check of the DNS configuration file, then deployed it to your live DNS provider, and tested that the changes were made successfully.
In this article you set up DNSControl and deployed a DNS configuration to a live provider. Now you can manage and test your DNS configuration changes in a safe, offline environment before deploying them to production.
If you wish to explore this subject further, DNSControl is designed to be integrated into your CI/CD pipeline, allowing you to run in-depth tests and have more control over your deployment to production. You could also look into integrating DNSControl into your infrastructure build/deployment processes, allowing you to deploy servers and add them to DNS completely automatically.
If you wish to go further with DNSControl, the following DigitalOcean articles provide some interesting next steps to help integrate DNSControl into your change management and infrastructure deployment workflows:
If that's not possible, can I SSH into the droplet using a password and run a command inside of it?
Basically I want to either get the DLLs from github onto the droplet, or alternatively SSH into the droplet and run a script that pulls the source from git, and builds the app on the droplet itself
GitLab CE, or Community Edition, is an open-source application primarily used to host Git repositories, with additional development-related features like issue tracking. It is designed to be hosted using your own infrastructure, and provides flexibility in deploying as an internal repository store for your development team, a public way to interface with users, or a means for contributors to host their own projects.
The GitLab project makes it relatively straightforward to set up a GitLab instance on your own hardware with an easy installation mechanism. In this guide, we will cover how to install and configure GitLab on an Ubuntu 18.04 server.
For this tutorial, you will need:
The GitLab hardware requirements recommend using a server with:
While you may be able to get by with substituting some swap space for RAM, it is not recommended. For this guide, we will assume that you have at least the resources listed above.
Before we can install GitLab itself, it is important to install some of the software that it leverages during installation and on an ongoing basis. All of the required software can be installed from Ubuntu's default package repositories.
Since this is our first time using apt during this session, we can refresh the local package index and then install the dependencies by typing:
- sudo apt update
- sudo apt install ca-certificates curl openssh-server postfix
You will likely have some of this software installed already. For the postfix installation, select Internet Site when prompted. On the next screen, enter your server's domain name to configure how the system will send mail.
Now that the dependencies are installed, we can install GitLab itself. This is a straightforward process that leverages an installation script to configure your system with the GitLab repositories.
Move into the /tmp directory and then download the installation script:
- cd /tmp
- curl -LO https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh
Feel free to examine the downloaded script to ensure that you are comfortable with the actions it will take. You can also find a hosted version of the script here:
- less /tmp/script.deb.sh
Once you are satisfied with the safety of the script, run the installer:
- sudo bash /tmp/script.deb.sh
The script will set up your server to use the GitLab maintained repositories. This lets you manage GitLab with the same package management tools you use for your other system packages. Once complete, you can install the actual GitLab application with apt:
- sudo apt install gitlab-ce
This will install the necessary components on your system.
Before you can configure GitLab, you will need to make sure that your firewall rules permit web traffic. If you followed the guide linked in the prerequisites, you will have a ufw firewall enabled.
To view the current status of your active firewall, type:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
As you can see, the current rules allow SSH traffic, but access to other services is restricted. Since GitLab is a web application, we should allow HTTP access. Because we will be taking advantage of GitLab's ability to request and enable a free TLS/SSL certificate from Let's Encrypt, we will also allow HTTPS access.
The protocol-to-port mappings for HTTP and HTTPS are listed in the /etc/services file, so we can allow that traffic in by name. If you did not already have OpenSSH traffic enabled, you should allow that traffic now as well:
- sudo ufw allow http
- sudo ufw allow https
- sudo ufw allow OpenSSH
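The name-based rules above work because ufw resolves service names through /etc/services, as mentioned earlier. As a quick sanity check, you can inspect the relevant entries directly:

```shell
# Show the /etc/services entries that map the names "http" and
# "https" to TCP ports 80 and 443.
grep -E '^https?[[:space:]]' /etc/services
```

You should see lines assigning http to 80/tcp and https to 443/tcp.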
Check the ufw status again. You should see access configured for at least these two services:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
80/tcp ALLOW Anywhere
443/tcp ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
The output above indicates that the GitLab web interface will be accessible once we configure the application.
Before you can use the application, you need to update the configuration file and run a reconfiguration command. First, open GitLab's configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Near the top is the external_url configuration line. Update it to match your own domain. Change http to https so that GitLab will automatically redirect users to the site protected by the Let's Encrypt certificate:
##! For more details on configuring external_url see:
##! https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the-external-url-for-gitlab
external_url 'https://example.com'
Next, look for the letsencrypt['contact_emails'] setting. This setting defines a list of email addresses that the Let's Encrypt project can use to contact you if there are problems with your domain. It is a good idea to uncomment and fill in this setting so that you will know of any issues:
letsencrypt['contact_emails'] = ['sammy@example.com']
Save and close the file. Run the following command to reconfigure GitLab:
- sudo gitlab-ctl reconfigure
This will initialize GitLab using the information it can find about your server. This is a completely automated process, so you will not have to answer any prompts. The process will also configure a Let's Encrypt certificate for your domain.
With GitLab running and access permitted, we can perform some initial configuration of the application through the web interface.
Visit the domain name of your GitLab server in your web browser:
https://example.com
Na primeira visita, você verá um prompt inicial para definir uma senha para a conta administrativa:
No prompt de senha inicial, forneça e confirme uma senha segura para a conta administrativa. Clique no botão Change your password quando tiver terminado.
Você será redirecionado para a página padrão de login do GitLab:
Aqui, é possível fazer login com a senha que você definiu. As credenciais são:
Digite esses dados nos campos em relação aos usuários existentes e clique no botão Sign in Você agora estará logado no aplicativo e será direcionado até a página de destino, onde será solicitado que você adicione projetos:
Agora é possível fazer algumas alterações simples para deixar o GitLab do jeito que você quiser.
Uma das primeiras coisas a fazer após uma instalação nova é melhorar seu perfil. O GitLab possui alguns padrões razoáveis, mas que não são geralmente apropriados para quando você começar a usar o software.
Para fazer as modificações necessárias, clique no ícone de usuário, no canto superior direito da interface. No menu suspenso, selecione Configurações:
Você será levado à seção do Perfil de suas configurações:
Ajuste o Name e endereço de E-mail do “Administrador” e “admin@example.com” para algo mais preciso. O nome que você selecionar será exibido para os outros usuários, enquanto o e-mail será usado como padrão para a detecção de avatar, notificações, ações Git através da interface, etc.
Clique no botão Atualizar configurações de perfil, na parte inferior, quando tiver terminado:
Um e-mail de confirmação será enviado para o endereço que você forneceu. Siga as instruções no e-mail para confirmar sua conta para poder começar a usá-la com o GitLab.
Em seguida, clique no item Conta na barra de menu do lado esquerdo:
Aqui, você pode encontrar seu token pessoal da API ou configurar a autenticação de dois fatores. Entretanto, a funcionalidade em que estamos interessados no momento é a seção Alterar nome de usuário.
Por padrão, a primeira conta administrativa recebe o nome de root. Como se trata de um nome de conta conhecido, é mais seguro mudar para um nome diferente. Você ainda terá privilégios administrativos; a única coisa que mudará é o nome. Substitua **root **com seu nome de usuário preferido:
Clique no botão Atualizar nome de usuário para fazer a alteração:
Da próxima vez que fizer login no Gitlab, lembre-se de utilizar seu novo nome de usuário.
Na maioria dos casos, você irá querer utilizar as chaves SSH com Git para interagir com seus projetos do GitLab. Para isso, será necessário adicionar sua chave pública SSH à sua conta do GitLab.
Se já tiver um par de chaves SSH criado no seu computador local, normalmente é possível visualizar a chave pública digitando:
- cat ~/.ssh/id_rsa.pub
You will see a large chunk of text, like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy this text and head back to the Profile Settings page in GitLab's web interface.
If, instead, you get a message that looks like this, you do not yet have an SSH key pair configured on your machine:
Outputcat: /home/sammy/.ssh/id_rsa.pub: No such file or directory
If this is the case, you can create an SSH key pair by typing:
- ssh-keygen
Accept the defaults and optionally provide a passphrase to secure the key locally:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/home/sammy/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sammy/.ssh/id_rsa.
Your public key has been saved in /home/sammy/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I8v5/M5xOicZRZq/XRcSBNxTQV2BZszjlWaIHi5chc0 sammy@gitlab.docsthat.work
The key's randomart image is:
+---[RSA 2048]----+
| ..%o==B|
| *.E =.|
| . ++= B |
| ooo.o . |
| . S .o . .|
| . + .. . o|
| + .o.o ..|
| o .++o . |
| oo=+ |
+----[SHA256]-----+
Once you have this, you can display your public key as above by typing:
- cat ~/.ssh/id_rsa.pub
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy the block of text that's displayed and head back to your Profile Settings in GitLab's web interface.
Click the SSH Keys item in the left-hand menu:
In the provided space, paste the public key you copied from your local machine. Give it a descriptive title, and click the Add key button:
You should now be able to manage your GitLab projects and repositories from your local machine without having to provide your GitLab account credentials.
You may have noticed that it is possible for anyone to sign up for an account when you visit your GitLab instance's landing page. This may be what you want if you are looking to host a public project. However, more restrictive settings are often desirable.
To begin, head to the administrative area by clicking on the wrench icon in the main menu bar at the top of the page:
On the page that follows, you can see an overview of your GitLab instance as a whole. To adjust the settings, click the Settings item at the bottom of the left-hand menu:
You will be taken to the global settings for your GitLab instance. Here, you can adjust a number of settings that affect whether new users can sign up and what their level of access will be.
If you wish to disable sign-ups completely (you can still create accounts for new users manually), scroll down to the Sign-up Restrictions section.
Deselect the Sign-up enabled check box:
Scroll down to the bottom and click the Save changes button:
The sign-up section should now be removed from the GitLab landing page.
If you are using GitLab as part of an organization that provides email addresses associated with a domain, instead of disabling sign-ups entirely, you can restrict them by domain.
In the Sign-up Restrictions section, select the Send confirmation email on sign-up box, which will allow users to log in only after they have confirmed their email.
Next, add your domain or domains to the Whitelisted domains for sign-ups box, one domain per line. You can use the asterisk "*" to specify wildcard domains:
Scroll down to the bottom and click the Save changes button:
The sign-up section should now be removed from the GitLab landing page.
By default, new users can create up to 10 projects. If you wish to allow new external users visibility and participation but want to restrict their access to creating new projects, you can do so in the Account and Limit Settings section.
Inside, you can change the Default projects limit to 0 to completely disable new users' ability to create projects:
New users can still be added to projects manually and will have access to internal or public projects created by other users.
Scroll down to the bottom and click the Save changes button:
Now, new users will be able to create accounts, but they will be unable to create projects.
By default, GitLab has a scheduled task set up to renew Let's Encrypt certificates after midnight every fourth day, with the exact minute based on your external_url. You can modify these settings in the /etc/gitlab/gitlab.rb file. For example, if you wanted to renew every 7th day at 12:30, you could configure this as follows:
letsencrypt['auto_renew_hour'] = "12"
letsencrypt['auto_renew_minute'] = "30"
letsencrypt['auto_renew_day_of_month'] = "*/7"
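Taken together, these three values fill in the minute, hour, and day-of-month fields of a cron-style schedule that GitLab manages for you. As a rough sketch (illustrative only; the renewal command column is managed by GitLab and omitted here), the example above corresponds to:

```
# minute  hour  day-of-month  month  day-of-week
30        12    */7           *     *
```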
You can also disable auto-renewal by adding an additional setting to /etc/gitlab/gitlab.rb:
letsencrypt['auto_renew'] = false
With auto-renewals in place, you will not have to worry about service interruptions.
You should now have a working GitLab instance hosted on your own server. You can begin to import or create new projects and configure the appropriate level of access for your team. GitLab regularly adds features and makes updates to its platform, so be sure to check out the project's home page to stay up-to-date on any improvements or important notices.
GitLab CE, or Community Edition, is an open-source application primarily used to host Git repositories, with additional development-related features like issue tracking. It is designed to be hosted using your own infrastructure, and provides flexibility in deploying as an internal repository store for your development team, a public way to interface with users, or a means for contributors to host their own projects.
The GitLab project makes it relatively straightforward to set up a GitLab instance on your own hardware with an easy installation mechanism. In this guide, we will cover how to install and configure GitLab on an Ubuntu 18.04 server.
For this tutorial, you will need:
The published GitLab hardware requirements recommend using a server with:
While you may be able to get by with substituting some swap space for RAM, it is not recommended. For this guide, we will assume that you have at least the resources listed above.
Before we can install GitLab itself, it is important to install some of the software that it leverages during installation and on an ongoing basis. Fortunately, all of the required software can be installed from Ubuntu's default package repositories.
Since this is our first time using apt during this session, we can refresh the local package index and then install the dependencies by typing:
- sudo apt update
- sudo apt install ca-certificates curl openssh-server postfix
You will likely have some of this software installed already. For the postfix installation, select Internet Site when prompted. On the next screen, enter your server's domain name to configure how the system will send mail.
Now that the dependencies are installed, we can install GitLab itself. This is a straightforward process that leverages an installation script to configure your system with the GitLab repositories.
Move into the /tmp directory and download the installation script:
- cd /tmp
- curl -LO https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh
Examine the downloaded script to ensure that you are comfortable with the actions it will take. You can also find a hosted version of the script here:
- less /tmp/script.deb.sh
When you are satisfied with the safety of the script, run the installer:
- sudo bash /tmp/script.deb.sh
The script will set up your server to use the GitLab maintained repositories. This lets you manage GitLab with the same package management tools you use for your other system packages. Once complete, you can install the actual GitLab application with apt:
- sudo apt install gitlab-ce
This will install the necessary components on your system.
Before you can configure GitLab, you will need to make sure that your firewall rules permit web traffic. If you followed the guide linked in the prerequisites, you will have a ufw firewall enabled.
Check the current status of your active firewall by typing:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
As you can see, the current rules allow SSH traffic, but access to other services is restricted. Since GitLab is a web application, we should allow HTTP access. Because we will be taking advantage of GitLab's ability to request and enable a free TLS/SSL certificate from Let's Encrypt, we will also allow HTTPS access.
The protocol-to-port mappings for HTTP and HTTPS are listed in the /etc/services file, so we can allow that traffic in by name. If you did not already have OpenSSH traffic enabled, you should allow that traffic now as well:
- sudo ufw allow http
- sudo ufw allow https
- sudo ufw allow OpenSSH
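As noted above, ufw resolves the service names http and https through /etc/services. You can confirm the port assignments it uses with a quick check:

```shell
# Show the /etc/services entries that map the names "http" and
# "https" to TCP ports 80 and 443.
grep -E '^https?[[:space:]]' /etc/services
```

You should see lines assigning http to 80/tcp and https to 443/tcp.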
Check the ufw status again; you should see access configured for at least these two services:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
80/tcp ALLOW Anywhere
443/tcp ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
The output above indicates that the GitLab web interface will be accessible once we configure the application.
Before you can use the application, you need to update the configuration file and run a reconfiguration command. First, open GitLab's configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Near the top is the external_url configuration line. Update it to match your own domain. Change http to https so that GitLab will automatically redirect users to the site protected by the Let's Encrypt certificate:
##! For more details on configuring external_url see:
##! https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the-external-url-for-gitlab
external_url 'https://example.com'
Next, look for the letsencrypt['contact_emails'] setting. This setting defines a list of email addresses that the Let's Encrypt project can use to contact you if there are problems with your domain. It is a good idea to uncomment and fill in this setting so that you will know of any issues:
letsencrypt['contact_emails'] = ['sammy@example.com']
Save and close the file. Run the following command to reconfigure GitLab:
- sudo gitlab-ctl reconfigure
This will initialize GitLab using the information it can find about your server. This is a completely automated process, so you will not have to answer any prompts. The process will also configure a Let's Encrypt certificate for your domain.
With GitLab running and access permitted, we can perform some initial configuration of the application through the web interface.
Visit the domain name of your GitLab server in your web browser:
https://example.com
On your first visit, you will see an initial prompt to set a password for the administrative account:
In the initial password prompt, supply and confirm a secure password for the administrative account. Click the Change your password button when you are finished.
You will be redirected to the conventional GitLab login page:
Here, you can log in with the password you just set. The credentials are:
Enter these values into the fields for existing users and click the Sign in button. You will be signed into the application and taken to a landing page that prompts you to begin adding projects:
You can now make some simple changes to get GitLab set up the way you'd like.
One of the first things you should do after a fresh installation is get your profile in better shape. GitLab selects some reasonable defaults, but these are not usually appropriate once you start using the software.
To make the necessary modifications, click on the user icon in the upper-right corner of the interface. In the drop-down menu that appears, select Settings:
You will be taken to the Profile section of your settings:
Adjust the Name and Email address from "Administrator" and "admin@example.com" to something more accurate. The name you select will be displayed to other users, while the email will be used for default avatar detection, notifications, Git actions through the interface, and more.
Click the Update Profile settings button at the bottom when you are finished:
A confirmation email will be sent to the address you provided. Follow the instructions in the email to confirm your account so that you can begin using it with GitLab.
Next, click the Account item in the left-hand menu bar:
Here, you can find your private API token or configure two-factor authentication. However, the functionality we are interested in at the moment is the Change username section:
By default, the first administrative account is given the name root. Since this is a known account name, it is more secure to change it to a different name. You will still have administrative privileges; the only thing that will change is the name. Replace root with your preferred username:
Click the Update username button to make the change:
Next time you log in to GitLab, remember to use your new username.
In most cases, you will want to use SSH keys with Git to interact with your GitLab projects. To do this, you need to add your SSH public key to your GitLab account.
If you already have an SSH key pair created on your local computer, you can usually view the public key by typing:
- cat ~/.ssh/id_rsa.pub
You should see a large chunk of text similar to this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy this text and head back to the Profile Settings page in GitLab's web interface.
If, instead, you get a message like this, you do not yet have an SSH key pair configured on your machine:
Outputcat: /home/sammy/.ssh/id_rsa.pub: No such file or directory
In this case, you can create an SSH key pair by typing:
- ssh-keygen
Accept the defaults and optionally provide a passphrase to secure the key locally:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/home/sammy/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sammy/.ssh/id_rsa.
Your public key has been saved in /home/sammy/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I8v5/M5xOicZRZq/XRcSBNxTQV2BZszjlWaIHi5chc0 sammy@gitlab.docsthat.work
The key's randomart image is:
+---[RSA 2048]----+
| ..%o==B|
| *.E =.|
| . ++= B |
| ooo.o . |
| . S .o . .|
| . + .. . o|
| + .o.o ..|
| o .++o . |
| oo=+ |
+----[SHA256]-----+
Once you have this, you can display your public key as above by typing:
- cat ~/.ssh/id_rsa.pub
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy the block of text that's displayed and head back to your Profile Settings in GitLab's web interface.
Click the SSH Keys item in the left-hand menu:
In the provided space, paste the public key you copied from your local machine. Give it a descriptive title, and click the Add key button:
You should now be able to manage your GitLab projects and repositories from your local machine without having to provide your GitLab account credentials.
You may have noticed that it is possible for anyone to sign up for an account when you visit your GitLab instance's landing page. This may be what you want if you are hosting a public project. However, more restrictive settings are often desirable.
To begin, head to the administrative area by clicking on the wrench icon in the main menu bar at the top of the page.
On the page that follows, you can see an overview of your GitLab instance as a whole. To adjust the settings, click the Settings item at the bottom of the left-hand menu:
You will be taken to the global settings for your GitLab instance. Here, you can adjust a number of settings that affect whether new users can sign up and what their level of access will be.
If you wish to disable sign-ups completely (you can still create accounts for new users manually), scroll down to the Sign-up Restrictions section.
Deselect the Sign-up enabled check box:
Scroll down to the bottom and click the Save changes button:
The sign-up section should now be removed from the GitLab landing page.
If you are using GitLab as part of an organization that provides email addresses associated with a domain, instead of disabling sign-ups entirely, you can restrict them by domain.
In the Sign-up Restrictions section, select the Send confirmation email on sign-up box, which will allow users to log in only after they have confirmed their email.
Next, add your domain or domains to the Whitelisted domains for sign-ups box, one domain per line. You can use the asterisk "*" to specify wildcard domains:
Scroll down to the bottom and click the Save changes button:
The sign-up section should now be removed from the GitLab landing page.
By default, new users can create up to 10 projects. If you wish to allow new external users visibility and participation but want to restrict their access to creating new projects, you can do so in the Account and Limit Settings section.
Here, you can change the Default projects limit to 0 to completely disable new users' ability to create projects:
New users can still be added to projects manually and will have access to internal or public projects created by other users.
Scroll down to the bottom and click the Save changes button:
Now, new users will be able to create accounts, but they will be unable to create projects.
By default, GitLab has a scheduled task set up to renew Let's Encrypt certificates after midnight every fourth day, with the exact minute based on your external_url. You can modify these settings in the /etc/gitlab/gitlab.rb file. For example, if you wanted to renew every 7th day at 12:30, you could configure this as follows:
letsencrypt['auto_renew_hour'] = "12"
letsencrypt['auto_renew_minute'] = "30"
letsencrypt['auto_renew_day_of_month'] = "*/7"
You can also disable automatic renewal by adding an extra setting to /etc/gitlab/gitlab.rb:
letsencrypt['auto_renew'] = false
With automatic renewals configured, you won't need to worry about service interruptions.
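Changes to /etc/gitlab/gitlab.rb only take effect after the configuration is reapplied. As a sketch, a complete renewal stanza might look like this (the values are the same examples used above):

```
# /etc/gitlab/gitlab.rb — renew every 7 days at 12:30
letsencrypt['auto_renew'] = true
letsencrypt['auto_renew_hour'] = "12"
letsencrypt['auto_renew_minute'] = "30"
letsencrypt['auto_renew_day_of_month'] = "*/7"
```

After saving the file, apply the change with the standard Omnibus GitLab command, sudo gitlab-ctl reconfigure.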
You should now have a working GitLab instance hosted on your own server. You can begin importing or creating new projects and configuring the appropriate level of access for your team. GitLab regularly adds new features and makes updates to its platform, so be sure to check out the project's home page to stay up to date on any improvements or important notices.
Version control isn't just for code. It's for anything you want to track, including content. Using Git to manage your next writing project lets you view multiple drafts at the same time, see differences between those drafts, and even roll back to a previous version. And if you're comfortable doing so, you can share your work with others on GitHub or other central Git repositories.
In this tutorial, you'll use Git to manage a small Markdown document. You'll store an initial version, commit it, make changes, view the difference between those changes, and review the previous version. When you're done, you'll have a workflow you can apply to your own writing projects.
To manage your changes, you'll create a local Git repository. A Git repository lives inside an existing directory, so start by creating a new directory for your article:
- mkdir artigo
Switch to the new artigo directory:
- cd artigo
The git init command creates a new, empty Git repository in the current directory. Run that command now:
- git init
You'll see the following output, which confirms that your repository was created:
OutputInitialized empty Git repository in /Users/sammy/artigo/.git/
The .gitignore file lets you tell Git which files it should ignore. You can use this to ignore temporary files your text editor might create, or operating system files. On macOS, for example, the Finder application creates .DS_Store files in directories. Create a .gitignore file that ignores them:
- nano .gitignore
Add the following lines to the file:
# Ignore Finder files
.DS_Store
The first line is a comment, which will help you identify what you're ignoring in the future. The second line specifies the file to ignore.
Save the file and exit the editor.
As you discover more files you want to ignore, open the .gitignore file and add a new line for each file or directory you want to ignore.
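If you're ever unsure whether a .gitignore rule actually matches a given file, git check-ignore can tell you which rule applies. A quick sketch in a throwaway repository (the temporary directory is only for illustration):

```shell
# A throwaway repository just for testing ignore rules
demo=$(mktemp -d)
cd "$demo"
git init -q

# The same rules as the tutorial's .gitignore
printf '# Ignore Finder files\n.DS_Store\n' > .gitignore

# -v reports the source file, line number, and pattern that matched
git check-ignore -v .DS_Store
```

For a file that is not ignored, check-ignore prints nothing and exits with a non-zero status.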
Now that your repository is configured, you can get to work.
Git only knows about the files you tell it about. Just because a file exists in the directory holding the repository doesn't mean Git will track its changes. You have to add the file to the repository and then commit the changes.
Create a new Markdown file called artigo.md:
- nano artigo.md
Add some text to the file:
# How To Use Git to Manage Your Writing Project
### Introduction
Version control isn't just for code. It's for anything you want to track, including content. Using [Git](https://git-scm.com) to manage your next writing project lets you view multiple drafts at the same time, see differences between those drafts, and even roll back to a previous version. And if you're comfortable doing so, you can share your work with others on GitHub or other central Git repositories.
In this tutorial, you'll use Git to manage a small Markdown document. You'll store an initial version, commit it, make changes, view the difference between those changes, and review the previous version. When you're done, you'll have a workflow you can apply to your own writing projects.
Save your changes and exit the editor.
The git status command will show you the state of your repository. It will show you which files need to be added so Git can track them. Run this command:
- git status
You'll see this output:
OutputOn branch master
No commits yet
Untracked files:
(use "git add <file>..." to include in what will be committed)
.gitignore
artigo.md
nothing added to commit but untracked files present (use "git add" to track)
In the output, the Untracked files section shows the files that Git isn't watching. Those files need to be added to the repository so Git can watch them for changes. Use the git add command to do that:
- git add .gitignore
- git add artigo.md
Now run git status to verify that those files have been added:
OutputOn branch master
No commits yet
Changes to be committed:
(use "git rm --cached <file>..." to unstage)
new file: .gitignore
new file: artigo.md
Both files are now listed in the Changes to be committed section. Git knows about them, but it hasn't created a snapshot of the work yet. Use the git commit command to do that.
When you create a new commit, you need to supply a commit message. A good commit message states what your changes are. When you're working with others, the more detailed your commit messages are, the better.
Use the git commit command to commit your changes:
- git commit -m "Adicionar arquivo gitignore e versão inicial do artigo"
The output of the command shows that the files were committed:
Output[master (root-commit) 95fed84] Adicionar arquivo gitignore e versão inicial do artigo
2 files changed, 9 insertions(+)
create mode 100644 .gitignore
create mode 100644 artigo.md
Use the git status command to see the state of the repository:
- git status
The output shows that there are no changes that need to be added or committed.
OutputOn branch master
nothing to commit, working tree clean
Now let's look at how to work with changes.
You've added the initial version of your article. Now you'll add some more text so you can see how to manage changes with Git.
Open the article in your editor:
- nano artigo.md
Add some more text to the end of the file:
## Prerequisites
* Git installed on your local computer. The tutorial [How to Contribute to Open Source: Getting Started with Git](https://www.digitalocean.com/community/tutorials/how-to-contribute-to-open-source-getting-started-with-git) walks you through installing Git and covers some basic information you may find helpful.
Save the article.
Use the git status command to see where things stand in your repository:
- git status
The output shows that there are changes:
OutputOn branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: artigo.md
no changes added to commit (use "git add" and/or "git commit -a")
As expected, the artigo.md file has changed.
Use git diff to see what those changes are:
- git diff artigo.md
The output shows the lines you've added:
diff --git a/artigo.md b/artigo.md
index 77b081c..ef6c301 100644
--- a/artigo.md
+++ b/artigo.md
@@ -5,3 +5,7 @@
Version control isn't just for code. It's for anything you want to track, including content. Using [Git](https://git-scm.com) to manage your next writing project lets you view multiple drafts at the same time, see differences between those drafts, and even roll back to a previous version. And if you're comfortable doing so, you can share your work with others on GitHub or other central Git repositories.
In this tutorial, you'll use Git to manage a small Markdown document. You'll store an initial version, commit it, make changes, view the difference between those changes, and review the previous version. When you're done, you'll have a workflow you can apply to your own writing projects.
+
+## Prerequisites
+
+* Git installed on your local computer. The tutorial [How to Contribute to Open Source: Getting Started with Git](https://www.digitalocean.com/community/tutorials/how-to-contribute-to-open-source-getting-started-with-git) walks you through installing Git and covers some basic information you may find helpful.
In the output, lines that start with a plus sign (+) are lines you added. Lines that were removed would appear with a minus sign (-). Unchanged lines would have neither of these characters in front of them.
Using git diff and git status is a useful way to see what you've changed. You can also save the diff to a file so you can view it later with the following command:
- git diff artigo.md > artigo_diff.diff
Using the .diff extension will help your text editor apply the proper syntax highlighting.
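Line-oriented diffs can be noisy for prose, because changing a single word marks the whole line as changed. Git's standard --word-diff option shows changes word by word instead, which is often easier to scan when reviewing writing:

```shell
# Deletions appear as [-old words-] and additions as {+new words+}
git diff --word-diff artigo.md
```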
Saving the changes in your repository is a two-step process. First, add the artigo.md file again, and then commit it. Git wants you to tell it explicitly which files go into every commit, so even though you added the file before, you have to add it again. Note that the output of the git status command reminds you of this.
Add the file and commit the changes, supplying a commit message:
- git add artigo.md
- git commit -m "adicionada seção de pré-requisitos"
The output verifies that the commit worked:
Output[master 1fbfc21] adicionada seção de pré-requisitos
1 file changed, 4 insertions(+)
Use git status to see the status of your repository. You'll see that there's nothing left to do.
- git status
OutputOn branch master
nothing to commit, working tree clean
Continue this process as you revise your article. Make changes, verify them, add the file, and commit the changes with a detailed message. Commit your changes as often as you'd like. You might commit after you finish each draft, or right before you do a major rework of your article's structure.
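Once Git is already tracking a file, you can fold the add step into the commit with the -a flag, which stages every modified tracked file before committing. A convenience sketch (the message is just an example; note that -a does not pick up brand-new, untracked files):

```shell
# Stage all modified tracked files and commit them in one step
git commit -a -m "revised the introduction"
```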
If you send a draft of a document to someone else and they make changes to it, take their copy and replace your file with theirs. Then use git diff to quickly see the changes they made. Git sees the changes whether you typed them in directly or replaced the file with one you downloaded from the web, email, or elsewhere.
Now, let's look at managing the versions of your article.
Sometimes it's helpful to look at a previous version of a document. Whenever you used git commit, you supplied a helpful message that summarized what you did.
The git log command shows you the commit history of your repository. Every change you've committed has an entry in the log.
- git log
Outputcommit 1fbfc2173f3cec0741e0a6b21803fbd0be511bc4
Author: Sammy Shark <sammy@digitalocean>
Date: Thu Sep 19 16:35:41 2019 -0500
adicionada seção de pré-requisitos
commit 95fed849b0205c49eda994fff91ec03642d59c79
Author: Sammy Shark <sammy@digitalocean>
Date: Thu Sep 19 16:32:34 2019 -0500
Adicionar arquivo gitignore e versão inicial do artigo
Each commit has a specific identifier. You use this number to reference a specific commit's changes. You only need the first few characters of the identifier, though. The git log --oneline command gives you a condensed version of the log with shorter identifiers:
- git log --oneline
Output1fbfc21 adicionada seção de pré-requisitos
95fed84 Adicionar arquivo gitignore e versão inicial do artigo
To view the initial version of your file, use git show and the commit identifier. The identifiers in your repository will be different from the ones in these examples.
- git show 95fed84 artigo.md
The output shows the commit's details, as well as the changes that happened in that commit:
Outputcommit 95fed849b0205c49eda994fff91ec03642d59c79
Author: Sammy Shark <sammy@digitalocean>
Date: Thu Sep 19 16:32:34 2019 -0500
Adicionar arquivo gitignore e versão inicial do artigo
diff --git a/artigo.md b/artigo.md
new file mode 100644
index 0000000..77b081c
--- /dev/null
+++ b/artigo.md
@@ -0,0 +1,7 @@
+# How To Use Git to Manage Your Writing Project
+
+### Introduction
+
+Version control isn't just for code. It's for anything you want to track, including content. Using [Git](https://git-scm.com) to manage your next writing project lets you view multiple drafts at the same time, see differences between those drafts, and even roll back to a previous version. And if you're comfortable doing so, you can share your work with others on GitHub or other central Git repositories.
+
+In this tutorial, you'll use Git to manage a small Markdown document. You'll store an initial version, commit it, make changes, view the difference between those changes, and review the previous version. When you're done, you'll have a workflow you can apply to your own writing projects.
To see the file itself, modify the command slightly. Instead of a space between the commit identifier and the file, use :./ like this:
- git show 95fed84:./artigo.md
You'll see the contents of the file at that revision:
Output### Introduction
Version control isn't just for code. It's for anything you want to track, including content. Using [Git](https://git-scm.com) to manage your next writing project lets you view multiple drafts at the same time, see differences between those drafts, and even roll back to a previous version. And if you're comfortable doing so, you can share your work with others on GitHub or other central Git repositories.
In this tutorial, you'll use Git to manage a small Markdown document. You'll store an initial version, commit it, make changes, view the difference between those changes, and review the previous version. When you're done, you'll have a workflow you can apply to your own writing projects.
You can save that output to a file if you need it for something else:
- git show 95fed84:./artigo.md > old_artigo.md
As you make more changes, your log will grow, and you'll be able to review all of the changes you've made to your article over time.
In this tutorial, you used a local Git repository to track changes in your writing project. You can use this approach to manage individual articles, all the posts on your blog, or even your next novel. And if you push your repository to GitHub, you can invite others to help you edit your work.
Version control systems let you contribute to and collaborate on software development projects. Git is one of the most popular version control systems available today.
This tutorial will walk you through installing and configuring Git on an Ubuntu 18.04 server. For a more detailed version of this tutorial, with more thorough explanations of each step, please refer to How To Install Git on Ubuntu 18.04.
Logged into your Ubuntu 18.04 server as a sudo non-root user, first update your default packages.
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command and receiving output similar to what's shown:
- git --version
Outputgit version 2.17.1
Now that you have Git installed, and to prevent warnings, you should configure it with your information.
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
If you need to edit this file, you can use a text editor such as nano:
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Here are links to more detailed tutorials that are related to this guide:
Version control systems are increasingly indispensable in modern software development, because version control lets you keep track of your software at the source level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Git is one of the most popular version control systems available today. Many projects' files are kept in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate sharing and collaboration on software development projects.
In this guide, we will demonstrate how to install and configure Git on an Ubuntu 18.04 server. We will cover how to install the software in two different ways, each of which has its own benefits depending on your specific needs.
To complete this tutorial, you will need a non-root user with sudo privileges on an Ubuntu 18.04 server. To learn how to achieve this setup, follow our manual initial server setup guide or run our automated script.
With your server and user set up, you are ready to begin.
Ubuntu's default repositories provide a fast method for installing Git. Note that the version you install via these repositories may be older than the newest version currently available. If you need the latest release, consider moving to the next section of this tutorial to learn how to install and compile Git from source.
First, use the apt package management tools to update your local package index. With the update complete, you can download and install Git:
- sudo apt update
- sudo apt install git
Puede confirmar que instaló Git de forma correcta ejecutando el siguiente comando:
- git --version
Outputgit version 2.17.1
Once you have successfully installed Git, you can move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and give you some control over the options you include if you wish to customize.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages.
- sudo apt update
- sudo apt install make libssl-dev libghc-zlib-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
After you have installed the necessary dependencies, you can get the version of Git you want by visiting the Git project's mirror on GitHub, available at the following URL:
https://github.com/git/git
From here, be sure that you are on the master branch. Click on the Tags link and select your desired Git version. Unless you have a reason for downloading a release candidate (marked as rc) version, try to avoid these, as they may be unstable.
Next, on the right side of the page, click on the Clone or download button, then right-click on Download ZIP and copy the link address that ends in .zip.
Back on your Ubuntu 18.04 server, move into the tmp directory to download temporary files.
- cd /tmp
From there, you can use the wget command to download the copied zip file link. We'll specify a new name for the file: git.zip.
- wget https://github.com/git/git/archive/v2.18.0.zip -O git.zip
Unzip the file you downloaded and move into the resulting directory by typing:
- unzip git.zip
- cd git-*
Now you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
To make sure the install completed successfully, you can type git --version; you should receive relevant output specifying the currently installed version of Git.
Now that you have Git installed, if you ever want to upgrade to a later version, you can clone the repository and then build and install. To find the URL to use for the clone operation, navigate to the branch or tag you want on the project's GitHub page, and then copy the clone URL on the right side.
At the time of writing, the relevant URL is:
https://github.com/git/git.git
Change into your home directory and run git clone on the URL you just copied:
- cd ~
- git clone https://github.com/git/git.git
This will create a new directory within your current directory where you can rebuild the package and reinstall the newer version, just as you did above. This will overwrite your older version with the new one:
- cd git
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
When this is finished, you can be sure that your version of Git is up to date.
Now that you have Git installed, you should configure it so that the commit messages you generate will contain your correct information.
This can be achieved by using the git config command. Specifically, we need to provide our name and email address, because Git embeds this information into each commit we make. We can add this information by typing:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
We can see all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Your Name
user.email=youremail@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor, like this:
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
There are many other options you can set, but these are the two essential ones. If you skip this step, you'll likely see warnings when you commit with Git. This makes more work for you, because you will then have to amend the commits you have made with the corrected information.
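Among those other options, two that are commonly set alongside the name and email are the editor Git opens for commit messages and the name given to the first branch in new repositories. Both core.editor and init.defaultBranch are standard git config keys; the values below are only examples:

```shell
# Use nano for commit messages instead of the system default editor
git config --global core.editor "nano"

# Name the first branch in newly created repositories "main"
git config --global init.defaultBranch main
```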
At this point, you should have Git installed and ready to use on your system.
For more information about how to use Git, check out these articles and series:
Version control systems help you share and collaborate on software development projects. Git is one of the most popular version control systems available today.
This tutorial will walk you through installing and configuring Git on an Ubuntu 18.04 server. For a more detailed version of this tutorial, with more thorough explanations of each step, please refer to How To Install Git on Ubuntu 18.04.
Logged into your Ubuntu 18.04 server as a sudo non-root user, first update your default packages.
- sudo apt update
- sudo apt install git
You can confirm that your installation is working correctly by running this command and receiving output similar to the following:
- git --version
Outputgit version 2.17.1
Now that you have Git installed, to prevent warnings, you should configure it with your information.
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
If you need to edit this file, you can use a text editor such as nano:
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Here are links to more detailed tutorials related to this guide:
Version control systems are increasingly indispensable in modern software development, since version control lets you keep track of your software at the source level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Git is one of the most popular version control systems available today. Many projects' files are kept in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate sharing and collaboration on software development projects.
In this guide, we will demonstrate how to install and configure Git on an Ubuntu 18.04 server. We will cover how to install the software in two different ways, each of which has its own benefits depending on your specific needs.
To complete this tutorial, you will need a non-root user with sudo privileges on an Ubuntu 18.04 server. To learn how to achieve this setup, follow our initial server setup guide or run our automated script.
With your server and user set up, you are ready to begin.
Ubuntu's default repositories provide a fast method for installing Git. Note that the version you install via these repositories may be older than the newest version currently available. If you need the latest release, consider moving to the next section of this tutorial to learn how to install and compile Git from source.
First, use the apt package management tools to update your local package index. With the update complete, you can download and install Git:
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command:
- git --version
Outputgit version 2.17.1
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and give you some control over the options you include if you wish to customize.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages.
- sudo apt update
- sudo apt install make libssl-dev libghc-zlib-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
After installing the necessary dependencies, you can get the version of Git you want by visiting the Git project's mirror on GitHub, available at the following URL:
https://github.com/git/git
From here, be sure that you are on the master branch. Click on the Tags link and select your desired Git version. Unless you have a reason for downloading a release candidate (marked as rc) version, try to avoid these, as they may be unstable.
Next, on the right side of the page, click on the Clone or download button, then click Download ZIP and copy the link address that ends in .zip.
Back on your Ubuntu 18.04 server, move into the tmp directory to download temporary files.
- cd /tmp
From there, you can use the wget command to download the copied zip file link. We'll give the file a new name: git.zip.
- wget https://github.com/git/git/archive/v2.18.0.zip -O git.zip
Unzip the file you downloaded and move into the resulting directory by typing:
- unzip git.zip
- cd git-*
Now you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
To make sure the install completed successfully, type git --version; you should receive relevant output specifying the currently installed version of Git.
Now that you have Git installed, if you ever want to upgrade to a later version, you can clone the repository and then build and install. To find the URL to use for the clone operation, navigate to the branch or tag you want on the project's GitHub page, and then copy the clone URL on the right side.
At the time of this writing, the relevant URL is:
https://github.com/git/git.git
Change into your home directory and run git clone on the URL you just copied:
- cd ~
- git clone https://github.com/git/git.git
This will create a new directory within your current directory where you can rebuild the package and reinstall the newer version, just as you did above. This will overwrite your older version with the new one:
- cd git
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
When this is finished, you can be sure that your version of Git is up to date.
Now that you have Git installed, you should configure it so that the commit messages you generate will contain your correct information.
This can be achieved by using the git config command. Specifically, we need to provide our name and email address, because Git embeds this information into each commit we make. We can go ahead and add this information by typing:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
We can see all of the configuration items that have been set by typing:
- git config --list
Outputuser.name=Your Name
user.email=youremail@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor, like this:
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
There are many other options you can set, but these two are essential. If you skip this step, you'll likely see warnings when you commit with Git. This makes more work for you, because you will then have to amend the commits you have made with the corrected information.
You should now have Git installed and ready to use on your system.
To learn more about how to use Git, check out these articles and series:
Version control systems help you share and collaborate on software development projects. Git is one of the most popular version control systems available today.
This tutorial will walk you through installing and configuring Git on an Ubuntu 18.04 server. For a more detailed version of this tutorial, with more thorough explanations of each step, please refer to How To Install Git on Ubuntu 18.04.
Logged into your Ubuntu 18.04 server as a sudo non-root user, first update your default packages.
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running this command and receiving output similar to the following:
- git --version
Outputgit version 2.17.1
Now that you have Git installed, to prevent warnings, you should configure it with your information.
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
If you need to edit this file, you can use a text editor such as nano:
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Здесь представлены ссылки на более подробные обучающие руководства, связанные с настоящим руководством:
Version control systems are indispensable in modern software development, because version control lets you track your software at the source-code level. You can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
One of the most popular version control systems today is Git. Many projects' files are kept in a Git repository, and sites like GitHub, GitLab, and Bitbucket make it easier to share and collaborate on software development projects.
In this guide, we will demonstrate how to install and configure Git on an Ubuntu 18.04 server. We will cover how to install the software in two different ways, each of which has its own benefits depending on your specific needs.
To follow this tutorial, you will need a non-root user with sudo
privileges on an Ubuntu 18.04 server. To perform the required initial setup, follow our manual initial server setup guide or run our automated script.
After setting up your server and user, you are ready to continue.
Ubuntu's default repositories provide a fast method for installing Git. Note that the version you install via these repositories may be older than the newest version currently available. If you need the latest release, move on to the next section of this tutorial to learn how to install and compile Git from source.
First, use the apt package management tools to update your local package index. With the update complete, you can download and install Git:
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command:
- git --version
Output
git version 2.17.1
Upon successful installation of Git, you can move on to the Configuring Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and give you some control over the options you include, should you wish to customize.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so you can update your local package index and then install the packages:
- sudo apt update
- sudo apt install make libssl-dev libghc-zlib-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
After you have installed the necessary dependencies, you can go ahead and get the version of Git you want by visiting the Git project's mirror on GitHub, available at the following URL:
https://github.com/git/git
Once there, be sure that you are on the master
branch. Click on the Tags link and select your desired Git version. Unless you have a reason to download a release candidate (marked rc), try to avoid these, as they may be unstable.
Next, on the right side of the page, click the Clone or download button, then right-click Download ZIP and copy the link address, which ends in .zip
.
Back on your Ubuntu 18.04 server, change into the tmp
directory to download temporary files:
- cd /tmp
From here, you can use the wget
command to download the archive link you copied. We will specify a new name for the file: git.zip
.
- wget https://github.com/git/git/archive/v2.18.0.zip -O git.zip
Unzip the file that you downloaded and move into the resulting directory:
- unzip git.zip
- cd git-*
Now you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
To ensure that the installation was successful, you can type git --version
, and you should receive output indicating the currently installed version of Git.
Now that you have Git installed, if you ever want to upgrade to a later version, you can clone the repository and then build and install. To find the URL to use for the clone operation, navigate to the branch or tag you want on the project's GitHub page and copy the clone URL on the right side:
At the time of writing, the relevant URL is:
https://github.com/git/git.git
Change into your home directory and run git clone
on the URL you just copied:
- cd ~
- git clone https://github.com/git/git.git
This will create a new directory within your current directory, where you can rebuild the project and reinstall the newer version, just as you did above. This will overwrite your older version with the new one:
- cd git
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
After completing these steps, you can be sure that your version of Git is up to date.
Now that you have Git installed, you should configure it so that the generated commit messages contain your correct information.
You can do this by using the git config
command. Specifically, you need to provide your name and email address, because Git embeds this information into each commit you make. You can add this information with the following commands:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
You can see all of the configuration items that have been set by typing:
- git config --list
Output
user.name=Your Name
user.email=youremail@domain.com
...
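Because this listing uses a simple key=value format, one line per setting, it is easy to filter with standard text tools. This sketch mirrors the sample output above in a shell variable; on a real system you would pipe `git config --list` itself:

```shell
# Sketch: `git config --list` prints one key=value pair per line, so
# standard text tools can filter it. The sample here mirrors the listing
# above; a real run would pipe `git config --list` instead.
config_list='user.name=Your Name
user.email=youremail@domain.com'
# Pull out just the user.email value:
email=$(printf '%s\n' "$config_list" | sed -n 's/^user\.email=//p')
echo "$email"
```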
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor:
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
There are many other options you can set, but these two are essential. If you skip this step, you will likely see warnings when you commit to Git. This makes more work for you, because you will then have to revise the commits you have made with the corrected information.
You have installed Git and are ready to use it on your system.
To learn more about using Git, check out these articles and series:
I’d like to set up my own GitLab EE server with the prebuilt app from DigitalOcean, served over HTTPS. The application works, but I can’t get it to use the Let’s Encrypt certificate created in the DigitalOcean Security section.
The certificate being served is apparently self-signed, and therefore triggers a “The connection to this website isn’t secure” warning in the browser.
My GitLab configuration is the following:
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = "/etc/gitlab/ssl/git.domain.tld.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/git.domain.tld.key"
external_url "https://git.domain.tld"
Do you see what I’m doing wrong here?
Does hosting GitLab on DO require a per-user paid subscription with GitLab, or can I get away with a $10/mo Droplet rather than their more expensive cloud-hosted plans?
Use will be a couple dozen projects and maybe 10 users over time.
Thanks.
Basically, I want to try to create my first CI/CD pipeline.
Version control has become an indispensable tool in modern software development. Version control systems allow you to keep track of your software at the source-code level. You can track changes, revert to previous stages, and branch the code base to create alternate versions of files and directories.
One of the most popular version control systems is git
. Many projects maintain their files in a Git repository, and sites like GitHub and Bitbucket have made sharing and contributing to code with Git easier than ever.
In this guide, we will demonstrate how to install Git on a CentOS 7 server. We will cover how to install the software in two different ways, each with its own benefits, along with how to set up Git so you can begin collaborating right away.
Before you begin with this guide, there are a few steps that need to be completed first.
You will need a CentOS 7 server installed and configured with a non-root user that has sudo
privileges. If you haven't done this yet, you can run through steps 1–4 of the Initial Server Setup with CentOS 7 guide to create this account.
Once you have your non-root user, you can use it to SSH into your CentOS server and continue with the installation of Git.
The two most common ways to install Git are described in this section. Each option has its own advantages and disadvantages, and the choice you make will depend on your own needs. For example, users who want to keep up with updates to the Git software will likely want to use yum
to install it, while users who need features provided by a specific version of Git will want to build that version from source.
The easiest way to install Git and have it ready to use is to use CentOS's default repositories. This is the fastest method, but the Git version installed this way may be older than the newest version available. If you need the latest release, consider compiling git
from source (the steps for this method can be found further down in this tutorial).
Use yum
, CentOS's native package manager, to search for and install the latest git
package available in CentOS's repositories:
sudo yum install git
If the command completes without error, you will have git
downloaded and installed. To double-check that it is working correctly, try running Git's built-in version check:
git --version
If that check produced a Git version number, you can now move on to Setting Up Git, found further down in this article.
If you want to download the latest release of Git available, or simply want more flexibility in the installation process, the best method for you is to compile the software from source. This takes longer and will not be updated and maintained through the yum
package manager, but it will allow you to download a newer version than what is available through the CentOS repositories, and it will give you some control over the options you can include.
Before you begin, you'll need to install the software that git
depends on. These dependencies are all available in the default CentOS repositories, along with the tools we need to build a binary from source:
sudo yum groupinstall "Development Tools"
sudo yum install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel
After you have installed the necessary dependencies, you can go ahead and find the version of Git that you want by visiting the project's releases page on GitHub.
The version at the top of the list is the most recent release. If it does not have an -rc
(short for "Release Candidate") in the name, that means it is a stable release and is safe for use. Click on the version you want to download to be taken to that version's release page. Then right-click on the Source code (tar.gz) button and copy the link to your clipboard.
Now we will use the wget
command on our CentOS server to download the source archive from the link we copied, renaming it to git.tar.gz
in the process so it is easier to work with.
Note: The URL that you copied may differ from the one below, since the release you download may be different.
wget https://github.com/git/git/archive/v2.1.2.tar.gz -O git.tar.gz
Once the download is complete, we can unpack the source archive using tar
. We'll need a few extra flags to make sure the unpacking is done correctly: z
decompresses the archive (since all .gz files are compressed), x
extracts the individual files and folders from the archive, and f
tells tar
that we are declaring a file name to work with.
tar -zxf git.tar.gz
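To see what those flags do in isolation, here is a small self-contained round trip using a throwaway archive in a temporary directory, rather than the Git sources:

```shell
# Sketch: a scratch archive demonstrating the -z, -x, and -f flags.
# Everything happens in a temporary directory, not in the Git sources.
workdir=$(mktemp -d)
cd "$workdir"
mkdir demo-1.0
echo "hello" > demo-1.0/README
tar -czf demo.tar.gz demo-1.0   # -c creates, -z gzips, -f names the archive
rm -r demo-1.0                  # remove the original folder
tar -zxf demo.tar.gz            # -z gunzips, -x extracts, -f names the archive
cat demo-1.0/README             # prints "hello": the folder is back
```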
This will unpack the compressed source to a folder named after the version of Git we downloaded (in this example, the version is 2.1.2, so the folder is named git-2.1.2
). We'll need to move into that folder to begin configuring our build. Instead of worrying about the full version name in the folder, we can use a wildcard (*
) to save ourselves some trouble when moving into it.
cd git-*
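The wildcard trick can be illustrated with a throwaway directory; note that it only resolves cleanly when a single directory matches the pattern:

```shell
# Sketch: how the git-* wildcard resolves to the versioned folder name,
# using a throwaway directory instead of the real source tree.
workdir=$(mktemp -d)
cd "$workdir"
mkdir git-2.1.2
cd git-*          # the shell expands git-* to git-2.1.2 before cd runs;
                  # this only works cleanly with a single matching folder
resolved=$(basename "$PWD")
echo "$resolved"  # prints "git-2.1.2"
```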
Once we are in the source folder, we can begin the compilation process. This starts with a few pre-compilation checks for things like software dependencies and hardware configurations. We can check for everything we need with the configure
script generated by make configure
. This script will also use a --prefix
flag to declare /usr/local
(the default program folder for Linux platforms) as the appropriate destination for the new binary, and it will create a Makefile
to be used in the following step.
make configure
./configure --prefix=/usr/local
Makefiles are scriptable configuration files processed by the make
utility. Our Makefile will tell make
how to compile a program and link it into our CentOS installation so that we can execute the program properly. With a Makefile in place, we can now execute make install
(with sudo
privileges) to compile the source code into a working program and install it onto our server:
sudo make install
Git should now be built and installed on your CentOS 7 server. To double-check that it is working correctly, try running Git's built-in version check:
git --version
If that check produced a Git version number, you can move on to Setting Up Git, below.
Now that you have git
installed, you will need to submit some information about yourself so that commit messages are generated with the correct information attached. To do this, use the git config
command to provide the name and email address that you would like to have embedded in your commits:
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
To confirm that these configurations were added successfully, you can see all of the configuration items that have been set by typing:
git config --list
user.name=Your Name
user.email=you@example.com
This configuration will save you the trouble of seeing an error message and having to revise commits after submitting them.
You should now have git
installed and ready to use on your system. To learn more about how to use Git, check out these more detailed articles:
This article supplements a webinar series on doing CI/CD with Kubernetes. The series discusses how to take a cloud native approach to building, testing, and deploying applications, covering release management, cloud native tools, service meshes, and CI/CD tools that can be used with Kubernetes. It is designed to help developers and businesses that are interested in integrating CI/CD best practices with Kubernetes into their workflows.
This tutorial includes the concepts and commands from the last session of the series, GitOps Tool Sets on Kubernetes with CircleCI and Argo CD.
Warning: The procedures in this tutorial are meant for demonstration purposes only. As a result, they don’t follow the best practices and security measures necessary for a production-ready deployment.
Using Kubernetes to deploy your application can provide significant infrastructural advantages, such as flexible scaling, management of distributed components, and control over different versions of your application. However, with that increased control comes increased complexity, which can make CI/CD practices such as cooperative code development, version control, change logging, and automated deployment and rollback particularly difficult to manage manually. To account for these difficulties, DevOps engineers have developed several methods of Kubernetes CI/CD automation, including the system of tooling and best practices called GitOps. GitOps, as proposed by Weaveworks in a 2017 blog post, uses Git as a “single source of truth” for CI/CD processes, integrating code changes in a single, shared repository per project and using pull requests to manage infrastructure and deployment.
There are many tools that use Git as a focal point for DevOps processes on Kubernetes, including Gitkube developed by Hasura, Flux by Weaveworks, and Jenkins X, the topic of the second webinar in this series. In this tutorial, you will run through a demonstration of two additional tools that you can use to set up your own cloud-based GitOps CI/CD system: The Continuous Integration tool CircleCI and Argo CD, a declarative Continuous Delivery tool.
CircleCI uses GitHub or Bitbucket repositories to organize application development and to automate building and testing on Kubernetes. By integrating with the Git repository, CircleCI projects can detect when a change is made to the application code and automatically test it, sending notifications of the change and the results of testing over email or other communication tools like Slack. CircleCI keeps logs of all these changes and test results, and the browser-based interface allows users to monitor the testing in real time, so that a team always knows the status of their project.
As a sub-project of the Argo workflow management engine for Kubernetes, Argo CD provides Continuous Delivery tooling that automatically synchronizes and deploys your application whenever a change is made in your GitHub repository. By managing the deployment and lifecycle of an application, it provides solutions for version control, configurations, and application definitions in Kubernetes environments, organizing complex data with an easy-to-understand user interface. It can handle several types of Kubernetes manifests, including ksonnet applications, Kustomize applications, Helm charts, and YAML/json files, and supports webhook notifications from GitHub, GitLab, and Bitbucket.
In this last article of the CI/CD with Kubernetes series, you will try out these GitOps tools by:
Setting up pipeline triggers to automate application testing with CircleCI and GitHub.
Synchronizing and deploying an application from a GitHub repository with Argo CD.
By the end of this tutorial, you will have a basic understanding of how to construct a CI/CD pipeline on Kubernetes with a GitOps tool set.
To follow this tutorial, you will need:
An Ubuntu 16.04 server with 16 GB of RAM or above. Since this tutorial is meant for demonstration purposes only, commands are run from the root account. Note that the unrestrained privileges of this account do not adhere to production-ready best practices and could affect your system. For this reason, it is suggested to follow these steps in a test environment such as a virtual machine or a DigitalOcean Droplet.
A Docker Hub Account. For an overview on getting started with Docker Hub, please see these instructions.
A GitHub account and basic knowledge of GitHub. For a primer on how to use GitHub, check out our How To Create a Pull Request on GitHub tutorial.
Familiarity with Kubernetes concepts. Please refer to the article An Introduction to Kubernetes for more details.
A Kubernetes cluster with the kubectl command line tool. This tutorial has been tested on a simulated Kubernetes cluster, set up in a local environment with Minikube, a program that allows you to try out Kubernetes tools on your own machine without having to set up a true Kubernetes cluster. To create a Minikube cluster, follow Step 1 of the second webinar in this series, Kubernetes Package Management with Helm and CI/CD with Jenkins X.
In this step, you will put together a standard CircleCI workflow that involves three jobs: testing code, building an image, and pushing that image to Docker Hub. In the testing phase, CircleCI will use pytest to test the code for a sample RSVP application. Then, it will build the image of the application code and push the image to DockerHub.
First, give CircleCI access to your GitHub account. To do this, navigate to https://circleci.com/
in your favorite web browser:
In the top right of the page, you will find a Sign Up button. Click this button, then click Sign Up with GitHub on the following page. The CircleCI website will prompt you for your GitHub credentials:
Entering your username and password here gives CircleCI the permission to read your GitHub email address, deploy keys and add service hooks to your repository, create a list of your repositories, and add an SSH key to your GitHub account. These permissions are necessary for CircleCI to monitor and react to changes in your Git repository. If you would like to read more about the requested permissions before giving CircleCI your account information, see the CircleCI documentation.
Once you have reviewed these permissions, enter your GitHub credentials and click Sign In. CircleCI will then integrate with your GitHub account and redirect your browser to the CircleCI welcome page:
Now that you have access to your CircleCI dashboard, open up another browser window and navigate to the GitHub repository for this webinar, https://github.com/do-community/rsvpapp-webinar4
. If prompted to sign in to GitHub, enter your username and password. In this repository, you will find a sample RSVP application created by the CloudYuga team. For the purposes of this tutorial, you will use this application to demonstrate a GitOps workflow. Fork this repository to your GitHub account by clicking the Fork button at the top right of the screen.
When you’ve forked the repository, GitHub will redirect you to https://github.com/your_GitHub_username/rsvpapp-webinar4
. On the left side of the screen, you will see a Branch: master button. Click this button to reveal the list of branches for this project. Here, the master branch refers to the current official version of the application. On the other hand, the dev branch is a development sandbox, where you can test changes before promoting them to the official version in the master branch. Select the dev branch.
Now that you are in the development section of this demonstration repository, you can start setting up a pipeline. CircleCI requires a YAML configuration file in the repository that describes the steps it needs to take to test your application. The repository you forked already has this file at .circleci/config.yml
; in order to practice setting up CircleCI, delete this file and make your own.
To create this configuration file, click the Create new file button and make a file named .circleci/config.yml
:
Once you have this file open in GitHub, you can configure the workflow for CircleCI. To learn about this file’s contents, you will add the sections piece by piece. First, add the following:
version: 2
jobs:
test:
machine:
image: circleci/classic:201808-01
docker_layer_caching: true
working_directory: ~/repo
. . .
In the preceding code, version
refers to the version of CircleCI that you will use. jobs:test:
means that you are setting up a test for your application, and machine:image:
indicates where CircleCI will do the testing, in this case a virtual machine based on the circleci/classic:201808-01
image.
Next, add the steps you would like CircleCI to take during the test:
. . .
steps:
- checkout
- run:
name: install dependencies
command: |
sudo rm /var/lib/dpkg/lock
sudo dpkg --configure -a
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:fkrull/deadsnakes
sudo apt-get update
sleep 5
sudo rm /var/lib/dpkg/lock
sudo dpkg --configure -a
sudo apt-get install python3.5
sleep 5
python -m pip install -r requirements.txt
# run tests!
# this example uses Django's built-in test-runner
# other common Python testing frameworks include pytest and nose
# https://pytest.org
# https://nose.readthedocs.io
- run:
name: run tests
command: |
python -m pytest tests/test_rsvpapp.py
. . .
The steps of the test are listed out after steps:
, starting with - checkout
, which will checkout your project’s source code and copy it into the job’s space. Next, the - run: name: install dependencies
step runs the listed commands to install the dependencies required for the test. In this case, you will be using the Django Web framework’s built-in test-runner and the testing tool pytest
. After CircleCI downloads these dependencies, the -run: name: run tests
step will instruct CircleCI to run the tests on your application.
With the test
job completed, add in the following contents to describe the build
job:
. . .
build:
machine:
image: circleci/classic:201808-01
docker_layer_caching: true
working_directory: ~/repo
steps:
- checkout
- run:
name: build image
command: |
docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
push:
machine:
image: circleci/classic:201808-01
docker_layer_caching: true
working_directory: ~/repo
steps:
- checkout
- run:
name: Push image
command: |
docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
echo $DOCKERHUB_PASSWORD | docker login --username $DOCKERHUB_USERNAME --password-stdin
docker push $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1
. . .
As before, machine:image:
means that CircleCI will build the application in a virtual machine based on the specified image. Under steps:
, you will find - checkout
again, followed by - run: name: build image
. This means that CircleCI will build a Docker image tagged with the rsvpapp
name under your Docker Hub username. You will set the $DOCKERHUB_USERNAME
environment variable in the CircleCI interface, which this tutorial will cover after the YAML file is complete.
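As a quick illustration of how the image reference in the build step is assembled, this sketch composes it from stand-in values; in a real job, CircleCI injects $DOCKERHUB_USERNAME from the context and $CIRCLE_SHA1 from the triggering commit:

```shell
# Sketch: how `docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .`
# derives its tag. The values below are stand-ins; CircleCI injects the
# real ones at runtime.
DOCKERHUB_USERNAME="your_dockerhub_user"
CIRCLE_SHA1="1a2b3c4d"
image_ref="$DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1"
echo "$image_ref"   # prints your_dockerhub_user/rsvpapp:1a2b3c4d
```

Tagging each image with the commit SHA makes every build traceable back to the exact commit that produced it.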
After the build
job is done, the push
job will push the resulting image to your Docker Hub account.
Finally, add the following lines to determine the workflows
that coordinate the jobs you defined earlier:
. . .
workflows:
version: 2
build-deploy:
jobs:
- test:
context: DOCKERHUB
filters:
branches:
only: dev
- build:
context: DOCKERHUB
requires:
- test
filters:
branches:
only: dev
- push:
context: DOCKERHUB
requires:
- build
filters:
branches:
only: dev
These lines ensure that CircleCI executes the test
, build
, and push
jobs in the correct order. context: DOCKERHUB
refers to the context in which the test will take place. You will create this context after finalizing this YAML file. The only: dev
line restrains the workflow to trigger only when there is a change to the dev branch of your repository, and ensures that CircleCI will build and test the code from dev.
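Since all three jobs must carry the same branch filter for the workflow to behave as described, a quick local sanity check can confirm it. In this sketch, a heredoc stands in for the workflows section of your .circleci/config.yml; on a real checkout you would grep the actual file:

```shell
# Sketch: check that every job in the workflow filters on the dev branch.
# The heredoc stands in for the workflows section of .circleci/config.yml.
cat > /tmp/workflow-snippet.yml <<'EOF'
jobs:
  - test:
      filters:
        branches:
          only: dev
  - build:
      filters:
        branches:
          only: dev
  - push:
      filters:
        branches:
          only: dev
EOF
dev_filters=$(grep -c 'only: dev' /tmp/workflow-snippet.yml)
echo "$dev_filters"   # prints 3: one filter per job
```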
Now that you have added all the code for the .circleci/config.yml
file, its contents should be as follows:
version: 2
jobs:
test:
machine:
image: circleci/classic:201808-01
docker_layer_caching: true
working_directory: ~/repo
steps:
- checkout
- run:
name: install dependencies
command: |
sudo rm /var/lib/dpkg/lock
sudo dpkg --configure -a
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:fkrull/deadsnakes
sudo apt-get update
sleep 5
sudo rm /var/lib/dpkg/lock
sudo dpkg --configure -a
sudo apt-get install python3.5
sleep 5
python -m pip install -r requirements.txt
# run tests!
# this example uses Django's built-in test-runner
# other common Python testing frameworks include pytest and nose
# https://pytest.org
# https://nose.readthedocs.io
- run:
name: run tests
command: |
python -m pytest tests/test_rsvpapp.py
build:
machine:
image: circleci/classic:201808-01
docker_layer_caching: true
working_directory: ~/repo
steps:
- checkout
- run:
name: build image
command: |
docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
push:
machine:
image: circleci/classic:201808-01
docker_layer_caching: true
working_directory: ~/repo
steps:
- checkout
- run:
name: Push image
command: |
docker build -t $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1 .
echo $DOCKERHUB_PASSWORD | docker login --username $DOCKERHUB_USERNAME --password-stdin
docker push $DOCKERHUB_USERNAME/rsvpapp:$CIRCLE_SHA1
workflows:
version: 2
build-deploy:
jobs:
- test:
context: DOCKERHUB
filters:
branches:
only: dev
- build:
context: DOCKERHUB
requires:
- test
filters:
branches:
only: dev
- push:
context: DOCKERHUB
requires:
- build
filters:
branches:
only: dev
Once you have added this file to the dev branch of your repository, return to the CircleCI dashboard.
Next, you will create a CircleCI context to house the environment variables needed for the workflow that you outlined in the preceding YAML file. On the left side of the screen, you will find a SETTINGS button. Click this, then select Contexts under the ORGANIZATION heading. Finally, click the Create Context button on the right side of the screen:
CircleCI will then ask you for the name of this context. Enter DOCKERHUB
, then click Create. Once you have created the context, select the DOCKERHUB context and click the Add Environment Variable button. For the first, type in the name DOCKERHUB_USERNAME
, and in the Value enter your Docker Hub username.
Then add another environment variable, but this time, name it DOCKERHUB_PASSWORD
and fill in the Value field with your Docker Hub password.
When you’ve created the two environment variables for your DOCKERHUB context, create a CircleCI project for the test RSVP application. To do this, select the ADD PROJECTS button from the left-hand side menu. This will yield a list of GitHub projects tied to your account. Select rsvpapp-webinar4 from the list and click the Set Up Project button.
Note: If rsvpapp-webinar4 does not show up in the list, reload the CircleCI page. Sometimes it can take a moment for the GitHub projects to show up in the CircleCI interface.
You will now find yourself on the Set Up Project page:
At the top of the screen, CircleCI instructs you to create a config.yml
file. Since you have already done this, scroll down to find the Start Building button on the right side of the page. By selecting this, you will tell CircleCI to start monitoring your application for changes.
Click on the Start Building button. CircleCI will redirect you to a build progress/status page, which does not yet show any builds.
To test the pipeline trigger, go to the recently forked repository at https://github.com/your_GitHub_username/rsvpapp-webinar4
and make some changes in the dev
branch only. Since you have added the branch filter only: dev
to your .circleci/config
file, CI will build only when there is change in the dev branch. Make a change to the dev branch code, and you will find that CircleCI has triggered a new workflow in the user interface. Click on the running workflow and you will find the details of what CircleCI is doing:
With your CircleCI workflow taking care of the Continuous Integration aspect of your GitOps CI/CD system, you can install and configure Argo CD on top of your Kubernetes cluster to address Continuous Deployment.
Just as CircleCI uses GitHub to trigger automated testing on changes to source code, Argo CD connects your Kubernetes cluster into your GitHub repository to listen for changes and to automatically deploy the updated application. To set this up, you must first install Argo CD into your cluster.
First, create a namespace named argocd
:
- kubectl create namespace argocd
Within this namespace, Argo CD will run all the services and resources it needs to create its Continuous Deployment workflow.
Next, download the Argo CD manifest from the official GitHub repository for Argo:
- kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v0.9.2/manifests/install.yaml
In this command, the -n
flag directs kubectl
to apply the manifest to the namespace argocd
, and -f
specifies the file name for the manifest that it will apply, in this case the one downloaded from the Argo repository.
By using the kubectl get
command, you can find the pods that are now running in the argocd
namespace:
- kubectl get pod -n argocd
Using this command will yield output similar to the following:
NAME READY STATUS RESTARTS AGE
application-controller-6d68475cd4-j4jtj 1/1 Running 0 1m
argocd-repo-server-78f556f55b-tmkvj 1/1 Running 0 1m
argocd-server-78f47bf789-trrbw 1/1 Running 0 1m
dex-server-74dc6c5ff4-fbr5g 1/1 Running 0 1m
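A common follow-up is to confirm that every pod reports Running before proceeding. This sketch parses the STATUS column of sample output mirroring the table above; on a live cluster you would pipe the real `kubectl get pod -n argocd` output instead:

```shell
# Sketch: count pods whose STATUS column is not "Running". The sample
# text mirrors the listing above; on a live cluster, pipe the output of
# `kubectl get pod -n argocd` instead.
pods='NAME                                     READY   STATUS    RESTARTS   AGE
application-controller-6d68475cd4-j4jtj  1/1     Running   0          1m
argocd-repo-server-78f556f55b-tmkvj      1/1     Running   0          1m
argocd-server-78f47bf789-trrbw           1/1     Running   0          1m
dex-server-74dc6c5ff4-fbr5g              1/1     Running   0          1m'
# Skip the header row (NR > 1) and count non-Running pods:
not_running=$(printf '%s\n' "$pods" | awk 'NR > 1 && $3 != "Running" {n++} END {print n + 0}')
echo "$not_running"   # prints 0 when all pods are healthy
```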
Now that Argo CD is running on your cluster, download the Argo CD CLI tool so that you can control the program from your command line:
- curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v0.9.2/argocd-linux-amd64
Once you’ve downloaded the file, use chmod
to make it executable:
- chmod +x /usr/local/bin/argocd
To find the Argo CD service, run the kubectl get
command in the namespace argocd
:
- kubectl get svc -n argocd argocd-server
You will get output similar to the following:
OutputNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argocd-server ClusterIP 10.109.189.243 <none> 80/TCP,443/TCP 8m
Now, access the Argo CD API server. This server does not automatically have an external IP, so you must first expose the API so that you can access it from your browser at your local workstation. To do this, use kubectl port-forward
to forward port 8080
on your local workstation to the 80
TCP port of the argocd-server
service from the preceding output:
- kubectl port-forward svc/argocd-server -n argocd 8080:80
The output will be:
OutputForwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Once you run the port-forward
command, your command prompt will disappear from your terminal. To enter more commands for your Kubernetes cluster, open a new terminal window and log onto your remote server.
To complete the connection, use ssh
to forward the 8080
port from your local machine. First, open up an additional terminal window and, from your local workstation, enter the following command, with remote_server_IP_address
replaced by the IP address of the remote server on which you are running your Kubernetes cluster:
- ssh -L 8080:localhost:8080 root@remote_server_IP_address
To make sure that the Argo CD server is exposed to your local workstation, open up a browser and navigate to the URL localhost:8080
. You will see the Argo CD landing page:
Now that you have installed Argo CD and exposed its server to your local workstation, you can continue to the next step, in which you will connect GitHub into your Argo CD service.
To allow Argo CD to listen to GitHub and synchronize deployments to your repository, you first have to connect Argo CD to GitHub. To do this, log in to Argo CD.
By default, the password for your Argo CD account is the name of the pod for the Argo CD API server. Switch back to the terminal window that is logged into your remote server but is not handling the port forwarding. Retrieve the password with the following command:
- kubectl get pods -n argocd -l app=argocd-server -o name | cut -d'/' -f 2
You will get the name of the pod running the Argo API server:
Outputargocd-server-b686c584b-6ktwf
Enter the following command to log in from the CLI:
- argocd login localhost:8080
You will receive the following prompt:
OutputWARNING: server certificate had error: x509: certificate signed by unknown authority. Proceed insecurely (y/n)?
For the purposes of this demonstration, type y
to proceed without a secure connection. Argo CD will then prompt you for your username and password. Enter admin for username and the complete argocd-server
pod name for your password. Once you put in your credentials, you’ll receive the following message:
Output'admin' logged in successfully
Context 'localhost:8080' updated
Now that you have logged in, use the following command to change your password:
- argocd account update-password
Argo CD will ask you for your current password and the password you would like to change it to. Choose a secure password and enter it at the prompts. Once you have done this, use your new password to log in again:
- argocd relogin
Enter your password again, and you will get:
OutputContext 'localhost:8080' updated
If you were deploying an application on a cluster external to the Argo CD cluster, you would need to register the application cluster’s credentials with Argo CD. If, as is the case with this tutorial, Argo CD and your application are on the same cluster, then you will use https://kubernetes.default.svc
as the Kubernetes API server when connecting Argo CD to your application.
To demonstrate how one might register an external cluster, first get a list of your Kubernetes contexts:
- kubectl config get-contexts
You’ll get:
OutputCURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
To add a cluster, enter the following command, with the name of your cluster in place of the highlighted name:
- argocd cluster add minikube
In this case, the preceding command would yield:
OutputINFO[0000] ServiceAccount "argocd-manager" created
INFO[0000] ClusterRole "argocd-manager-role" created
INFO[0000] ClusterRoleBinding "argocd-manager-role-binding" created, bound "argocd-manager" to "argocd-manager-role"
Cluster 'minikube' added
Now that you have set up your login credentials for Argo CD and tested how to add an external cluster, move over to the Argo CD landing page and log in from your local workstation. Argo CD will direct you to the Argo CD applications page:
From here, click the Settings icon from the left-side tool bar, click Repositories, then click CONNECT REPO. Argo CD will present you with three fields for your GitHub information:
In the field for Repository URL, enter https://github.com/your_GitHub_username/rsvpapp-webinar4
, then enter your GitHub username and password. Once you’ve entered your credentials, click the CONNECT button at the top of the screen.
Once you’ve connected your repository containing the demo RSVP app to Argo CD, choose the Apps icon from the left-side tool bar, click the + button in the top right corner of the screen, and select New Application. From the Select Repository page, select your GitHub repository for the RSVP app and click next. Then choose CREATE APP FROM DIRECTORY to go to a page that asks you to review your application parameters:
The Path field designates where the YAML file for your application resides in your GitHub repository. For this project, type k8s
. For Application Name, type rsvpapp
, and for Cluster URL, select https://kubernetes.default.svc
from the dropdown menu, since Argo CD and your application are on the same Kubernetes cluster. Finally, enter default
for Namespace.
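If you prefer a declarative setup, the same parameters you entered in the UI can be expressed as an Argo CD Application manifest. The following is a sketch, assuming Argo CD's Application custom resource; your_GitHub_username is a placeholder as above:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rsvpapp
  namespace: argocd
spec:
  project: default
  source:
    # Repository URL and path entered in the UI form
    repoURL: https://github.com/your_GitHub_username/rsvpapp-webinar4
    targetRevision: HEAD
    path: k8s
  destination:
    # Argo CD and the app share a cluster, so use the in-cluster API server
    server: https://kubernetes.default.svc
    namespace: default
```

Applying a manifest like this with kubectl would create the same application that the CREATE APP FROM DIRECTORY workflow does.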
Once you have filled out your application parameters, click on CREATE at the top of the screen. A box will appear, representing your application:
After Status:, you will see that your application is OutOfSync with your GitHub repository. To deploy your application as it is on GitHub, click ACTIONS and choose Sync. After a few moments, your application status will change to Synced, meaning that Argo CD has deployed your application.
Once your application has been deployed, click your application box to find a detailed diagram of your application:
To find this deployment on your Kubernetes cluster, switch back to the terminal window for your remote server and enter:
- kubectl get pod
You will receive output with the pods that are running your app:
OutputNAME READY STATUS RESTARTS AGE
rsvp-755d87f66b-hgfb5 1/1 Running 0 12m
rsvp-755d87f66b-p2bsh 1/1 Running 0 12m
rsvp-db-54996bf89-gljjz 1/1 Running 0 12m
Next, check the services:
- kubectl get svc
You’ll find a service for the RSVP app and your MongoDB database, in addition to the number of the port from which your app is running, highlighted in the following:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
mongodb ClusterIP 10.102.150.54 <none> 27017/TCP 25m
rsvp NodePort 10.106.91.108 <none> 80:31350/TCP 25m
You can find your deployed RSVP app by navigating to your_remote_server_IP_address:app_port_number
in your browser, using the preceding highlighted number for app_port_number
:
Now that you have deployed your application using Argo CD, you can test your Continuous Deployment system and adjust it to automatically sync with GitHub.
With Argo CD set up, test out your Continuous Deployment system by making a change in your project and triggering a new build of your application.
In your browser, navigate to https://github.com/your_GitHub_username/rsvpapp-webinar4
, click into the master branch, and update the k8s/rsvp.yaml
file to deploy your app using the image built by CircleCI as a base. Add dev
after image: nkhare/rsvpapp:
, as shown in the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rsvp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rsvp
  template:
    metadata:
      labels:
        app: rsvp
    spec:
      containers:
      - name: rsvp-app
        image: nkhare/rsvpapp:dev
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /
            port: 5000
          periodSeconds: 30
          timeoutSeconds: 1
          initialDelaySeconds: 50
        env:
        - name: MONGODB_HOST
          value: mongodb
        ports:
        - containerPort: 5000
          name: web-port
. . .
Instead of pulling the original image from Docker Hub, Argo CD will now use the dev image created in the Continuous Integration system to build the application.
Commit the change, then return to the Argo CD UI. You will notice that nothing has changed yet; this is because you have not activated automatic synchronization and must sync the application manually.
To manually sync the application, click the blue circle in the top right of the screen, and click Sync. A new menu will appear, with a field to name your new revision and a checkbox labeled PRUNE:
Clicking this checkbox will ensure that, once Argo CD spins up your new application, it will destroy the outdated version. Click the PRUNE box, then click SYNCHRONIZE at the top of the screen. You will see the old elements of your application spinning down, and the new ones spinning up with your CircleCI-made image. If the new image included any changes, you would find these new changes reflected in your application at the URL your_remote_server_IP_address:app_port_number
.
As mentioned before, Argo CD also has an auto-sync option that will incorporate changes into your application as you make them. To enable this, open up your terminal for your remote server and use the following command:
- argocd app set rsvpapp --sync-policy automated
To make sure that revisions are not accidentally deleted, the default for automated sync has prune turned off. To turn automated pruning on, simply add the --auto-prune
flag at the end of the preceding command.
Now that you have added Continuous Deployment capabilities to your Kubernetes cluster, you have completed the demonstration GitOps CI/CD system with CircleCI and Argo CD.
In this tutorial, you created a pipeline with CircleCI that triggers tests and builds updated images when you change code in your GitHub repository. You also used Argo CD to deploy an application, automatically incorporating the changes integrated by CircleCI. You can now use these tools to create your own GitOps CI/CD system that uses Git as its organizing theme.
If you’d like to learn more about Git, check out our An Introduction to Open Source series of tutorials. To explore more DevOps tools that integrate with Git repositories, take a look at How To Install and Configure GitLab on Ubuntu 18.04.
This makes me think that the “user” (the script being run) doesn’t have permission to access git. I can manually run “git pull” from the directory in PuTTY successfully, but when using PHP with shell_exec(‘git pull’), it returns that error.
I assume the solution needs to involve setting user permissions in the line before calling git pull, but I’m unsure what this shell snippet should be.
Thanks very much for your help; I’m brand new to DigitalOcean, and cloud computing in general.
Free and open source, Git is a distributed version control system that makes collaborative software projects more manageable. Many projects maintain their files in a Git repository, and sites like GitHub have made sharing and contributing to code simple and effective.
Open-source projects that are hosted in public repositories benefit from contributions made by the broader developer community through pull requests, which request that a project accept the changes you have made to its code repository.
This tutorial will guide you through making a pull request to a Git repository through the command line so that you can contribute to open-source software projects.
You should have Git installed on your local machine. You can check whether Git is installed on your computer and go through the installation process for your operating system by following this guide.
You will also need to have or create a GitHub account. You can do so through the GitHub website, github.com, and can either log in or create your account.
Finally, you should identify an open-source software project to contribute to. You can become more familiar with open-source projects by reading this introduction.
A repository, or repo for short, is essentially a project’s main folder. The repository contains all of the relevant project files, including documentation, and it also stores the revision history for each file. On GitHub, repositories can have multiple collaborators and can be either public or private.
To work on an open-source project, you will first need to make your own copy of the repository. To do this, you should fork the repository and then clone it so that you have a local working copy.
You can fork a repository by navigating to the GitHub URL of the open-source project you would like to contribute to.
GitHub repository URLs reference the username associated with the repository’s owner, as well as the repository name. For example, DigitalOcean Community is the owner of the cloud_haiku project repository, so the GitHub URL for that project is:
https://github.com/do-community/cloud_haiku
In the example above, do-community is the username and cloud_haiku is the repository name.
Once you have identified the project you would like to contribute to, you can navigate to the URL, which will be formatted like this:
https://github.com/nome-do-usuário/repositório
Or you can search for the project using the GitHub search bar.
Once you’re on the main page of the repository, you’ll see a “Fork” button on the upper right-hand side of the page, underneath your user icon:
Click on the fork button to start the forking process. Within your browser window, you’ll receive feedback like this:
When the process is done, your browser will go to a screen similar to the repository image above, except that at the top you will see your username before the repository name, and the URL will also show your username before the repository name.
So, in the example above, instead of do-community / cloud_haiku at the top of the page, you’ll see seu-nome-de-usuário / cloud_haiku, and the new URL will look like this:
https://github.com/seu-nome-de-usuário/cloud_haiku
With the repository forked, you’re ready to clone it so that you have a local working copy of the code base.
To make your own local copy of the repository you would like to contribute to, let’s first open up a terminal window.
We’ll use the git clone
command along with the URL that points to your fork of the repository.
This URL will be similar to the URL above, except now it will end with .git
. In the cloud_haiku example above, the URL will look like this:
https://github.com/seu-nome-de-usuário/cloud_haiku.git
You can alternatively copy the URL by using the green “Clone or download” button from the page of the repository you just forked. Once you click the button, you’ll be able to copy the URL by clicking the clipboard button next to the URL:
Once we have the URL, we’re ready to clone the repository. To do this, we’ll combine the git clone
command with the repository URL from the command line in a terminal window:
- git clone https://github.com/seu-nome-de-usuário/repositório.git
Now that you have a local copy of the code, we can move on to creating a new branch on which we will work with the code.
Whenever you work on a collaborative project, you and the other programmers contributing to the repository will have different ideas for new features or fixes at once. Some of these new features won’t take significant time to implement, but some of them will be ongoing. Because of this, it’s important to branch the repository so that you can manage the workflow, isolate your code, and control which features make it back to the main branch of the project repository.
The default main branch of a project repository is usually called the master branch. A common recommended practice is to consider anything on the master branch as deployable for others to use at any time.
When creating a new branch, it’s very important that you create it off the master branch. You should also make sure that your branch name is descriptive. Rather than calling it my-branch, you should use something like frontend-hook-migration
or fix-documentation-typos
instead.
To create our branch, in our terminal window, let’s change our directory so that we’re working in the repository’s directory. Be sure to use the actual name of the repository (such as cloud_haiku
) to switch into that directory.
- cd repositório
Now, we’ll create our new branch with the git branch
command. Make sure you name it descriptively so that others working on the project understand what you are working on.
- git branch nova-branch
Now that our new branch is created, we can switch to it to make sure that we’re working on that branch by using the git checkout
command:
- git checkout nova-branch
Once you enter the git checkout
command, you will receive the following output:
OutputSwitched to branch nova-branch
Alternatively, you can condense the two commands above, creating and switching to the new branch, with the following command and the -b
flag:
- git checkout -b nova-branch
If you want to switch back to master, you’ll use the checkout
command with the name of the master branch:
- git checkout master
The checkout
command will allow you to switch back and forth between multiple branches, so you can work on multiple features at once.
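The back-and-forth can be tried out safely in a throwaway repository. In this sketch the /tmp path and the branch names are only examples:

```shell
# Create a scratch repository to experiment in
rm -rf /tmp/branch-demo && mkdir /tmp/branch-demo && cd /tmp/branch-demo
git init -q
git symbolic-ref HEAD refs/heads/master       # ensure the default branch is named master
git config user.email "you@example.com" && git config user.name "Your Name"
git commit -q --allow-empty -m "initial commit"

# Create a new branch and switch to it
git branch nova-branch
git checkout -q nova-branch
git rev-parse --abbrev-ref HEAD               # prints: nova-branch

# Switch back to master without losing the other branch
git checkout -q master
git rev-parse --abbrev-ref HEAD               # prints: master
```

Each checkout changes which branch new commits will land on, while both branches keep their own history.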
At this point, you can now modify existing files or add new files to the project on your own branch.
Once you’ve modified existing files or added new files to the project, you can add them to your local repository, which we can do with the git add
command. Let’s add the -A
flag to add all the changes we have made:
- git add -A
Next, we’ll want to record the changes that we made to the repository with the git commit
command.
The commit message is an important aspect of your code contribution; it helps the other contributors fully understand the change you have made, why you made it, and how significant it is. Additionally, commit messages provide a historical record of the changes for the project at large, helping future contributors along the way.
If we have a very short message, we can record it with the -m
flag and the message in quotes:
- git commit -m "Fixed documentation typos"
But, unless it’s a very minor change, we’ll most likely want to include a lengthier commit message so that our collaborators are fully up to speed with our contribution. To record this larger message, we’ll run the git commit
command, which will open the default text editor:
- git commit
If you would like to configure your default text editor, you can do so with the git config
command, and set nano as the default editor, for example:
- git config --global core.editor "nano"
Or vim:
- git config --global core.editor "vim"
Once you run the git commit
command, depending on the default text editor you’re using, your terminal window should display a ready-to-edit document that will look similar to this:
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch nova-branch
# Your branch is up-to-date with 'origin/nova-branch'.
#
# Changes to be committed:
# modified: novo-recurso.py
#
Below the introductory comments, you should add your commit message to the text file.
To write a useful commit message, you should include a summary on the first line that’s around 50 characters long. Below that, broken up into digestible sections, you should include a description that states the reason you made this change, how the code works, and additional information that will contextualize and clarify the code for others who review the work when merging it. Try to be as helpful and proactive as possible to ensure that those maintaining the project are able to fully understand your contribution.
Once you have saved and exited the commit message text file, you can verify what Git will be committing with the following command:
- git status
Depending on the changes you have made, you will receive output that looks something like this:
OutputOn branch nova-branch
Your branch is ahead of 'origin/nova-branch' by 1 commit.
(use "git push" to publish your local commits)
nothing to commit, working directory clean
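The whole staging-and-committing cycle can be sketched end to end in a scratch repository. The /tmp path, filename, and message below are only examples:

```shell
# Scratch repository for the demo
rm -rf /tmp/commit-demo && mkdir /tmp/commit-demo && cd /tmp/commit-demo
git init -q
git config user.email "you@example.com" && git config user.name "Your Name"

# Make a change, stage everything, and commit with a short message
echo "print('hello')" > novo-recurso.py
git add -A
git commit -q -m "Add novo-recurso.py example script"

# After the commit, the working directory is clean again
git status --short        # prints nothing: working directory clean
git log --oneline         # shows the one commit just recorded
```

At this point a git status in the scratch repository reports nothing to commit, matching the output above.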
At this point you can use the git push
command to push the changes to the current branch of your forked repository:
- git push --set-upstream origin nova-branch
The command will provide you with output to let you know of the progress, and it will be similar to the following:
OutputCounting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 336 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://github.com/seu-nome-de-usuário/repositório.git
   a1f29a6..79c0e80  nova-branch -> nova-branch
Branch nova-branch set up to track remote branch nova-branch from origin.
You can now navigate to the repository you forked on your GitHub web page and toggle to the branch you just pushed to see the changes you have made right in the browser.
At this point, it’s possible to make a pull request to the original repository, but if you have not already done so, you’ll want to make sure that your local repository is up to date with the upstream repository.
While you are working on a project alongside other contributors, it’s important for you to keep your local repository up to date with the project, as you don’t want to make a pull request for code that will cause conflicts. To keep your local copy of the code base updated, you’ll need to sync changes.
We’ll first go over configuring a remote for the fork, then syncing the fork.
Remote repositories make it possible for you to collaborate with others on a Git project. Each remote repository is a version of the project that is hosted on the Internet or on a network you have access to. Each remote repository should be accessible to you as either read-only or read-write, depending on your user privileges.
In order to sync the changes you make in a fork with the original repository you’re working with, you need to configure a remote that references the upstream repository. You should set up the remote to the upstream repository only once.
First, let’s check which remote servers you have configured. The git remote
command will list whatever remote repositories you have already specified, so if you cloned your repository as we did above, you’ll see at least the origin repository, which is the default name given by Git for the cloned directory.
From the repository’s directory in our terminal window, let’s use the git remote
command along with the -v
flag to display the URLs that Git has stored along with the short names of the relevant remote repositories (as in “origin”):
- git remote -v
Since we cloned a repository, our output should be similar to this:
Output
origin https://github.com/seu-nome-de-usuário/repositório-forked.git (fetch)
origin https://github.com/seu-nome-de-usuário/repositório-forked.git (push)
If you have previously set up more than one remote repository, the git remote -v
command will provide a list of all of them.
Next, we’ll specify a new remote upstream repository for us to sync with the fork. This will be the original repository that we forked from. We’ll do this with the git remote add
command.
- git remote add upstream https://github.com/nome-de-usuário-do-proprietário-original/repositório-original.git
In this example, upstream
is the short name we have supplied for the remote repository, since in terms of Git, “upstream” refers to the repository that we cloned from. If we wanted to add a remote pointer to the repository of a collaborator, we could provide that collaborator’s username or a shortened nickname as the short name.
We can verify that our remote pointer to the upstream repository was properly added by using the git remote -v
command again from the repository directory:
- git remote -v
Outputorigin https://github.com/seu-nome-de-usuário/repositório-forked.git (fetch)
origin https://github.com/seu-nome-de-usuário/repositório-forked.git (push)
upstream https://github.com/nome-de-usuário-do-proprietário-original/repositório-original.git (fetch)
upstream https://github.com/nome-de-usuário-do-proprietário-original/repositório-original.git (push)
Now you can refer to upstream
on the command line instead of writing out the entire URL, and you are ready to sync your fork with the original repository.
Once we have configured a remote that references the upstream, original repository on GitHub, we are ready to sync our fork of the repository to keep it up to date.
To sync our fork, from the directory of our local repository in a terminal window, we’ll use the git fetch
command to fetch the branches along with their respective commits from the upstream repository. Since we used the short name “upstream” to refer to the upstream repository, we’ll pass that to the command:
- git fetch upstream
Depending on how many changes have been made since we forked the repository, your output may be different, and it may include a few lines on counting, compressing, and unpacking objects. Your output will end similarly to the following lines, but may vary depending on how many branches are part of the project:
OutputFrom https://github.com/nome-de-usuário-do-proprietário-original/repositório-original
 * [new branch]      master     -> upstream/master
Now, the commits to the master branch will be stored in a local branch called upstream/master
.
Let’s switch to the local master branch of our repository:
- git checkout master
OutputSwitched to branch 'master'
Now we’ll merge any changes that were made in the original repository’s master branch, which we’ll access through our local upstream/master branch, into our local master branch:
- git merge upstream/master
The output here will vary, but it will begin with Updating
if changes have been made, or Already up-to-date
if no changes have been made since you forked the repository.
Your fork’s master branch is now in sync with the upstream repository, and any local changes you made were not lost.
Depending on your own workflow and the amount of time you spend making changes, you can sync your fork with the upstream code of the original repository as many times as makes sense for you. However, you should certainly sync your fork right before making a pull request to make sure you won’t contribute conflicting code.
At this point, you’re ready to make a pull request to the original repository.
You should navigate to your forked repository and press the “New pull request” button on the left-hand side of the page.
You can modify the branch on the next screen. On either side, you can select the appropriate repository from the drop-down menu and the appropriate branch.
Once you have chosen, for example, the master branch of the original repository on the left-hand side, and the nova-branch of your fork on the right-hand side, you should see a screen like this:
GitHub will alert you that you’re able to merge the two branches because there is no competing code. You should add a title and a comment, then press the “Create pull request” button.
At this point, the maintainers of the original repository will decide whether or not to accept your pull request. They may ask you to edit or revise your code before accepting the pull request.
At this point, you have successfully sent a pull request to an open-source software repository. Following this, you should make sure to keep your code updated and rebased while you wait for it to be reviewed. The project maintainers may ask you to rework your code, so you should be prepared to do so.
Contributing to open-source projects, and becoming an active open-source developer, can be a rewarding experience. Making regular contributions to software you frequently use allows you to make sure that the software is as valuable to other end users as it can be.
If you’re interested in learning more about Git and collaborating on open source, you can read our tutorial series entitled An Introduction to Open Source. If you already know Git and would like a quick reference guide, refer to “How To Use Git: A Reference Guide.”
Por Lisa Tagliaferri
Teams of developers and open-source software maintainers typically manage their projects through Git, a distributed version control system that supports collaboration.
This cheat sheet-style guide provides a quick reference to commands that are useful for working and collaborating in a Git repository. To install and configure Git, be sure to read “How To Contribute to Open Source: Getting Started with Git.”
How to use this guide:
This guide is in cheat sheet format with self-contained command-line snippets.
Jump to any section that is relevant to the task you are trying to complete.
When you see highlighted text in this guide’s commands, keep in mind that this text should refer to the commits and files in your own repository.
Check your Git version with the following command, which will also confirm that Git is installed.
- git --version
You can initialize your current working directory as a Git repository with init
.
- git init
To copy an existing Git repository hosted remotely, you’ll use git clone
with the repo’s URL or server location (in the latter case you will use ssh
).
- git clone https://www.github.com/username/nome-do-repositório
Show your current Git directory’s remote repository.
- git remote
For more verbose output, use the -v
flag.
- git remote -v
Add the Git upstream, which can be a URL or can be hosted on a server (in the latter case, connect with ssh
).
- git remote add upstream https://www.github.com/username/nome-do-repositório
When you’ve modified a file and have marked it to go in your next commit, it is considered to be a staged file.
Check the status of your Git repository, including files added that are not staged, and files that are staged.
- git status
To stage modified files, use the add
command, which you can run multiple times before making a commit. If you make subsequent changes that you want included in the next commit, you must run add
again.
You can specify the exact file with add
.
- git add meu_script.py
With .
you can add all files in the current directory, including files that begin with a .
.
- git add .
You can remove a file from staging while retaining the changes in your working directory with reset
.
- git reset meu_script.py
Once you have staged your updates, you are ready to commit them, which will record the changes you have made to the repository.
To commit staged files, you will run the commit command with a meaningful commit message so that you can track commits.
- git commit -m "Mensagem de commit"
You can condense staging all tracked files and committing them into one step.
- git commit -am "Mensagem de commit"
If you need to modify your commit message, you can do so with the --amend flag.
- git commit --amend -m "Nova Mensagem de commit"
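The stage-and-commit cycle above can be exercised end to end in a throwaway repository. This is a minimal sketch that assumes git is installed; the file name, identity, and messages are illustrative only.

```shell
set -e
repo=$(mktemp -d)                 # throwaway directory for the demo repository
cd "$repo"
git init -q
git config user.email "sammy@example.com"   # an identity is required to commit
git config user.name "Sammy"

echo 'print("hello")' > meu_script.py
git add meu_script.py                       # stage the new file
git commit -q -m "Mensagem de commit"

echo '# tweak' >> meu_script.py
git commit -q -am "Second message"          # stage tracked changes and commit in one step
git commit -q --amend -m "Nova Mensagem de commit"  # rewrite only the last message

count=$(git rev-list --count HEAD)          # amend rewrote, not added: still 2 commits
echo "$count"
```

Note that --amend replaces the previous commit rather than adding a new one, which is why the history still contains two commits at the end.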
A branch is a movable pointer to one of the commits in the repository. It lets you isolate work and manage feature development and integrations. You can learn more about branches by reading the Git documentation.
List all current branches with the branch command. An asterisk (*) will appear next to your currently active branch.
- git branch
Create a new branch. You will remain on your active branch until you switch to the new one.
- git branch nova-branch
Switch to any existing branch and check it out into your current working directory.
- git checkout outra-branch
You can consolidate the creation and checkout of a new branch by using the -b flag.
- git checkout -b nova-branch
Rename your branch.
- git branch -m nome-da-branch-atual novo-nome-da-branch
Merge the specified branch’s history into the one you are currently working in.
- git merge nome-da-branch
Abort the merge, in case there are conflicts.
- git merge --abort
You can also select a particular commit to merge with cherry-pick and the string that references the specific commit.
- git cherry-pick f7649d0
When you have merged a branch and no longer need it, you can delete it.
- git branch -d nome-da-branch
If you have not merged a branch to master, but are sure you want to delete it, you can force-delete the branch.
- git branch -D nome-da-branch
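The branch commands above compose into a typical create–work–merge–delete cycle. The following sketch runs it in a disposable repository; it assumes git is installed, and the branch and file names are illustrative.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "sammy@example.com"
git config user.name "Sammy"
echo "base" > file.txt
git add file.txt && git commit -q -m "base"

main=$(git symbolic-ref --short HEAD)   # default branch name varies (master vs. main)
git checkout -q -b nova-branch          # create and switch in one step
echo "feature" >> file.txt
git commit -q -am "feature work"

git checkout -q "$main"
git merge -q nova-branch                # fast-forwards main to include the branch history
git branch -d nova-branch               # safe (-d) because the branch is fully merged
merged=$(git log -1 --format=%s)
echo "$merged"
```

Because the branch was merged first, the lowercase -d deletion succeeds; -D would only be needed to discard unmerged work.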
To download changes from another repository, such as the remote upstream, you will use fetch.
- git fetch upstream
Merge the fetched commits.
- git merge upstream/master
Push or transmit your local branch commits to the remote repository branch.
- git push origin master
Fetch and merge any commits from the tracking remote branch.
- git pull
Show the commit history for the currently active branch.
- git log
Show the commits that changed a particular file. This follows the file regardless of file renaming.
- git log --follow meu_script.py
Show the commits that are on one branch and not on the other. This will show commits on a-branch that are not on b-branch.
- git log a-branch..b-branch
Look at the reference logs (reflog) to see when the tips of branches and other references were last updated within the repository.
- git reflog
Show any object in Git via its commit string or hash in a more human-readable format.
- git show de754f5
The git diff command shows changes between commits, branches, and more. You can read more fully about it through the Git documentation.
Compare modified files that are in the staging area.
- git diff --staged
Display the diff of what is in a-branch but is not in b-branch.
- git diff a-branch..b-branch
Show the diff between two specific commits.
- git diff 61ce3e6..e221d9c
Sometimes you’ll find that you made changes to some code, but before you finish you have to begin working on something else. You’re not quite ready to commit the changes you have made so far, but you don’t want to lose your work. The git stash command will allow you to save your local modifications and revert back to the working directory that is in line with the most recent HEAD commit.
Stash your current work.
- git stash
See what you currently have stashed.
- git stash list
Your stashes will be named stash@{0}, stash@{1}, and so on.
Show information about a particular stash.
- git stash show stash@{0}
To bring the files in a current stash back while keeping the stash saved, use apply.
- git stash apply stash@{0}
If you want to bring the files in a stash back and no longer need the stash, use pop.
- git stash pop stash@{0}
If you no longer need the files saved in a particular stash, you can drop the stash with drop.
- git stash drop stash@{0}
If you have many stashes saved and no longer need any of them, you can use clear to remove them.
- git stash clear
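The stash-and-restore round trip described above can be sketched in a disposable repository. This assumes git is installed; file contents and names are illustrative.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "sammy@example.com"   # stash creates commits internally, so identity is needed
git config user.name "Sammy"
echo "v1" > notes.txt
git add notes.txt && git commit -q -m "v1"

echo "work in progress" >> notes.txt
git stash                     # working tree reverts to the last HEAD commit
clean=$(cat notes.txt)        # back to "v1" while the change sits in the stash

git stash pop                 # restore the change and drop it from the stash list
stashes=$(git stash list | wc -l)
echo "$stashes"
```

After pop, the in-progress line is back in notes.txt and the stash list is empty again, whereas apply would have restored the files while keeping the stash entry.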
If you want to keep files in your local Git directory, but do not want to commit them to the project, you can add these files to your .gitignore file so that they do not cause conflicts.
Use a text editor such as nano to add files to the .gitignore file.
- nano .gitignore
For examples of .gitignore files, you can look at GitHub’s .gitignore template repository.
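To see the effect of .gitignore in practice, the sketch below creates a repository with two illustrative patterns and checks that matching paths no longer show up as untracked. The patterns and file names are examples only; git must be installed.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
printf '%s\n' "*.log" "node_modules/" > .gitignore   # illustrative ignore patterns
touch debug.log tracked.txt
mkdir node_modules && touch node_modules/dep.js

status=$(git status --porcelain)   # ignored paths never appear in this listing
echo "$status"
```

Only .gitignore itself and tracked.txt are reported as untracked; debug.log and everything under node_modules/ are silently skipped.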
A rebase allows us to move branches around by changing the commit that they are based on. With rebasing, you can rewrite or reword commits.
You can start a rebase by calling the number of commits that you have made that you want to rebase (5 in the case below).
- git rebase -i HEAD~5
Alternatively, you can rebase based on a particular commit string or hash.
- git rebase -i 074a4e5
Once you have rewritten or reworded the commits, you can complete the rebase of your branch on top of the latest version of the project’s upstream code.
- git rebase upstream/master
To learn more about rebasing and updating, you can read How To Rebase and Update a Pull Request, which is also applicable to any type of commit.
Sometimes, including after a rebase, you need to reset your working tree. You can reset to a particular commit, and delete all changes, with the following command.
- git reset --hard 1fc6665
To force push your last known non-conflicting commit to the origin repository, you’ll need to use --force.
Warning: Force pushing to master is frowned upon unless there is a really important reason for doing it. Use sparingly when working on your own repositories, and avoid it when you’re collaborating.
- git push --force origin master
To remove local untracked files and subdirectories from the Git directory for a clean working branch, you can use git clean.
- git clean -f -d
If you need to modify your local repository so that it looks like the current upstream master (that is, when there are too many conflicts), you can perform a hard reset.
Note: Performing this command will make your local repository look exactly like the upstream. Any commits you have made but that were not pulled into the upstream will be destroyed.
- git reset --hard upstream/master
This guide covers some of the more common Git commands you may use when managing repositories and collaborating on software.
You can learn more about open-source software and collaboration in our Introduction to Open Source tutorial series:
There are many more commands and variations that you may find useful as part of your work with Git. To learn more about all of your available options, you can run the following command to receive helpful information:
- git --help
You can also read more about Git and look at Git’s documentation on the official Git website.
By Lisa Tagliaferri
GitLab Community Edition is a self-hosted Git repository provider with additional features to help with project management and software development. One of the most valuable features GitLab offers is the built-in continuous integration and delivery tool called GitLab CI.
In this guide, we will demonstrate how to set up GitLab CI to monitor your repositories for changes and run automated tests to validate new code. We will start with a running GitLab installation, into which we will copy an example repository for a basic Node.js application. After configuring our CI process, when a new commit is pushed to the repository GitLab will use a CI runner to execute the test suite against the code in an isolated Docker container.
Before we begin, you will need to set up an initial environment. We need a secure GitLab server configured to store our code and manage our CI/CD processes. Additionally, we need somewhere to run the automated tests. This can be the same server that GitLab is installed on or a separate host. The sections below cover the requirements in more detail.
To store our source code and configure our CI/CD tasks, we need a GitLab instance installed on an Ubuntu 16.04 server. GitLab currently recommends a server with at least 2 CPU cores and 4GB of RAM. To protect your code from being exposed or tampered with, the GitLab instance will be protected with SSL using Let’s Encrypt. Your server needs to have a domain name associated with it to complete this step.
You can satisfy these requirements using the following tutorials:
Initial Server Setup with Ubuntu 16.04: Create a user with sudo privileges and set up a basic firewall.
How To Install and Configure GitLab on Ubuntu 16.04: Install GitLab on the server and protect it with a Let’s Encrypt TLS/SSL certificate.
We will be demonstrating how to share CI/CD runners (the components that run the automated tests). If you wish to share CI runners between projects, we strongly recommend that you restrict or disable public sign-ups. If you did not modify your settings during installation, go back and follow the optional step from the GitLab installation article on how to restrict or disable sign-ups to prevent abuse by outside parties.
GitLab CI Runners are the servers that check out the code and run automated tests to validate new changes. To isolate the testing environment, we will be running all of our automated tests within Docker containers. To do this, we need to install Docker on the server or servers that will run the tests.
This step can be completed on the GitLab server or on a different Ubuntu 16.04 server to provide additional isolation and avoid resource contention. The following tutorials will install Docker on the host you wish to use to run your tests:
Initial Server Setup with Ubuntu 16.04: Create a user with sudo privileges and set up a basic firewall (you do not need to complete this again if you are setting up the CI runner on the GitLab server).
How To Install and Use Docker on Ubuntu 16.04: Follow steps 1 and 2 to install Docker on the server.
When you are ready to begin, continue with this guide.
To begin, we will create a new project in GitLab containing the example Node.js application. We will import the original repository directly from GitHub so that we do not have to upload it manually.
Log in to GitLab, click the plus icon in the upper-right corner, and select New project to add a new project:
On the new project page, click the Import project tab:
Next, click the Repo by URL button. Although there is a GitHub import option, it requires a Personal access token and is used to import the repository and additional information. We are only interested in the code and the Git history, so importing by URL is easier.
In the Git repository URL field, enter the following GitHub repository URL:
https://github.com/do-community/hello_hapi.git
It should look like this:
Since this is a demonstration, it is probably best to keep the repository marked Private. When you are finished, click Create project.
The new project will be created based on the repository imported from GitHub.
GitLab CI looks for a file called .gitlab-ci.yml within each repository to determine how it should test the code. The repository we imported already has a .gitlab-ci.yml file configured for the project. You can learn more about the format by reading the .gitlab-ci.yml reference documentation.
Click on the .gitlab-ci.yml file in the GitLab interface for the project we just created. The CI configuration should look like this:
image: node:latest
stages:
- build
- test
cache:
paths:
- node_modules/
install_dependencies:
stage: build
script:
- npm install
artifacts:
paths:
- node_modules/
test_with_lab:
stage: test
script: npm test
The file uses the GitLab CI YAML configuration syntax to define the actions that should be taken, the order they should execute in, under what conditions they should be run, and the resources required to complete each task. When writing your own GitLab CI files, you can visit the lint checker at /ci/lint in your GitLab instance to validate that your file is formatted correctly.
The configuration file starts off by declaring a Docker image that should be used to run the test suite. Since Hapi is a Node.js framework, we are using the latest Node.js image:
image: node:latest
Next, we explicitly define the different continuous integration stages that will run:
stages:
- build
- test
The names you choose here are arbitrary, but the ordering determines the order of execution for the steps that will follow. Stages are tags that you can apply to individual jobs. GitLab will run jobs of the same stage in parallel and will wait to start the next stage until all jobs at the current stage are complete. If no stages are defined, GitLab will use three stages called build, test, and deploy and assign all jobs to the test stage by default.
After defining the stages, the configuration includes a cache definition:
cache:
paths:
- node_modules/
This specifies files or directories that can be cached (saved for later use) between runs or stages. This can help decrease the amount of time it takes to run jobs that rely on resources that might not change between runs. Here, we are caching the node_modules directory, which is where npm installs the dependencies it downloads.
Our first job is called install_dependencies:
install_dependencies:
stage: build
script:
- npm install
artifacts:
paths:
- node_modules/
Jobs can be named anything, but since the names will be used in the GitLab UI, descriptive names are helpful. Usually, npm install could be combined with the next testing stages, but to better demonstrate the interaction between stages, we are extracting this step to run in its own stage.
We mark the stage explicitly as “build” with the stage directive. Next, we specify the actual commands to run using the script directive. You can include multiple commands by adding additional lines within the script section.
The artifacts subsection is used to specify file or directory paths to save and pass between stages. Because the npm install command installs the dependencies for the project, our next step will need access to the downloaded files. Declaring the node_modules path ensures that the next stage will have access to the files. These will also be available to view or download in the GitLab UI after the test, so this is useful for build artifacts such as binaries as well. If you want to save everything produced during the stage, replace the entire path section with untracked: true.
Finally, the second job called test_with_lab declares the command that will actually run the test suite:
test_with_lab:
stage: test
script: npm test
We place this in the test stage. Since this is the last stage, it has access to the artifacts produced by the build stage, which are the project dependencies in our case. Here, the script section demonstrates the single-line YAML syntax that can be used when there is only a single item. We could have used this same syntax in the previous job as well, since only one command was specified.
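To illustrate how these pieces compose, here is a hypothetical extension of the same pipeline with a third stage. The deploy_staging job name, the docker tag, and the commands are assumptions for illustration only, not part of the example repository:

```yaml
image: node:latest

stages:
  - build
  - test
  - deploy

deploy_staging:
  stage: deploy
  tags:
    - docker            # only runners registered with this tag pick up the job
  script:
    - npm run build
    - echo "Deploying build artifacts..."
  only:
    - master            # run this job only for commits on the master branch
```

Because deploy is listed last in stages, this job would wait for every build and test job to finish before starting.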
Now that you have a basic idea of how the .gitlab-ci.yml file defines CI/CD tasks, we can define one or more runners capable of executing the test plan.
Since our repository includes a .gitlab-ci.yml file, any new commits will trigger a new CI run. If no runners are available, the CI run will be set to “pending”. Before we define a runner, let’s trigger a CI run to see what a job looks like in the pending state. Once a runner is available, it will immediately pick up the pending run.
Back in the hello_hapi GitLab project repository view, click on the plus sign next to the branch and project name and select New file from the menu:
On the next page, enter dummy_file in the File name field and enter some text into the main editing window:
Click Commit changes at the bottom when you are finished.
Now, return to the main project page. A small paused icon will be attached to the most recent commit. If you hover over the icon, it will display “Commit: pending”:
This means that the tests that validate the code changes have not yet been run.
To get more information, go to the top of the page and click Pipelines. You will be taken to the pipeline overview page, where you can see that the CI run is marked pending and labeled “stuck”:
Note: Along the right-hand side is a button for the CI Lint tool. This is where you can check the syntax of any gitlab-ci.yml file you write.
From here, you can click the pending status to get more details about the run. This view shows the different stages of our run, as well as the individual jobs associated with each stage:
Finally, click the install_dependencies job. This will give you specific details about what is delaying the run:
Here, the message indicates that the job is stuck because of the lack of runners. This is expected since we have not configured any yet. Once a runner is available, this same interface can be used to see the output. This is also the place where you can download artifacts produced during the build.
Now that we know what a pending job looks like, we can assign a CI runner to our project to pick up the pending job.
Now we are ready to set up a GitLab CI runner. To do this, we need to install the GitLab CI runner package on the system and start the GitLab runner service. The service can run multiple runner instances for different projects.
As mentioned in the prerequisites, you can complete these steps on the same server that hosts your GitLab instance or on a different server if you want to be sure to avoid resource contention. Remember that whichever host you choose, you need Docker installed for the configuration we will be using.
The process of installing the GitLab CI runner service is similar to the process used to install GitLab itself. We will download a script to add a GitLab repository to our apt source list. After running the script, we will download the runner package. We can then configure it to serve our GitLab instance.
Start by downloading the latest version of the GitLab CI runner repository configuration script to the /tmp directory (this is a different repository than the one used by the GitLab server):
- curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh -o /tmp/gl-runner.deb.sh
Feel free to examine the downloaded script to ensure that you are comfortable with the actions it will take. You can also find a hosted version of the script here:
- less /tmp/gl-runner.deb.sh
Once you are satisfied with the script’s safety, run the installer:
- sudo bash /tmp/gl-runner.deb.sh
The script will set up your server to use the GitLab-maintained repositories. This lets you manage GitLab runner packages with the same package management tools you use for your other system packages. Once this is complete, you can proceed with the installation using apt-get:
- sudo apt-get install gitlab-runner
This will install the GitLab CI runner package on the system and start the GitLab runner service.
Next, we need to register a GitLab CI runner so that it can begin accepting work.
To do this, we need a GitLab runner token so that the runner can authenticate with the GitLab server. The type of token we need depends on how we want to use this runner.
A project-specific runner is useful if you have specific requirements for the runner. For example, if your gitlab-ci.yml file defines deployment tasks that require credentials, a specific runner may be required to authenticate correctly into the deployment environment. If your project has resource-intensive steps in the CI process, this might also be a good idea. A project-specific runner will not accept jobs from other projects.
A shared runner, on the other hand, is a general-purpose runner that can be used by multiple projects. Runners will take jobs from projects according to an algorithm that accounts for the number of jobs currently being run for each project. This type of runner is more flexible. You will need to log in to GitLab with an administrative account to set up shared runners.
We will demonstrate how to obtain runner tokens for both of these runner types below. Choose the method that suits you best.
If you want the runner to be tied to a specific project, begin by navigating to the project’s page in the GitLab interface.
From here, click the Settings item in the left-hand menu. Afterwards, click the CI/CD item in the submenu:
On this page, you will see a Runners settings section. Click the Expand button to see more details. In the detail view, the left-hand side will explain how to register a project-specific runner. Copy the registration token displayed in step 4 of the instructions:
If you want to disable any active shared runners for this project, you can do so by clicking the Disable shared Runners button on the right-hand side. This is optional.
When you are ready, skip ahead to learn how to register your runner using the information you collected from this page.
To find the information required to register a shared runner, you will need to be logged in with an administrative account.
Begin by clicking the wrench icon in the top navigation bar to access the admin area. In the Overview section of the left-hand menu, click Runners to access the shared runner configuration page.
Copy the registration token displayed at the top of the page:
We will use this token to register a GitLab CI runner for the project.
Now that you have a token, go back to the server where your GitLab CI runner service is installed.
To register a new runner, type the following command:
- sudo gitlab-runner register
You will be asked a series of questions to configure the runner:
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/)
Enter your GitLab server’s domain name, using https:// to specify SSL. You can optionally append /ci to the end of your domain, but recent versions will redirect automatically.
Please enter the gitlab-ci token for this runner
Enter the token you copied in the last section.
Please enter the gitlab-ci description for this runner
Enter a name for this particular runner. This will show up in the runner service’s list of runners, on the command line, and in the GitLab interface.
Please enter the gitlab-ci tags for this runner (comma separated)
These are tags that you can assign to the runner. GitLab jobs can express requirements in terms of these tags to make sure they are run on a host with the correct dependencies.
You can leave this blank in this case.
Whether to lock Runner to current project [true/false]
Tie the runner to the specific project. It will not be able to be used by other projects.
Select “false” here.
Please enter the executor
Enter the method used by the runner to complete jobs.
Choose “docker” here.
Please enter the default Docker image (e.g. ruby:2.1)
Enter the default image used to run jobs when the .gitlab-ci.yml file does not include an image specification. It is best to specify a general image here and define more specific images in your .gitlab-ci.yml file as we have done.
We will enter “alpine:latest” here as a small, secure default.
After answering the questions, a new runner will be created capable of running your project’s CI/CD jobs.
You can see the runners that the GitLab CI runner service currently has available by typing:
- sudo gitlab-runner list
OutputListing configured runners ConfigFile=/etc/gitlab-runner/config.toml
example-runner Executor=docker Token=e746250e282d197baa83c67eda2c0b URL=https://example.com
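Behind the scenes, registration writes the runner's settings to the config file shown in the output above (/etc/gitlab-runner/config.toml). The following is a sketch of what the resulting entry might look like; the name, URL, and token values are illustrative placeholders, not values you should copy:

```toml
concurrent = 1

[[runners]]
  name = "example-runner"
  url = "https://example.com/"
  token = "e746250e282d197baa83c67eda2c0b"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
```

Editing this file (and restarting the service) is an alternative way to adjust a runner's default image or tags after registration.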
Now that we have a runner available, we can return to the project in GitLab.
Back in your web browser, return to your project in GitLab. Depending on how long it has been since you registered your runner, the runner may be currently running:
Or it may have already completed:
Regardless of the state, click on the running or passed icon (or failed if you ran into an issue) to view the current state of the CI run. You can get a similar view by clicking the top Pipelines menu.
You will be taken to the pipeline overview page, where you can see the status of the GitLab CI run:
Under the Stages header, there will be a circle indicating the status of each of the stages in the run. If you click the stage, you can see the individual jobs associated with the stage:
Click the install_dependencies job within the build stage. This will take you to the job overview page:
Now, instead of displaying a message about no runners being available, the output of the job is displayed. In our case, this means that you can see the results of npm installing each of the packages.
Along the right-hand side, you can see some other items as well. You can view other jobs by changing the stage and clicking the runs below. You can also view or download any artifacts produced by the run.
In this guide, we added a demonstration project to a GitLab instance to showcase the continuous integration and deployment capabilities of GitLab CI. We discussed how to define a pipeline in gitlab-ci.yml files to build and test your applications and how to assign jobs to stages to define their relationship to one another. We then set up a GitLab CI runner to pick up CI tasks for our project and demonstrated how to find information about individual GitLab CI runs.
By Justin Ellingwood
GitLab CE, or Community Edition, is an open-source application primarily used to host Git repositories, with additional development-related features like issue tracking. It is designed to be hosted using your own infrastructure, and provides flexibility in deploying as an internal repository store for your development team, publicly as a way to interface with users, or even open as a means for contributors to host their own projects.
The GitLab project makes it relatively straightforward to set up a GitLab instance on your own hardware with an easy installation mechanism. In this guide, we will cover how to install and configure GitLab on an Ubuntu 16.04 server.
This tutorial will assume that you have access to a fresh Ubuntu 16.04 server. The published GitLab hardware requirements recommend using a server with:
Although you may be able to get by with substituting some swap space for RAM, it is not recommended. For this guide, we will assume that you have the above resources as a minimum.
In order to get started, you will need a non-root user with sudo access configured on your server. It is also a good idea to set up a basic firewall to provide an extra layer of security. You can follow the steps in our Initial Server Setup with Ubuntu 16.04 guide to get this set up.
When you have satisfied the prerequisites above, continue on to start the installation procedure.
Before we can install GitLab itself, it is important to install some of the software that it leverages during installation and on an ongoing basis. Fortunately, all of the required software can be easily installed from Ubuntu’s default package repositories.
Since this is our first time using apt during this session, we can refresh the local package index and then install the dependencies by typing:
- sudo apt-get update
- sudo apt-get install ca-certificates curl openssh-server postfix
You will likely have some of this software installed already. For the postfix installation, select Internet Site when prompted. On the next screen, enter your server’s domain name or IP address to configure how the system will send mail.
Now that the dependencies are installed, we can install GitLab itself. This is a straightforward process that leverages an installation script to configure your system with the GitLab repositories.
Move into the /tmp directory and then download the installation script:
- cd /tmp
- curl -LO https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh
Feel free to examine the downloaded script to ensure that you are comfortable with the actions it will take. You can find a hosted version of the script here:
- less /tmp/script.deb.sh
Once you are satisfied with the script’s safety, run the installer:
- sudo bash /tmp/script.deb.sh
This script will set up your server to use the GitLab-maintained repositories. This lets you manage GitLab with the same package management tools you use for your other system packages. When it is complete, you can install the actual GitLab application with apt:
- sudo apt-get install gitlab-ce
This will install the necessary components on your system.
Before you can configure GitLab, you will need to ensure that your firewall rules are permissive enough to allow web traffic. If you followed the guide linked in the prerequisites, you will have a ufw firewall enabled.
View the current status of your active firewall by typing:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
Como você pode ver, as regras atuais permitem o tráfego SSH, mas o acesso a outros serviços está restrito. Como o Gitlab é uma aplicação web, devemos permitir acesso HTTP entrante. Se você tiver um nome de domínio associado com o seu servidor GitLab, o GitLab pode também solicitar e ativar um certificado gratuito TLS/SSL a partir do projeto Let’s Encrypt para proteger a instalação. Também queremos permitir o acesso HTTPS nesse caso.
Since the protocol-to-port mapping for HTTP and HTTPS is listed in the /etc/services file, we can allow that traffic in by name. If you didn't already have OpenSSH traffic enabled, you should allow that traffic now too:
- sudo ufw allow http
- sudo ufw allow https
- sudo ufw allow OpenSSH
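As noted above, ufw resolves these service names through /etc/services. If you want to confirm which port numbers the http and https names map to, you can inspect that file directly (a quick sketch; /etc/services is the standard services database on Ubuntu and most Linux systems):

```shell
# Show the port mappings behind the "http" and "https" service names.
grep -E '^(http|https)[[:space:]]' /etc/services
```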
If you check with the ufw status command again, you should see access configured for at least these two services:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
80 ALLOW Anywhere
443 ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
80 (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
The output above indicates that the GitLab web interface will be accessible as soon as we configure the application.
Before you can use the application, you need to update a configuration file and run a reconfiguration command. First, open GitLab's configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Near the top is the external_url configuration line. Update it to match your own domain or IP address. If you have a domain, change http to https so that GitLab will automatically redirect users to the site protected by the Let's Encrypt certificate we will be requesting.
# If your GitLab server does not have a domain name, you will need to use an IP
# address instead of a domain and keep the protocol as `http`.
external_url 'https://your_domain'
Next, if your GitLab server has a domain name, search the file for the letsencrypt['enable'] setting. Uncomment the line and set it to true. This will tell GitLab to request a Let's Encrypt certificate for your GitLab domain and configure the application to serve traffic with it.
Below that, look for the letsencrypt['contact_emails'] setting. This setting defines a list of email addresses that the Let's Encrypt project can use to contact you if there are problems with your domain. It's a good idea to uncomment and fill this out as well, so that you will know of any issues:
letsencrypt['enable'] = true
letsencrypt['contact_emails'] = ['sammy@your_domain.com']
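Before reconfiguring, it can be worth sanity-checking that the settings are present and uncommented. The sketch below greps a sample file standing in for your real /etc/gitlab/gitlab.rb (point the grep at that path on an actual server):

```shell
# Create a sample stand-in for /etc/gitlab/gitlab.rb with the three settings.
cat > /tmp/sample-gitlab.rb <<'EOF'
external_url 'https://your_domain'
letsencrypt['enable'] = true
letsencrypt['contact_emails'] = ['sammy@your_domain.com']
EOF
# Count uncommented lines that start with the settings we care about.
grep -cE "^(external_url|letsencrypt)" /tmp/sample-gitlab.rb
```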
Save and close the file. Now, run the following command to reconfigure GitLab:
- sudo gitlab-ctl reconfigure
This will initialize GitLab using the information it can find about your server. This is a completely automated process, so you will not have to answer any prompts. If you enabled Let's Encrypt integration, a certificate should be configured for your domain.
Now that GitLab is running and access is permitted, we can perform some initial configuration of the application through the web interface.
Visit the domain name of your GitLab server in your web browser:
http://gitlab_domain_or_IP
If you enabled Let's Encrypt and used https in your external_url, you should be redirected to a secure HTTPS connection.
On your first visit, you should see an initial prompt to set a password for the administrative account:
At the initial password prompt, supply and confirm a secure password for the administrative account. Click on the Change your password button when you are finished.
You will be redirected to the conventional GitLab login page:
Here, you can log in with the password you just set. The credentials are:
Username: root
Password: [the password you set]
Enter these values into the fields for existing users and click the Sign in button. You will be signed into the application and taken to a landing page that prompts you to begin adding projects:
You can now make some simple changes to get GitLab set up the way you want.
One of the first things you should do after a fresh installation is get your profile into better shape. GitLab selects some reasonable defaults, but these are not usually appropriate once you start using the software.
To make the necessary modifications, click on the user icon in the upper-right corner of the interface. In the drop-down menu that appears, select Settings:
You will be taken to the Profile section of your settings:
Adjust the Name and Email fields, changing “Administrator” and “admin@example.com” to something more appropriate. The name you select will be displayed to other users, while the email will be used for default avatar detection, notifications, Git actions through the interface, and more.
Click on the Update Profile settings button at the bottom when you are finished:
A confirmation email will be sent to the address you provided. Follow the instructions in the email to confirm your account so that you can begin using GitLab.
Next, click the Account item in the left-hand menu bar:
Here, you can find your private API token or configure two-factor authentication. However, the functionality we are interested in at the moment is the Change username section.
By default, the first administrative account is given the name root. Since this is a known account name, it is more secure to change it to a different name. You will still have administrative privileges; the only thing that will change is the name:
Click on the Update username button to make the change:
The next time you log in to GitLab, remember to use your new username.
In most cases, you will want to use SSH keys with Git to interact with your GitLab projects. To do this, you need to add your public key to your GitLab account.
If you already have an SSH key pair created on your local computer, you can usually view the public key by typing:
- cat ~/.ssh/id_rsa.pub
You should see a large chunk of text, like this:
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy this text and head back to the Profile Settings page in GitLab's web interface.
If, instead, you get a message that looks like this, you do not yet have an SSH key pair configured on your machine:
Outputcat: /home/sammy/.ssh/id_rsa.pub: No such file or directory
If this is the case, you can create an SSH key pair by typing:
- ssh-keygen
Accept the defaults and, optionally, provide a passphrase to secure the key locally:
OutputGenerating public/private rsa key pair.
Enter file in which to save the key (/home/sammy/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sammy/.ssh/id_rsa.
Your public key has been saved in /home/sammy/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I8v5/M5xOicZRZq/XRcSBNxTQV2BZszjlWaIHi5chc0 sammy@gitlab.docsthat.work
The key's randomart image is:
+---[RSA 2048]----+
| ..%o==B|
| *.E =.|
| . ++= B |
| ooo.o . |
| . S .o . .|
| . + .. . o|
| + .o.o ..|
| o .++o . |
| oo=+ |
+----[SHA256]-----+
Once you have this, you can display your public key as above by typing:
- cat ~/.ssh/id_rsa.pub
Outputssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy the block of text that's displayed and head back to your Profile Settings in GitLab's web interface.
Click the SSH keys item in the left-hand menu:
In the provided space, paste the public key you copied from your local machine. Give it a descriptive title, and click the Add key button:
You should now be able to manage your GitLab projects and repositories from your local machine without having to provide your GitLab account credentials.
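As an aside, if you are scripting this setup, ssh-keygen can also run non-interactively. This sketch writes a throwaway ed25519 pair (a modern alternative to the default RSA type used above) into a temporary directory; the comment string is just an example:

```shell
# Non-interactive key generation: -t picks the key type, -N "" sets an empty
# passphrase, -f sets the output path, -C attaches a comment, -q silences output.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -f "$keydir/id_ed25519" -C "sammy@example" -q
cat "$keydir/id_ed25519.pub"
```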
You may have noticed that it is possible for anyone to sign up for an account when you visit your GitLab instance's landing page. This may be what you want if you are hosting a public project. However, more restrictive settings are often desirable.
To begin, make your way to the administrative area by clicking on the wrench icon in the main menu bar at the top of the page:
On the page that follows, you can see an overview of your GitLab instance as a whole. To adjust the settings, click the Settings item at the bottom of the left-hand menu.
You will be taken to the global settings for your GitLab instance. Here, you can adjust a number of settings that affect whether new users can sign up and what their level of access will be.
If you wish to disable sign-ups completely (you can still manually create accounts for new users), scroll down to the Sign-up Restrictions section.
Deselect the Sign-up enabled checkbox:
Scroll down to the bottom and click the Save button:
The sign-up section should now be removed from the GitLab landing page.
If you are using GitLab as part of an organization that provides email addresses associated with a domain, you can restrict sign-ups by domain instead of disabling them completely.
In the Sign-up Restrictions section, first select the Send confirmation email on sign-up box, which will only allow users to log in after they have confirmed their email address.
Next, add your domain or domains to the Whitelisted domains for sign-ups box, one domain per line. You can use the asterisk “*” to specify wildcard domains.
Scroll down to the bottom and click the Save button:
Sign-ups will now be limited to users with email addresses from the domains you allowed.
By default, new users can create up to 10 projects. If you wish to allow new external users visibility and participation, but want to restrict their ability to create new projects, you can do so in the Account and Limit Settings section.
Inside, you can change the Default projects limit to 0 to completely disable project creation by new users.
New users can still be added to projects manually, and will have access to internal or public projects created by other users.
Scroll down to the bottom and click the Save button:
Now, new users will be able to create accounts, but will be unable to create projects.
By default, Let's Encrypt certificates are valid for 90 days. If you enabled Let's Encrypt for your GitLab domain earlier, you will need to make sure your certificates are renewed regularly to avoid service interruptions. GitLab provides the gitlab-ctl renew-le-certs command to request new certificates when your current ones approach expiration.
To automate this process, we can create a cron job to run this command regularly. The command will only renew the certificate when it is close to expiration, so we can safely run it every day.
To begin, create and open a file at /etc/cron.daily/gitlab-le in your text editor:
- sudo nano /etc/cron.daily/gitlab-le
Inside, paste the following script:
#!/bin/bash
set -e
/usr/bin/gitlab-ctl renew-le-certs > /dev/null
Save and close the file when you are finished.
Mark the file as executable by typing:
- sudo chmod +x /etc/cron.daily/gitlab-le
Now, GitLab should automatically check every day whether its Let's Encrypt certificate needs to be renewed. If so, the command will renew the certificate automatically.
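Daily cron jobs are executed by run-parts, which silently skips files that are not executable, which is why the chmod step above matters. You can see this behavior in a self-contained way with a throwaway directory standing in for /etc/cron.daily:

```shell
# run-parts drives /etc/cron.daily; --test lists the scripts it *would* run
# without actually running them.
crondir=$(mktemp -d)
printf '#!/bin/bash\nset -e\n/usr/bin/gitlab-ctl renew-le-certs > /dev/null\n' > "$crondir/gitlab-le"
run-parts --test "$crondir"      # prints nothing: the file is not yet executable
chmod +x "$crondir/gitlab-le"
run-parts --test "$crondir"      # now lists the gitlab-le script
```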
You should now have a working GitLab instance hosted on your own server. You can begin to import or create new projects and configure the appropriate level of access for your team. GitLab is regularly adding features and making updates to the platform, so be sure to check out the project's home page to stay up-to-date on any improvements or important notices.
By Justin Ellingwood
Containerization is quickly becoming the most accepted method of packaging and deploying applications in cloud environments. The standardization it provides, along with its resource efficiency (when compared to full virtual machines) and flexibility, make it a great enabler of the modern DevOps mindset. Many interesting cloud-native deployment, orchestration, and monitoring strategies become possible when your applications and microservices are fully containerized.
Docker containers are by far the most common container type today. Though public Docker image repositories like Docker Hub are full of containerized open-source software images that you can docker pull today, for private code you will need to either pay a service to build and store your images, or run your own software to do so.
GitLab Community Edition is a self-hosted software suite that provides Git repository hosting, project tracking, CI/CD services, and a Docker image registry, among other features. In this tutorial we will use GitLab's continuous integration service to build Docker images from an example Node.js application. These images will then be tested and uploaded to our own private Docker registry.
Before we begin, we need to set up a secure GitLab server and a GitLab CI runner to execute continuous integration tasks. The sections below provide links and further details.
To store our source code, run CI/CD tasks, and host a Docker registry, we need a GitLab instance installed on an Ubuntu 16.04 server. GitLab currently recommends a server with at least 2 CPU cores and 4GB of RAM. Additionally, we will secure the server with SSL certificates from Let's Encrypt. To do so, we need a domain name pointed at the server.
You can complete these prerequisites with the following tutorials:
How To Set Up a Host Name with DigitalOcean will show you how to manage a domain with the DigitalOcean control panel.
Initial Server Setup with Ubuntu 16.04 will set you up with a non-root, sudo-enabled user and enable Ubuntu's ufw firewall.
How To Install and Configure GitLab on Ubuntu 16.04 will show you how to install GitLab and configure it with a free TLS/SSL certificate from Let's Encrypt.
How To Set Up Continuous Integration Pipelines with GitLab CI on Ubuntu 16.04 will give you an overview of GitLab's CI service, and show you how to set up a CI runner to process jobs. We will build on top of the demo application and runner infrastructure created in that tutorial.
In the GitLab CI prerequisite tutorial, we set up a GitLab runner using sudo gitlab-runner register and its interactive configuration process. This runner is capable of running builds and tests of software inside isolated Docker containers.
However, to build Docker images, our runner needs full access to a Docker service itself. The recommended way to configure this is to use Docker's official docker-in-docker image to run the jobs. This requires granting the runner a special privileged execution mode, so we will create a second runner with this mode enabled.
Note: Granting the runner privileged mode essentially disables all of the security advantages of using containers. Unfortunately, the other methods of enabling Docker-capable runners carry similar security implications. Please see the official GitLab documentation on Docker Build to learn more about the different runner options and which is best for your situation.
Because there are security implications to using a privileged runner, we are going to create a project-specific runner that will only accept Docker jobs on our hello_hapi project (GitLab admins can always manually add this runner to other projects at a later time). From your hello_hapi project page, click Settings at the bottom of the left-hand menu, then click CI/CD in the submenu:
Now, click the Expand button next to the Runners settings section:
There will be some information about setting up a Specific Runner, including a registration token. Take note of this token. When we use it to register a new runner, the runner will be locked to this project only.
While we are on this page, click the Disable shared Runners button. We want to make sure our Docker jobs always run on our privileged runner. If a non-privileged shared runner were available, GitLab might choose to use it, which would result in build errors.
Log in to the server that has your current CI runner on it. If you don't have a machine set up with runners already, go back and complete the Installing the GitLab CI Runner Service section of the prerequisite tutorial before proceeding.
Now, run the following command to set up the privileged project-specific runner:
- sudo gitlab-runner register -n \
- --url https://gitlab.example.com/ \
- --registration-token your-token \
- --executor docker \
- --description "docker-builder" \
- --docker-image "docker:latest" \
- --docker-privileged
OutputRegistering runner... succeeded runner=61SR6BwV
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
Be sure to substitute your own information. We set all of our runner options on the command line instead of using the interactive prompts, because the prompts don't allow us to specify --docker-privileged mode.
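If you want to confirm on the runner host that the flag took effect, the registered runner's settings end up in /etc/gitlab-runner/config.toml. The sketch below greps an illustrative sample of that file (the real file requires sudo to read, and its contents will differ):

```shell
# Illustrative sample of the relevant part of /etc/gitlab-runner/config.toml.
cat > /tmp/sample-runner-config.toml <<'EOF'
[[runners]]
  name = "docker-builder"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true
EOF
grep -q 'privileged = true' /tmp/sample-runner-config.toml && echo "privileged mode enabled"
```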
Your runner is now set up, registered, and running. To verify, switch back to your browser. Click the wrench icon in the main GitLab menu bar, then click Runners in the left-hand menu. Your runners will be listed:
Now that we have a runner capable of building Docker images, let's set up a private Docker registry for it to push images to.
Setting up your own Docker registry lets you push and pull images from your own private server, increasing security and reducing the dependencies your workflow has on outside services.
GitLab will set up a private Docker registry with just a few configuration updates. First we will set up the URL where the registry will reside. Then we will (optionally) configure the registry to use an S3-compatible object storage service to store its data.
SSH in to your GitLab server, then open up the GitLab configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Scroll down to the Container Registry settings section. We are going to uncomment the registry_external_url line and set it to our GitLab hostname with a port number of 5555:
registry_external_url 'https://gitlab.example.com:5555'
Next, add the following two lines to tell the registry where to find our Let's Encrypt certificates:
registry_nginx['ssl_certificate'] = "/etc/letsencrypt/live/gitlab.example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.example.com/privkey.pem"
Save and close the file, then reconfigure GitLab:
- sudo gitlab-ctl reconfigure
Output. . .
gitlab Reconfigured!
Update the firewall to allow traffic to the registry port:
- sudo ufw allow 5555
Now switch to another machine with Docker installed and log in to the private Docker registry. If you don't have Docker on your local development computer, you can use any server set up to run your GitLab CI jobs, since it already has Docker installed:
- docker login gitlab.example.com:5555
You will be prompted for your username and password. Use your GitLab credentials to log in.
OutputLogin Succeeded
Success! The registry is set up and working. Currently it will store files on the GitLab server's local filesystem. If you would like to use an object storage service instead, continue with this section. If not, skip down to Step 3.
To set up an object storage backend for the registry, we need to know the following information about our object storage service:
Access Key
Secret Key
Region (us-east-1, for example) if using Amazon S3, or Region Endpoint if using an S3-compatible service (https://nyc.digitaloceanspaces.com)
Bucket Name
If you are using DigitalOcean Spaces, you can find out how to set up a new Space and get the above information by reading How To Create a DigitalOcean Space and API Key.
When you have your object storage information, open the GitLab configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Again, scroll down to the container registry section. Look for the registry['storage'] block, uncomment it, and update it to the following, again making sure to substitute your own information where appropriate:
registry['storage'] = {
's3' => {
'accesskey' => 'your-key',
'secretkey' => 'your-secret',
'bucket' => 'your-bucket-name',
'region' => 'nyc3',
'regionendpoint' => 'https://nyc3.digitaloceanspaces.com'
}
}
If you are using Amazon S3, you need only the region and not the regionendpoint. If you are using an S3-compatible service such as Spaces, you will need the regionendpoint. In this case, region doesn't actually configure anything and the value you enter doesn't matter, but it still needs to be present and not blank.
Save and close the file.
Note: There is currently a bug where the registry will shut down after thirty seconds if your object storage bucket is empty. To avoid this, put a file in your bucket before running the next step. You can remove it later, after the registry has added its own objects.
If you are using DigitalOcean Spaces, you can drag and drop a file to upload it using the Control Panel interface.
Reconfigure GitLab one more time:
- sudo gitlab-ctl reconfigure
On your other Docker machine, log in to the registry again to make sure everything is well:
- docker login gitlab.example.com:5555
You should get a Login Succeeded message.
Now that we have our Docker registry set up, let's update our application's CI configuration to build and test our app and push Docker images to our private registry.
Updating gitlab-ci.yaml and Building a Docker Image
Note: If you didn't complete the GitLab CI prerequisite article, you will need to copy the example repository to your GitLab server. Follow the Copying the Example Repository From GitHub section to do so.
To get our app building in Docker, we need to update the .gitlab-ci.yml file. You can edit this file right in GitLab by clicking it on the main project page and then clicking the Edit button. Alternately, you could clone the repository to your local machine, edit the file, and then git push it back to GitLab. That would look like this:
- git clone git@gitlab.example.com:sammy/hello_hapi.git
- cd hello_hapi
- # edit the file w/ your favorite editor
- git commit -am "updating ci configuration"
- git push
First, delete everything in the file, then paste in the following configuration:
image: docker:latest
services:
- docker:dind
stages:
- build
- test
- release
variables:
TEST_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:$CI_COMMIT_REF_NAME
RELEASE_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:latest
before_script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN gitlab.example.com:5555
build:
stage: build
script:
- docker build --pull -t $TEST_IMAGE .
- docker push $TEST_IMAGE
test:
stage: test
script:
- docker pull $TEST_IMAGE
- docker run $TEST_IMAGE npm test
release:
stage: release
script:
- docker pull $TEST_IMAGE
- docker tag $TEST_IMAGE $RELEASE_IMAGE
- docker push $RELEASE_IMAGE
only:
- master
Be sure to update the highlighted URLs and usernames with your own information, then save with the Commit changes button in GitLab. If you are updating the file outside of GitLab, commit the changes and git push back to GitLab.
This new config file tells GitLab to use the latest Docker image (image: docker:latest) and link it to the docker-in-docker service (docker:dind). It then defines build, test, and release stages. The build stage builds the Docker image using the Dockerfile provided in the repository, then uploads it to our Docker image registry. If this succeeds, the test stage will download the image we just built and run the npm test command inside it. If the test stage is successful, the release stage will pull the image, tag it as hello_hapi:latest, and push it back to the registry.
Depending on your workflow, you could also add additional test stages, or even deploy stages that push the app to a staging or production environment.
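As a sketch of what such an extension might look like, a hypothetical deploy job could be appended to the configuration above. The deploy-to-staging.sh script is a placeholder, not part of the demo app, and the job would need to fit your own environment:

```yaml
# Hypothetical extra stage -- adjust names and the deploy command to your setup.
deploy:
  stage: deploy
  script:
    - docker pull $RELEASE_IMAGE
    - ./deploy-to-staging.sh $RELEASE_IMAGE   # placeholder deploy script
  only:
    - master
```

Remember that any new stage name also has to be listed under stages: in the same file, or GitLab will not run the job.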
Updating the configuration file should have triggered a new build. Return to the hello_hapi project in GitLab and click the CI status indicator for the commit:
On the resulting page you can then click any of the stages to see their progress:
Eventually, all stages should indicate that they were successful by showing green check mark icons. We can find the Docker images that were just built by clicking the Registry item in the left-hand menu:
If you click the little “document” icon next to the image name, it will copy the appropriate docker pull ... command to your clipboard. You can then pull and run your image:
- docker pull gitlab.example.com:5555/sammy/hello_hapi:latest
- docker run -it --rm -p 3000:3000 gitlab.example.com:5555/sammy/hello_hapi:latest
Output> hello@1.0.0 start /usr/src/app
> node app.js
Server running at: http://56fd5df5ddd3:3000
The image has been pulled down from the registry and started in a container. Switch to your browser and connect to the application on port 3000 to test it. In this case, we are running the container on our local machine, so we can access it via localhost at the following URL:
http://localhost:3000/hello/test
OutputHello, test!
Success! You can stop the container with CTRL-C. From now on, every time we push new code to the master branch of our repository, we will automatically build and test a new hello_hapi:latest image.
In this tutorial we set up a new GitLab runner to build Docker images, created a private Docker registry to store them in, and updated a Node.js app to be built and tested inside of Docker containers.
To learn more about the various components used in this setup, you can read the official documentation of GitLab CE, GitLab Container Registry, and Docker.
By Brian Boucheron
Software version control systems enable you to keep track of your software at the source level. With versioning tools, you can track changes, revert to previous stages, and branch to create alternate versions of files and directories.
Git is one of the most popular version control systems currently available. Many projects’ files are maintained in a Git repository, and sites like GitHub, GitLab, and Bitbucket help to facilitate software development project sharing and collaboration.
In this tutorial, we’ll install and configure Git on a Debian 9 server. We will cover how to install the software in two different ways, each of which have their own benefits depending on your specific needs.
In order to complete this tutorial, you should have a non-root user with sudo privileges on a Debian 9 server. To learn how to achieve this setup, follow our Debian 9 initial server setup guide.
With your server and user set up, you are ready to begin.
Debian’s default repositories provide you with a fast method to install Git. Note that the version you install via these repositories may be older than the newest version currently available. If you need the latest release, consider moving to the next section of this tutorial to learn how to install and compile Git from source.
First, use the apt package management tools to update your local package index. With the update complete, you can download and install Git:
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running the following command:
- git --version
Outputgit version 2.11.0
With Git successfully installed, you can now move on to the Setting Up Git section of this tutorial to complete your setup.
A more flexible method of installing Git is to compile the software from source. This takes longer and will not be maintained through your package manager, but it will allow you to download the latest release and will give you some control over the options you include if you wish to customize.
Before you begin, you need to install the software that Git depends on. This is all available in the default repositories, so we can update our local package index and then install the packages.
- sudo apt update
- sudo apt install make libssl-dev libghc-zlib-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip
After you have installed the necessary dependencies, you can go ahead and get the version of Git you want by visiting the Git project’s mirror on GitHub, available via the following URL:
https://github.com/git/git
From here, be sure that you are on the master branch. Click on the Tags link and select your desired Git version. Unless you have a specific reason to download a release candidate (marked as rc), avoid these versions, as they may be unstable.
Next, on the right side of the page, click on the Clone or download button, then right-click on Download ZIP and copy the link address that ends in .zip
.
Back on your Debian 9 server, move into the tmp
directory to download temporary files.
- cd /tmp
From there, you can use the wget command to download the copied zip file link. We'll specify a new name for the file: git.zip.
- wget https://github.com/git/git/archive/v2.18.0.zip -O git.zip
Unzip the file that you downloaded and move into the resulting directory by typing:
- unzip git.zip
- cd git-*
Now, you can make the package and install it by typing these two commands:
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
To ensure that the install was successful, you can type git --version
and you should receive relevant output that specifies the current installed version of Git.
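A quick check of both the resolved binary and its version might look like this (on Debian, /usr/local/bin typically precedes /usr/bin in PATH, so the freshly built copy is the one found):

```shell
# Show which git binary the shell resolves, then report its version.
command -v git
git --version
```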
Now that you have Git installed, if you want to upgrade to a later version, you can clone the repository, and then build and install. To find the URL to use for the clone operation, navigate to the branch or tag that you want on the project’s GitHub page and then copy the clone URL on the right side:
At the time of writing, the relevant URL is:
https://github.com/git/git.git
Change to your home directory, and use git clone
on the URL you just copied:
- cd ~
- git clone https://github.com/git/git.git
This will create a new directory within your current directory where you can rebuild the package and reinstall the newer version, just like you did above. This will overwrite your older version with the new version:
- cd git
- make prefix=/usr/local all
- sudo make prefix=/usr/local install
With this complete, you can be sure that your version of Git is up to date.
Now that you have Git installed, you should configure it so that the generated commit messages will contain your correct information.
This can be achieved by using the git config command. Specifically, we need to provide our name and email address because Git embeds this information into each commit we make. We can go ahead and add this information by typing:
- git config --global user.name "Sammy"
- git config --global user.email "sammy@domain.com"
We can see all of the configuration items that have been set by typing:
- git config --list
Output
user.name=Sammy
user.email=sammy@domain.com
...
The information you enter is stored in your Git configuration file, which you can optionally edit by hand with a text editor like this:
- nano ~/.gitconfig
[user]
name = Sammy
email = sammy@domain.com
There are many other options that you can set, but these two are essential. If you skip this step, you'll likely see warnings when you commit to Git. This makes more work for you, because you will then have to amend the commits you have made with the corrected information.
You should now have Git installed and ready to use on your system.
To learn more about how to use Git, check out these articles and series:
]]>GitLab CE, or Community Edition, is an open-source application primarily used to host Git repositories, with additional development-related features like issue tracking. It is designed to be hosted using your own infrastructure, and provides flexibility in deploying as an internal repository store for your development team, a public way to interface with users, or a means for contributors to host their own projects.
The GitLab project makes it relatively straightforward to set up a GitLab instance on your own hardware with an easy installation mechanism. In this guide, we will cover how to install and configure GitLab on a Debian 9 server.
For this tutorial, you will need:
A sudo user and basic firewall configured on the server. To set this up, follow our Debian 9 initial server setup guide.
The published GitLab hardware requirements recommend using a server with:
Although you may be able to get by with substituting some swap space for RAM, it is not recommended. For this guide we will assume that you have the above resources as a minimum.
Before we can install GitLab itself, it is important to install some of the software that it leverages during installation and on an ongoing basis. Fortunately, all of the required software can be easily installed from Debian’s default package repositories.
Since this is our first time using apt
during this session, we can refresh the local package index and then install the dependencies by typing:
- sudo apt update
- sudo apt install ca-certificates curl openssh-server postfix
You may have some of this software installed already. For the postfix
installation, select Internet Site when prompted. On the next screen, enter your server’s domain name to configure how the system will send mail.
Now that the dependencies are in place, we can install GitLab itself. This is a straightforward process that leverages an installation script to configure your system with the GitLab repositories.
Move into the /tmp
directory and then download the installation script:
- cd /tmp
- curl -LO https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh
Feel free to examine the downloaded script to ensure that you are comfortable with the actions it will take. A hosted version of the script is also available on packages.gitlab.com. To read through the downloaded copy:
- less /tmp/script.deb.sh
Once you are satisfied with the safety of the script, run the installer:
- sudo bash /tmp/script.deb.sh
The script will set up your server to use the GitLab maintained repositories. This lets you manage GitLab with the same package management tools you use for your other system packages. Once this is complete, you can install the actual GitLab application with apt:
- sudo apt install gitlab-ce
This will install the necessary components on your system.
Before you configure GitLab, you will need to ensure that your firewall rules are permissive enough to allow web traffic. If you followed the guide linked in the prerequisites, you will have a ufw
firewall enabled.
View the current status of your active firewall by typing:
- sudo ufw status
Output
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
As you can see, the current rules allow SSH traffic through, but access to other services is restricted. Since GitLab is a web application, we should allow HTTP access. Because we will be taking advantage of GitLab’s ability to request and enable a free TLS/SSL certificate from Let’s Encrypt, let’s also allow HTTPS access.
We can allow access to both HTTP and HTTPS by allowing the “WWW Full” app profile through our firewall. If you didn’t already have OpenSSH traffic enabled, you should allow that traffic now too:
- sudo ufw allow "WWW Full"
- sudo ufw allow OpenSSH
Check the ufw status
again, this time appending the verbose
flag; you should see access configured to at least these two services:
- sudo ufw status verbose
Output
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To Action From
-- ------ ----
22/tcp (OpenSSH) ALLOW IN Anywhere
80,443/tcp (WWW Full) ALLOW IN Anywhere
22/tcp (OpenSSH (v6)) ALLOW IN Anywhere (v6)
80,443/tcp (WWW Full (v6)) ALLOW IN Anywhere (v6)
The above output indicates that the GitLab web interface will be accessible once we configure the application.
Before you can use the application, you need to update the configuration file and run a reconfiguration command. First, open GitLab's configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Near the top is the external_url
configuration line. Update it to match your domain. Change http
to https
so that GitLab will automatically redirect users to the site protected by the Let’s Encrypt certificate:
##! For more details on configuring external_url see:
##! https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the-external-url-for-gitlab
external_url 'https://example.com'
Next, look for the letsencrypt['contact_emails']
setting. This setting defines a list of email addresses that the Let’s Encrypt project can use to contact you if there are problems with your domain. It’s a good idea to uncomment and fill this out so that you will know of any issues:
letsencrypt['contact_emails'] = ['sammy@example.com']
Save and close the file. Run the following command to reconfigure GitLab:
- sudo gitlab-ctl reconfigure
This will initialize GitLab using the information it can find about your server. This is a completely automated process, so you will not have to answer any prompts. The process will also configure a Let’s Encrypt certificate for your domain.
With GitLab running and access permitted, we can perform some initial configuration of the application through the web interface.
Visit the domain name of your GitLab server in your web browser:
https://example.com
The first time you visit, you should see an initial prompt to set a password for the administrative account:
In the initial password prompt, supply and confirm a secure password for the administrative account. Click on the Change your password button when you are finished.
You will be redirected to the conventional GitLab login page:
Here, you can log in with the password you just set. The credentials are:
Enter these values into the fields for existing users and click the Sign in button. You will be signed into the application and taken to a landing page that prompts you to begin adding projects:
You can now make some simple changes to get GitLab set up the way you’d like.
One of the first things you should do after a fresh installation is get your profile into better shape. GitLab selects some reasonable defaults, but these are not usually appropriate once you start using the software.
To make the necessary modifications, click on the user icon in the upper-right hand corner of the interface. In the drop down menu that appears, select Settings:
You will be taken to the Profile section of your settings:
Adjust the Name and Email address from “Administrator” and “admin@example.com” to something more accurate. The name you select will be displayed to other users, while the email will be used for default avatar detection, notifications, Git actions through the interface, etc.
Click on the Update Profile settings button at the bottom when you are done:
A confirmation email will be sent to the address you provided. Follow the instructions in the email to confirm your account so that you can begin using it with GitLab.
Next, click on the Account item in the left-hand menu bar:
Here, you can find your private API token or configure two-factor authentication. However, the functionality we are interested in at the moment is the Change username section.
By default, the first administrative account is given the name root. Since this is a known account name, it is more secure to change this to a different name. You will still have administrative privileges; the only thing that will change is the name. Replace root with your preferred username:
Click on the Update username button to make the change:
The next time you log in to GitLab, remember to use your new username.
In most cases, you will want to use SSH keys with Git to interact with your GitLab projects. To do this, you need to add your SSH public key to your GitLab account.
If you already have an SSH key pair created on your local computer, you can usually view the public key by typing:
- cat ~/.ssh/id_rsa.pub
You should see a large chunk of text, like this:
Output
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy this text and head back to the Settings page in GitLab’s web interface.
If, instead, you get a message that looks like this, you do not yet have an SSH key pair configured on your machine:
Output
cat: /home/sammy/.ssh/id_rsa.pub: No such file or directory
If this is the case, you can create an SSH key pair by typing:
- ssh-keygen
Accept the defaults and optionally provide a password to secure the key locally:
Output
Generating public/private rsa key pair.
Enter file in which to save the key (/home/sammy/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sammy/.ssh/id_rsa.
Your public key has been saved in /home/sammy/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I8v5/M5xOicZRZq/XRcSBNxTQV2BZszjlWaIHi5chc0 sammy@gitlab.docsthat.work
The key's randomart image is:
+---[RSA 2048]----+
| ..%o==B|
| *.E =.|
| . ++= B |
| ooo.o . |
| . S .o . .|
| . + .. . o|
| + .o.o ..|
| o .++o . |
| oo=+ |
+----[SHA256]-----+
Once you have this, you can display your public key as above by typing:
- cat ~/.ssh/id_rsa.pub
Output
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMuyMtMl6aWwqBCvQx7YXvZd7bCFVDsyln3yh5/8Pu23LW88VXfJgsBvhZZ9W0rPBGYyzE/TDzwwITvVQcKrwQrvQlYxTVbqZQDlmsC41HnwDfGFXg+QouZemQ2YgMeHfBzy+w26/gg480nC2PPNd0OG79+e7gFVrTL79JA/MyePBugvYqOAbl30h7M1a7EHP3IV5DQUQg4YUq49v4d3AvM0aia4EUowJs0P/j83nsZt8yiE2JEYR03kDgT/qziPK7LnVFqpFDSPC3MR3b8B354E9Af4C/JHgvglv2tsxOyvKupyZonbyr68CqSorO2rAwY/jWFEiArIaVuDiR9YM5 sammy@mydesktop
Copy the block of text that’s displayed and head back to your Settings in GitLab’s web interface.
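If you want to double-check which key you are pasting, ssh-keygen can print a key's fingerprint, which matches what GitLab displays once the key is added. The demonstration below generates a throwaway key in a temporary directory; for your real key, point -lf at ~/.ssh/id_rsa.pub instead:

```shell
# Generate a throwaway RSA key in a temporary directory and print its
# fingerprint; for your real key, run: ssh-keygen -lf ~/.ssh/id_rsa.pub
dir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$dir/id_rsa"
ssh-keygen -lf "$dir/id_rsa.pub"
```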
Click on the SSH Keys item in the left-hand menu:
In the provided space paste the public key you copied from your local machine. Give it a descriptive title, and click the Add key button:
You should now be able to manage your GitLab projects and repositories from your local machine without having to provide your GitLab account credentials.
You may have noticed that it is possible for anyone to sign up for an account when you visit your GitLab instance's landing page. This may be what you want if you are looking to host a public project. However, more restrictive settings are often desirable.
To begin, make your way to the administrative area by clicking on the wrench icon in the main menu bar at the top of the page:
On the page that follows, you can see an overview of your GitLab instance as a whole. To adjust the settings, click on the Settings item at the bottom of the left-hand menu:
You will be taken to the global settings for your GitLab instance. Here, you can adjust a number of settings that affect whether new users can sign up and their level of access.
If you wish to disable sign-ups completely (you can still manually create accounts for new users), scroll down to the Sign-up Restrictions section.
Deselect the Sign-up enabled check box:
Scroll down to the bottom and click on the Save changes button:
The sign-up section should now be removed from the GitLab landing page.
If you are using GitLab as part of an organization that provides email addresses associated with a domain, you can restrict sign-ups by domain instead of completely disabling them.
In the Sign-up Restrictions section, select the Send confirmation email on sign-up box, which will allow users to log in only after they’ve confirmed their email.
Next, add your domain or domains to the Whitelisted domains for sign-ups box, one domain per line. You can use the asterisk “*” to specify wildcard domains:
Scroll down to the bottom and click on the Save changes button:
The sign-up section should now be removed from the GitLab landing page.
By default, new users can create up to 10 projects. If you wish to allow outside users to sign up for visibility and participation, but want to restrict their ability to create new projects, you can do so in the Account and Limit Settings section.
Inside, you can change the Default projects limit to 0 to prevent new users from creating any projects:
New users can still be added to projects manually and will have access to internal or public projects created by other users.
Scroll down to the bottom and click on the Save changes button:
New users will now be able to create accounts, but unable to create projects.
By default, GitLab has a scheduled task set up to renew Let's Encrypt certificates after midnight every fourth day, with the exact minute based on your external_url. You can modify these settings in the /etc/gitlab/gitlab.rb file. For example, if you wanted to renew every 7th day at 12:30, you could configure this as follows:
letsencrypt['auto_renew_hour'] = "12"
letsencrypt['auto_renew_minute'] = "30"
letsencrypt['auto_renew_day_of_month'] = "*/7"
You can also disable auto-renewal by adding an additional setting to /etc/gitlab/gitlab.rb:
letsencrypt['auto_renew'] = false
With auto-renewals in place, you will not need to worry about service interruptions.
You should now have a working GitLab instance hosted on your own server. You can begin to import or create new projects and configure the appropriate level of access for your team. GitLab is regularly adding features and making updates to their platform, so be sure to check out the project’s home page to stay up-to-date on any improvements or important notices.
]]>When working on a project with multiple developers, it can be frustrating when one person pushes to a repository and then another begins making changes on an outdated version of the code. Mistakes like these cost time, which makes it worthwhile to set up a script to keep your repositories in sync. You can also apply this method in a production environment to push hotfixes and other changes quickly.
While other solutions exist to complete this specific task, writing your own script is a flexible option that leaves room for customization in the future.
GitHub lets you configure webhooks for your repositories, which send HTTP requests to a URL you specify when certain events happen. For example, you can use a webhook to notify you when someone creates a pull request or pushes new code.
In this guide you will develop a Node.js server that listens for a GitHub webhook notification whenever you or someone else pushes code to GitHub. This script will automatically update a repository on a remote server with the most recent version of the code, eliminating the need to log in to a server to pull new commits.
To complete this tutorial, you will need:
A sudo non-root user and a firewall configured on the server, set up by following the initial server setup guide.
Node.js and npm installed on the remote server using the official PPA, as explained in How To Install Node.js on Ubuntu 16.04. Installing the distro-stable version is sufficient, as it provides us with the recommended version without any additional configuration.
We'll start by configuring a webhook for your repository. This step is important because without it, GitHub doesn't know what events to send when things happen, or where to send them. We'll create the webhook first, and then create the server that will respond to its requests.
Sign in to your GitHub account and navigate to the repository you wish to monitor. Click on the Settings tab in the top menu bar on your repository’s page, then click Webhooks in the left navigation menu. Click Add Webhook in the right corner and enter your account password if prompted. You’ll see a page that looks like this:
Payload URL: http://your_server_ip:8080. This is the address and port of the Node.js server we'll write soon.
Content type: application/json. The script we will write will expect JSON data and won't be able to understand other data types.
Secret: a secure, random string. The Node.js script will use this value to validate that requests really come from GitHub.
The ping will fail at first, but rest assured your webhook is now configured. Now let's get the repository cloned to the server.
Our script can update a repository, but it cannot handle setting up the repository initially, so we’ll do that now. Log in to your server:
- ssh sammy@your_server_ip
Ensure you’re in your home directory. Then use Git to clone your repository. Be sure to replace sammy
with your GitHub username and hello_hapi
with the name of your Github project.
- cd
- git clone https://github.com/sammy/hello_hapi.git
This will create a new directory containing your project. You’ll use this directory in the next step.
With your project cloned, you can create the webhook script.
Let’s create our server to listen for those webhook requests from GitHub. We’ll write a Node.js script that launches a web server on port 8080
. The server will listen for requests from the webhook, verify the secret we specified, and pull the latest version of the code from GitHub.
Navigate to your home directory:
- cd ~
Create a new directory for your webhook script called NodeWebhooks:
- mkdir ~/NodeWebhooks
Then navigate to the new directory:
- cd ~/NodeWebhooks
Create a new file called webhook.js inside of the NodeWebhooks directory:
- nano webhook.js
Add these two lines to the script:
const secret = "your_secret_here";
const repo = "/home/sammy/hello_hapi";
The first line defines a variable to hold the secret you created in Step 1 which verifies that requests come from GitHub. The second line defines a variable that holds the full path to the repository you want to update on your local disk. This should point to the repository you checked out in Step 2.
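The value you put in place of your_secret_here must exactly match the Secret you entered in the webhook form in Step 1. If you still need to pick one, a random hex string works well; this assumes openssl is available, as it is on most servers:

```shell
# Print 20 random bytes as 40 hexadecimal characters -- a reasonable
# value for the webhook's secret field.
openssl rand -hex 20
```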
Next, add these lines to import the http and crypto libraries into the script. We'll use these to create our web server and to hash the secret so we can compare it with what we receive from GitHub:
const http = require('http');
const crypto = require('crypto');
Next, include the child_process
library so you can execute shell commands from your script:
const exec = require('child_process').exec;
Next, add this code to define a new web server that handles GitHub webhook requests and pulls down the new version of the code if it’s an authentic request:
http.createServer(function (req, res) {
    let body = '';
    // Buffer the request body; large payloads can arrive in several chunks.
    req.on('data', function(chunk) {
        body += chunk.toString();
    });
    req.on('end', function() {
        let sig = "sha1=" + crypto.createHmac('sha1', secret).update(body).digest('hex');
        if (req.headers['x-hub-signature'] == sig) {
            exec('cd ' + repo + ' && git pull');
        }
        res.end();
    });
}).listen(8080);
The http.createServer() function starts a web server on port 8080 which listens for incoming requests from GitHub. For security purposes, we validate that the secret included in the request matches the one we specified when creating the webhook in Step 1. GitHub passes an HMAC-SHA1 digest of the payload, keyed with the secret, in the x-hub-signature header, so we compute the same digest over the body we received and compare the two.
If the request is authentic, we execute a shell command to update our local repository using git pull.
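If you'd like to sanity-check the signature scheme outside of Node, the same digest can be reproduced with openssl. The secret and payload below are placeholders for illustration; any values will do:

```shell
secret="your_secret_here"
payload='{"zen":"Keep it logically awesome."}'
# HMAC-SHA1 of the payload, keyed by the secret -- the same value the
# script computes with crypto.createHmac and compares to X-Hub-Signature.
sig="sha1=$(printf '%s' "$payload" | openssl dgst -sha1 -hmac "$secret" | awk '{print $NF}')"
echo "$sig"
```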
The completed script looks like this:
const secret = "your_secret_here";
const repo = "/home/sammy/hello_hapi";
const http = require('http');
const crypto = require('crypto');
const exec = require('child_process').exec;
http.createServer(function (req, res) {
    let body = '';
    // Buffer the request body; large payloads can arrive in several chunks.
    req.on('data', function(chunk) {
        body += chunk.toString();
    });
    req.on('end', function() {
        let sig = "sha1=" + crypto.createHmac('sha1', secret).update(body).digest('hex');
        if (req.headers['x-hub-signature'] == sig) {
            exec('cd ' + repo + ' && git pull');
        }
        res.end();
    });
}).listen(8080);
If you followed the initial server setup guide, you will need to allow this web server to communicate with the outside web by allowing traffic on port 8080:
- sudo ufw allow 8080/tcp
Now that our script is in place, let’s make sure that it is working properly.
We can test our webhook by using node
to run it in the command line. Start the script and leave the process open in your terminal:
- cd ~/NodeWebhooks
- nodejs webhook.js
Return to your project’s page on Github.com. Click on the Settings tab in the top menu bar on your repository’s page, followed by clicking Webhooks in the left navigation menu. Click Edit next to the webhook you set up in Step 1. Scroll down until you see the Recent Deliveries section, as shown in the following image:
Press the three dots to the far right to reveal the Redeliver button. With the node server running, click Redeliver to send the request again. Once you confirm you want to send the request, you’ll see a successful response. This is indicated by a 200 OK
response code after redelivering the ping.
Press CTRL+C to stop the node webhook server. We can now move on to making sure our script runs in the background and starts at boot.
systemd is the service manager that Ubuntu uses to control services. We will set up a service that will allow us to start our webhook script at boot and use systemd commands to manage it like we would with any other service.
Start by creating a new service file:
- sudo nano /etc/systemd/system/webhook.service
Add the following configuration to the service file, which tells systemd how to run the script: where to find our node script, which user should run it, and a description of the service.
Make sure to replace sammy
with your username.
[Unit]
Description=Github webhook
After=network.target
[Service]
Environment=NODE_PORT=8080
Type=simple
User=sammy
ExecStart=/usr/bin/nodejs /home/sammy/NodeWebhooks/webhook.js
Restart=on-failure
[Install]
WantedBy=multi-user.target
Enable the new service so it starts when the system boots:
- sudo systemctl enable webhook.service
Now start the service:
- sudo systemctl start webhook
Ensure the service is started:
- sudo systemctl status webhook
You’ll see the following output indicating that the service is active:
Output
● webhook.service - Github webhook
Loaded: loaded (/etc/systemd/system/webhook.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2018-08-17 19:28:41 UTC; 6s ago
Main PID: 9912 (nodejs)
Tasks: 6
Memory: 7.6M
CPU: 95ms
CGroup: /system.slice/webhook.service
└─9912 /usr/bin/nodejs /home/sammy/NodeWebhooks/webhook.js
You are now able to push new commits to your repository and see the changes on your server.
From your desktop machine, clone the repository:
- git clone https://github.com/sammy/hello_hapi.git
Make a change to one of the files in the repository. Then commit the file and push your code to GitHub.
- git add index.js
- git commit -m "Update index file"
- git push origin master
The webhook will fire and your changes will appear on your server.
You have set up a Node.js script which will automatically deploy new commits to a remote repository. You can use this process to set up additional repositories that you’d like to monitor. You could even configure it to deploy a website or application to production when you push your repository.
]]>public
directory. (I manage my server with ServerPilot, if that matters.)
This works fine on 5 of the 6 sites I run, but sometime in the last month, this began failing on one of my sites. Specifically, the build script now lacks permission to move files into the public
directory for my app (the pull and build steps work fine).
As this only happens in one of my applications, I suspect there’s a permissions issue somewhere, and I’m not sure how to debug that or fix it. Any help would be much appreciated!
For context, here’s the deployment script.
<?php
/**
* Automated deploy from GitHub
*
* https://developer.github.com/webhooks/
* Template from ServerPilot (https://serverpilot.io/community/articles/how-to-automatically-deploy-a-git-repo-from-bitbucket.html)
* Hash validation from Craig Blanchette (http://isometriks.com/verify-github-webhooks-with-php)
*/
// Variables
$secret = getenv('GH_DEPLOY_SECRET');
$repo_dir = '/srv/users/serverpilot/apps/gomakethings/build';
$web_root_dir = '/srv/users/serverpilot/apps/gomakethings/public';
$rendered_dir = '/public';
$hugo_path = '/usr/local/bin/hugo';
// Validate hook secret
if ($secret !== NULL) {
// Get signature
$hub_signature = $_SERVER['HTTP_X_HUB_SIGNATURE'];
// Make sure signature is provided
if (!isset($hub_signature)) {
file_put_contents('deploy.log', date('m/d/Y h:i:s a') . ' Error: HTTP header "X-Hub-Signature" is missing.' . "\n", FILE_APPEND);
die('HTTP header "X-Hub-Signature" is missing.');
} elseif (!extension_loaded('hash')) {
file_put_contents('deploy.log', date('m/d/Y h:i:s a') . ' Error: Missing "hash" extension to check the secret code validity.' . "\n", FILE_APPEND);
die('Missing "hash" extension to check the secret code validity.');
}
// Split signature into algorithm and hash
list($algo, $hash) = explode('=', $hub_signature, 2);
// Get payload
$payload = file_get_contents('php://input');
// Calculate hash based on payload and the secret
$payload_hash = hash_hmac($algo, $payload, $secret);
// Check if hashes are equivalent
if (!hash_equals($hash, $payload_hash)) {
// Kill the script or do something else here.
file_put_contents('deploy.log', date('m/d/Y h:i:s a') . ' Error: Bad Secret' . "\n", FILE_APPEND);
die('Bad secret');
}
};
// Parse data from GitHub hook payload
$data = json_decode($_POST['payload']);
$commit_message = '';
if (empty($data->commits)){
// When merging and pushing to GitHub, the commits array will be empty.
// In this case there is no way to know what branch was pushed to, so we will do an update.
$commit_message .= 'true';
} else {
foreach ($data->commits as $commit) {
$commit_message .= $commit->message;
}
}
if (!empty($commit_message)) {
// Do a git checkout, run Hugo, and copy files to public directory
exec('cd ' . $repo_dir . ' && git fetch --all && git reset --hard origin/master');
exec('cd ' . $repo_dir . ' && ' . $hugo_path);
exec('cd ' . $repo_dir . ' && cp -r ' . $repo_dir . $rendered_dir . '/. ' . $web_root_dir);
// Log the deployment
file_put_contents('deploy.log', date('m/d/Y h:i:s a') . " Deployed branch: " . $branch . " Commit: " . $commit_message . "\n", FILE_APPEND);
}
]]>But when trying to push it just stops here:
RC@DESKTOP-QU1JT41 MINGW64 ~/Documents/Nodejs Course/server (master)
$ git push live master
The server's host key is not cached in the registry. You
have no guarantee that the server is the computer you
think it is.
The server's ssh-ed25519 key fingerprint is:
SERVER FINGER PRINT IS HERE
If you trust this host, enter "y" to add the key to
PuTTY's cache and carry on connecting.
If you want to carry on connecting just once, without
adding the key to the cache, enter "n".
If you do not trust this host, press Return to abandon the
connection.
Store key in cache? (y/n) y
]]>Version control systems help you share and collaborate on software development projects. Git is one of the most popular version control systems currently available.
This tutorial will walk you through installing and configuring Git on an Ubuntu 18.04 server. For a more detailed version of this tutorial, with better explanations of each step, please refer to How To Install Git on Ubuntu 18.04.
Logged in to your Ubuntu 18.04 server as a non-root user with sudo privileges, first update your default packages and then install Git:
- sudo apt update
- sudo apt install git
You can confirm that you have installed Git correctly by running this command and receiving output similar to the following:
- git --version
Output
git version 2.17.1
Now that you have Git installed, you should configure it with your information to prevent warnings when you commit:
- git config --global user.name "Your Name"
- git config --global user.email "youremail@domain.com"
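To confirm the identity was recorded, you can list the configuration. The sketch below runs against a throwaway HOME so that nothing on your machine is changed; in practice you would simply run git config --list after the two commands above:

```shell
# Run against a throwaway HOME so your real ~/.gitconfig is untouched;
# substitute your own name and email in practice.
export HOME=$(mktemp -d)
git config --global user.name "Your Name"
git config --global user.email "youremail@domain.com"
git config --global --list
```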
This information is stored in your Git configuration file, which you can edit by hand with a text editor such as nano:
- nano ~/.gitconfig
[user]
name = Your Name
email = youremail@domain.com
Here are links to more detailed tutorials that are related to this guide:
]]>GitLab is an open-source tool used by software teams to manage their complete development and delivery lifecycle. GitLab provides a broad set of functionality: issue tracking, git repositories, continuous integration, container registry, deployment, and monitoring. These features are all built from the ground up as a single application. You can host GitLab on your own servers or use GitLab.com, a cloud service where open-source projects get all the top-tier features for free.
GitLab’s continuous integration / continuous delivery (CI/CD) functionality is an effective way to build the habit of testing all code before it’s deployed. GitLab CI/CD is also highly scalable thanks to an additional tool, GitLab Runner, which automates scaling your build queue in order to avoid long wait times for development teams trying to release code.
In this guide, we will demonstrate how to configure a highly scalable GitLab infrastructure that manages its own costs, and automatically responds to load by increasing and decreasing available server capacity.
We’re going to build a scalable CI/CD process on DigitalOcean that automatically responds to demand by creating new servers on the platform and destroys them when the queue is empty.
These reusable servers are spawned by the GitLab Runner process and are automatically deleted when no jobs are running, reducing costs and administration overhead for your team.
As we’ll explain in this tutorial, you are in control of how many machines are created at any given time, as well as the length of time they’re retained before being destroyed.
We’ll be using three separate servers to build this project, so let’s go over terminology first:
GitLab: Your hosted GitLab instance or self-hosted instance where your code repositories are stored.
GitLab Bastion: The bastion server or Droplet is the core of what we’ll be configuring. It is the control instance that is used to interact with the DigitalOcean API to create Droplets and destroy them when necessary. No jobs are executed on this server.
GitLab Runners: Your runners are transient servers or Droplets that are created on the fly by the bastion server when needed to execute a CI/CD job in your build queue. These servers are disposable, and are where your code is executed or tested before your build is marked as passing or failing.
By leveraging each of the GitLab components, the CI/CD process will enable you to scale responsively based on demands. With these goals in mind, we are ready to begin setting up our continuous deployment with GitLab and DigitalOcean.
This tutorial will assume you have already configured GitLab on your own server or through the hosted service, and that you have an existing DigitalOcean account.
To set this up on an Ubuntu 16.04 Droplet, you can use the DigitalOcean one-click image, or follow our guide: “How To Install and Configure GitLab on Ubuntu 16.04.”
For the purposes of this tutorial, we assume you have private networking enabled on this Droplet, which you can achieve by following our guide on “How To Enable DigitalOcean Private Networking on Existing Droplets,” but it is not compulsory.
Throughout this tutorial, we’ll be using non-root users with admin privileges on our Droplets.
To begin, we will create a new example project in your existing GitLab instance containing a sample Node.js application.
Login to your GitLab instance and click the plus icon, then select New project from the dropdown menu.
On the new project screen, select the Import project tag, then click Repo by URL to import our example project directly from GitHub.
Paste the below clone URL into the Git repository URL:
https://github.com/do-community/hello_hapi.git
This repository is a basic JavaScript application for the purposes of demonstration, which we won’t be running in production. To complete the import, click the New Project button.
Your new project will now be in GitLab and we can get started setting up our CI pipeline.
Our GitLab Runner requires specific configuration, as we’re planning to programmatically create Droplets to handle CI load as it grows and shrinks.
We will create two types of machines in this tutorial: a bastion instance, which controls and spawns new machines, and our runner instances, which are temporary servers spawned by the bastion Droplet to build code when required. The bastion instance uses Docker to create your runners.
Here are the DigitalOcean products we’ll use, and what each component is used for:
Flexible Droplets — We will create memory-optimized Droplets for our GitLab Runners as it’s a memory-intensive process which will run using Docker for containerization. You can shrink or grow this Droplet in the future as needed, however we recommend the flexible Droplet option as a starting point to understand how your pipeline will perform under load.
DigitalOcean Spaces (Object Storage) — We will use DigitalOcean Spaces to persist cached build components across your runners as they’re created and destroyed. This reduces the time required to set up a new runner when the CI pipeline is busy, and allows new runners to pick up where others left off immediately.
Private Networking — We will create a private network for your bastion Droplet and GitLab runners to ensure secure code compilation and to reduce firewall configuration required.
To start, we’ll create the bastion Droplet. Create a new Droplet, then under choose an image, select the One-click apps tab. From there, select Docker 17.12.0-ce on 16.04 (note that this version is current at the time of writing), then choose the smallest Droplet size available, as our bastion Droplet will manage the creation of other Droplets rather than actually perform tests.
It is recommended that you create your server in a data center that includes DigitalOcean Spaces in order to use the object storage caching features mentioned earlier.
Select both the Private networking and Monitoring options, then click Create Droplet.
We also need to set up the storage space that will be used for caching. Follow the steps in “How To Create a DigitalOcean Space and API Key” to create a new Space in the same data center as your hosted GitLab instance (or the one nearest to it), along with an API Key.
Note this key down, as we’ll need it later in the tutorial.
Now it’s time to get our CI started!
With the fresh Droplet ready, we can now configure GitLab Runner. We’ll be installing scripts from GitLab and GitHub repositories.
As a best practice, be sure to inspect scripts to confirm what you will be installing prior to running the full commands below.
Connect to the Droplet using SSH, move into the /tmp
directory, then add the official GitLab Runner repository to Ubuntu’s package manager:
- cd /tmp
- curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
Once added, install the GitLab Runner application:
- sudo apt-get install gitlab-runner
We also need to install Docker Machine, which is an additional Docker tool that assists with automating the deployment of containers on cloud providers:
- curl -L https://github.com/docker/machine/releases/download/v0.14.0/docker-machine-`uname -s`-`uname -m` > /tmp/docker-machine
- sudo install /tmp/docker-machine /usr/local/bin/docker-machine
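The `install` command used above differs from a plain `cp` or `mv`: it copies the file into place and sets executable permissions in a single step. This illustrative sketch uses a throwaway stub file (not the real Docker Machine binary) to show the effect:

```shell
# Illustrative only: `install` copies a file and marks it executable in one step
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho "stub"\n' > "$tmpdir/docker-machine-stub"
install "$tmpdir/docker-machine-stub" "$tmpdir/docker-machine-installed"
test -x "$tmpdir/docker-machine-installed" && echo "installed and executable"
rm -rf "$tmpdir"
```

After the real installation, you can confirm the binary is on your `PATH` by running `docker-machine version`.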
With these installations complete, we can move on to connecting our GitLab Runner to our GitLab install.
To link GitLab Runner to your existing GitLab install, we need to obtain a token that authenticates your runner to your code repositories.
Login to your existing GitLab instance as the admin user, then click the wrench icon to enter the admin settings area.
On the left of your screen, hover over Overview and select Runners from the list that appears.
On the Runners page under the How to setup a shared Runner for a new project section, copy the token shown in Step 3, and make a note of it along with the publicly accessible URL of your GitLab instance from Step 2. If you are using HTTPS for GitLab, make sure it is not a self-signed certificate, or GitLab Runner will fail to start.
Back in your SSH connection with your bastion Droplet, run the following command:
- sudo gitlab-runner register
This will initiate the linking process, and you will be asked a series of questions.
When prompted, first enter the GitLab instance URL you noted in the previous step:
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com)
https://example.digitalocean.com
Enter the token you obtained from your GitLab instance:
Please enter the gitlab-ci token for this runner
sample-gitlab-ci-token
Enter a description that will help you recognize it in the GitLab web interface. We recommend naming this instance something unique, like runner-bastion
for clarity.
Please enter the gitlab-ci description for this runner
[yourhostname] runner-bastion
If relevant, you may enter tags for the code you will build with your runner. However, we recommend leaving this blank at this stage; it can easily be changed from the GitLab interface later.
Please enter the gitlab-ci tags for this runner (comma separated):
code-tag
Choose whether or not your runner should be able to run untagged jobs. This setting determines whether your runner builds repositories with no tags at all or requires specific tags. Select true in this case, so your runner can execute jobs for all repositories.
Whether to run untagged jobs [true/false]: true
Choose if this runner should be shared among your projects, or locked to the current one, which blocks it from building any code other than those specified. Select false for now, as this can be changed later in GitLab’s interface:
Whether to lock Runner to current project [true/false]: false
Choose the executor which will build your machines. Because we’ll be creating new Droplets using Docker, we’ll choose docker+machine
here, but you can read more about the advantages of each approach in this compatibility chart:
Please enter the executor: ssh, docker+machine, docker-ssh+machine, kubernetes, docker, parallels, virtualbox, docker-ssh, shell:
docker+machine
You’ll be asked which image to use for projects that don’t explicitly define one. We’ll choose a basic, secure default:
Please enter the Docker image (e.g. ruby:2.1):
alpine:latest
Now you’re done configuring the core bastion runner! At this point it should appear within the GitLab Runner page of your GitLab admin settings, which we accessed to obtain the token.
If you encounter any issues with these steps, the GitLab Runner documentation includes options for troubleshooting.
To speed up Droplet creation when the build queue is busy, we’ll leverage Docker’s caching tools on the Bastion Droplet to store the images for your commonly used containers on DigitalOcean Spaces.
To do so, upgrade Docker Machine on your SSH shell using the following command:
- curl -L https://github.com/docker/machine/releases/download/v0.14.0/docker-machine-`uname -s`-`uname -m` >/tmp/docker-machine && sudo install /tmp/docker-machine /usr/local/bin/docker-machine
With Docker Machine upgraded, we can move on to setting up our access tokens for GitLab Runner to use.
Now we need to create the credentials that GitLab Runner will use to create new Droplets using your DigitalOcean account.
Visit your DigitalOcean dashboard and click API. On the next screen, look for Personal access tokens and click Generate New Token.
Give the new token a name you will recognize such as GitLab Runner Access
and ensure that both the read and write scopes are enabled, as we need the Droplet to create new machines without human intervention.
Copy the token somewhere safe as we’ll use it in the next step. You can’t retrieve this token again without regenerating it, so be sure it’s stored securely.
To bring all of these components together, we need to finish configuring our bastion Droplet to communicate with your DigitalOcean account.
In your SSH connection to your bastion Droplet, use your favorite text editor, such as nano, to open the GitLab Runner configuration file for editing:
- sudo nano /etc/gitlab-runner/config.toml
This configuration file is responsible for the rules your CI setup uses to scale up and down on demand. To configure the bastion to autoscale on demand, you need to add the following lines:
concurrent = 50 # All registered Runners can run up to 50 concurrent builds
[[runners]]
url = "https://example.digitalocean.com"
token = "existinggitlabtoken" # Note this is different from the registration token used by `gitlab-runner register`
name = "example-runner"
executor = "docker+machine" # This Runner is using the 'docker+machine' executor
limit = 10 # This Runner can execute up to 10 builds (created machines)
[runners.docker]
image = "alpine:latest" # Our secure image
[runners.machine]
IdleCount = 1 # The amount of idle machines we require for CI if build queue is empty
IdleTime = 600 # Each machine can be idle for up to 600 seconds, then destroyed
MachineName = "gitlab-runner-autoscale-%s" # Each machine will have a unique name ('%s' is required and generates a random number)
MachineDriver = "digitalocean" # Docker Machine is using the 'digitalocean' driver
MachineOptions = [
"digitalocean-image=coreos-stable", # The DigitalOcean system image to use by default
"digitalocean-ssh-user=core", # The default SSH user
"digitalocean-access-token=DO_ACCESS_TOKEN", # Access token from Step 7
"digitalocean-region=nyc3", # The data center to spawn runners in
"digitalocean-size=1gb", # The size (and price category) of your spawned runners
"digitalocean-private-networking" # Enable private networking on runners
]
[runners.cache]
Type = "s3" # The Runner is using a distributed cache with the S3-compatible Spaces service
ServerAddress = "nyc3.digitaloceanspaces.com"
AccessKey = "YOUR_SPACES_KEY"
SecretKey = "YOUR_SPACES_SECRET"
BucketName = "your_bucket_name"
Insecure = true # We do not have an SSL certificate, as we are only running locally
Once you’ve added the new lines, customize the access token, region and Droplet size based on your setup. For the purposes of this tutorial, we’ve used the smallest Droplet size of 1GB and created our Droplets in NYC3. Be sure to use the information that is relevant in your case.
You also need to customize the cache component, and enter your Space’s server address from the infrastructure configuration step, access key, secret key and the name of the Space that you created.
When completed, restart GitLab Runner to make sure the configuration is being used:
- sudo gitlab-runner restart
If you would like to learn more about all the available options, including off-peak hours, you can read GitLab’s advanced documentation.
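As one example, off-peak hours can reduce idle capacity outside working hours. The fragment below is a hypothetical addition to the [runners.machine] section; the OffPeak* option names are an assumption based on GitLab Runner versions of this era, so verify them against your Runner’s documentation before using them:

```
  [runners.machine]
    OffPeakPeriods = ["* * 0-8,18-23 * * mon-fri *", "* * * * * sat,sun *"] # Weekday nights and weekends
    OffPeakIdleCount = 0   # Keep no idle machines during off-peak hours
    OffPeakIdleTime = 300  # Destroy off-peak idle machines after 300 seconds
```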
At this point, our GitLab Runner bastion Droplet is configured and is able to create DigitalOcean Droplets on demand, as the CI queue fills up. We’ll need to test it to be sure it works by heading to your GitLab instance and the project we imported in Step 1.
To trigger a build, edit the readme.md
file by clicking on it, then clicking edit, and add any relevant testing text to the file, then click Commit changes.
Now a build will be automatically triggered, which can be found under the project’s CI/CD option in the left navigation.
On this page you should see a pipeline entry with the status of running. In your DigitalOcean account, you’ll see a number of Droplets automatically created by GitLab Runner in order to build this change.
Congratulations! Your CI pipeline is cloud scalable and now manages its own resource usage. After the specified idle time, the machines should be automatically destroyed, but we recommend verifying this manually to ensure you aren’t unexpectedly billed.
In some cases, GitLab may report that the runner is unreachable and as a result perform no actions, including deploying new runners. You can troubleshoot this by stopping GitLab Runner, then starting it again in debug mode:
- sudo gitlab-runner stop
- sudo gitlab-runner --debug start
The output should throw an error, which will be helpful in determining which configuration is causing the issue.
If your configuration creates too many machines, and you wish to remove them all at the same time, you can run this command to destroy them all:
- docker-machine rm $(docker-machine ls -q)
For more troubleshooting steps and additional configuration options, you can refer to GitLab’s documentation.
You’ve successfully set up an automated CI/CD pipeline using GitLab Runner and Docker. From here, you could configure higher levels of caching with Docker Registry to optimize performance or explore the use of tagging code builds to specific GitLab code runners.
For more on GitLab Runner, see the detailed documentation, or to learn more, you can read GitLab’s series of blog posts on how to make the most of your continuous integration pipeline.
This post also appears on the GitLab Blog.
Jekyll is a static-site generator that provides some of the benefits of a Content Management System (CMS) while avoiding the performance and security issues introduced by such database-driven sites. It is “blog-aware” and includes special features to handle date-organized content, although its usefulness is not limited to blogging sites. Jekyll is well-suited for people who need to work off-line, prefer lightweight editors to web forms for content maintenance, and wish to use version control to track changes to their website.
In this tutorial, we’ll configure a production environment to use Nginx to host a Jekyll site, as well as Git to track changes and regenerate the site when you push changes to the site repository. We’ll also install and configure git-shell
to additionally protect your production server from unauthorized access. Finally, we will configure your local development machine to work with and push changes to the remote repository.
To follow this tutorial, you will need:
One Ubuntu 16.04 server for production, configured by following the Initial Server Setup with Ubuntu 16.04 tutorial, and including:
A development machine, with Git installed and a Jekyll site created by following the How to Set Up a Jekyll Development Site on Ubuntu 16.04 tutorial.
Optionally, if you want to learn more about Jekyll, you can check out these two tutorials:
For security purposes, we’ll begin by creating a user account that will host a Git repository for the Jekyll site. This user will execute the Git hooks script, which we will create to regenerate the site when changes are received. The following command will create a user named git:
- sudo adduser git
You will be asked to enter and repeat a password, and then to enter non-mandatory basic information about the user. At the end, you’ll be asked to confirm the information by typing in Y:
OutputAdding user `git' ...
Adding new group `git' (1001) ...
Adding new user `git' (1001) with group `git' ...
Creating home directory `/home/git' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for git
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n]
We’ll also prepare the web root to hold the generated site. First, remove the default web page from the /var/www/html
directory:
- sudo rm /var/www/html/index.nginx-debian.html
Now, set ownership on the directory to the git user, so this user can update the site’s content when changes are received, and group ownership to the www-data
group. This group ensures that web servers can access and manage the files located in /var/www/html
:
- sudo chown git:www-data /var/www/html
Before continuing the tutorial, copy your SSH key to your newly-created git user, so you can safely access your production server using Git. You can do this by following step four of the Initial Server Setup with Ubuntu 16.04 tutorial. The simplest method is to use the ssh-copy-id
command, but you can also copy the key manually.
Now let’s create a Git repository for your Jekyll site and then configure Git hooks to rebuild it on update.
Your Git repository will contain data about your Jekyll site, including a history of changes and commits. In this step, we’ll set up the Git repository on the production server with a post-receive hook that will regenerate your site.
The repository will be located in the home directory of the git user, so if you have logged out of this user account after previous step, use the su
command to switch roles:
- su - git
In the home directory, create a folder that will contain your Git repository. It’s required for the directory to be in the home directory and named using the repo-name.git
format, so git
commands can discover it. Usually, the repo-name
should be the name of your site, so git
can easily recognize sites and repositories. We will call our site sammy-blog
:
- mkdir ~/sammy-blog.git
Switch to the directory and initialize the Git repository using the git init
command. The --bare
flag sets up the repository for hosting on the server and enables collaboration between multiple users:
- cd ~/sammy-blog.git
- git init --bare
The output contains information about the successfully initialized repository:
OutputInitialized empty Git repository in /home/git/sammy-blog.git
If you don’t see such output, follow the on-screen logs to resolve the problem before continuing the tutorial.
The folder we’ve created contains the directories and files needed to host your repository. You can check its contents by typing the following:
- ls
Outputbranches config description HEAD hooks info objects refs
If you don’t see this type of output, make sure that you switched to the appropriate directory and successfully executed git init
.
The hooks directory contains scripts used for Git hooks. By default, it contains an example file for each type of Git hook so you can easily get started. For the purposes of this tutorial, we’ll use the post-receive hook to regenerate the site once the repository is updated with the latest changes.
Create the file named post-receive
in the hooks
directory and open it in the text editor of your choice:
- nano ~/sammy-blog.git/hooks/post-receive
We’ll configure the hook to clone the latest changes to the temporary directory and then to regenerate it and save the generated site to /var/www/html
so you can easily access it.
Copy the following content to the file:
#!/usr/bin/env bash
GIT_REPO=$HOME/sammy-blog.git
TMP_GIT_CLONE=/tmp/sammy-blog
PUBLIC_WWW=/var/www/html

# Clone the latest version of the repository into a temporary directory
git clone $GIT_REPO $TMP_GIT_CLONE
pushd $TMP_GIT_CLONE

# Regenerate the site directly into the web root
bundle exec jekyll build -d $PUBLIC_WWW
popd

# Remove the temporary clone
rm -rf $TMP_GIT_CLONE
exit
Once you’re done, save the file and close the text editor.
Make sure the script is executable, so the git user can execute it when changes are received:
- chmod +x ~/sammy-blog.git/hooks/post-receive
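If you’re unsure why the execute bit matters here, this generic sketch (using a throwaway file, not your actual hook) shows that a freshly created file is not executable until you run chmod +x on it:

```shell
# Illustrative only: files start out non-executable; chmod +x flips the bit Git needs
tmphook=$(mktemp)
printf '#!/usr/bin/env bash\necho "site rebuild would happen here"\n' > "$tmphook"
test -x "$tmphook" || echo "not executable yet"
chmod +x "$tmphook"
test -x "$tmphook" && echo "executable"
rm -f "$tmphook"
```

Without the execute bit, Git silently skips the hook and your site would never regenerate.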
At this point, we have a fully-configured Git repository and a Git post-receive hook to update your site when changes are received. Before pushing the site to the repository, we’ll additionally secure our production server by configuring git-shell
, an interactive shell that can provide users with various Git commands when they connect over SSH.
Users can implement git-shell
in the following ways: as an interactive shell, providing them with various commands when they connect over SSH that enable them to create new repositories or add new SSH keys, or as a non-interactive shell, disabling access to the server’s console via SSH, but allowing them to use git
commands to manage existing repositories.
If you share the SSH key for the git user with anybody, they would have access to an interactive Bash session via SSH. This represents a security threat, as users could access other, non-site related data. We’ll configure git-shell
as a non-interactive shell, so you can’t start an interactive Bash session using the git user.
Make sure you’re logged in as the git user. If you exited the session after the previous step, you can use the same command as before to log in again:
- su - git
Start by creating a git-shell-commands
directory, needed for git-shell
to work:
- mkdir ~/git-shell-commands
The no-interactive-login
file defines the behavior shown when a user without interactive shell access tries to log in, so create and open it in the text editor of your choice:
- nano ~/git-shell-commands/no-interactive-login
Copy the following content to the file. It will ensure that the welcome message will be shown if you try to log in over SSH:
#!/usr/bin/env bash
printf '%s\n' "You've successfully authenticated to the server as $USER user, but interactive sessions are disabled."
exit 128
Once you’re done, save the file and close your text editor.
We need to make sure the file is executable, so git-shell
can execute it:
- chmod +x ~/git-shell-commands/no-interactive-login
Return to your non-root sudo user, so you can modify the properties of the git user. If you used the previous su
command, you can close the session using:
- exit
Lastly, we need to change the shell for the git user to the git-shell
:
- sudo usermod -s $(which git-shell) git
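To see which shell a user now has, you can read the final field of its passwd entry. This sketch checks the current user as a stand-in; on your server you would substitute git:

```shell
# A user's login shell is the seventh colon-separated field in /etc/passwd
# (on the production server, replace "$(whoami)" with git)
getent passwd "$(whoami)" | cut -d: -f7
```

For the git user, this should now print the path to git-shell, such as /usr/bin/git-shell.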
Verify that you can’t access the interactive shell by running SSH from the development machine:
- ssh git@production_server_ip
You should see a message like the one below. If you don’t, make sure you have the appropriate SSH keys in place and retrace the preceding steps to resolve the problem before continuing the tutorial.
OutputWelcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-109-generic x86_64)
...
You've successfully authenticated to the server as git user, but interactive sessions are disabled.
Connection to production_server_ip closed.
Next, you’ll configure your local development machine to use this Git repository and then we’ll push your site to the repository. Lastly, we’ll make sure your site is generated and you can access it from the web browser.
We have now initialized and configured a Git repository on the production server. On the development machine, we need to initialize a local repository that contains data about the remote repository and changes made in the local repository.
On your development machine, navigate to the directory containing the site:
- cd ~/www
We need to initialize a Git repository in the site’s root directory so we can push content to the remote repository:
- git init
The output contains a message about successful repository initialization:
OutputInitialized empty Git repository in /home/sammy/www/.git/
If you don’t see such output, follow the on-screen messages to resolve the problem before continuing.
Now, create a remote object, which represents the Git object used for tracking remote repositories and branches you work on. Usually, the default remote is called origin, so we’ll use it for purposes of this tutorial.
The following command will create an origin remote, which will track the sammy-blog repository on the production server using the git user:
- git remote add origin git@production_server_ip:sammy-blog.git
No output indicates successful operation. If you see an error message, make sure to resolve it before proceeding to the next step.
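If you want to double-check the remote from the command line, `git remote -v` lists each configured remote with its fetch and push URLs. Here is a self-contained sketch in a scratch repository (the URL is a placeholder, not your production server):

```shell
# Illustrative only: add a remote in a throwaway repository and list it
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git remote add origin git@example.com:sammy-blog.git
git remote -v   # lists origin twice, once for (fetch) and once for (push)
cd / && rm -rf "$tmp"
```

Running `git remote -v` in your own site directory should likewise show the origin remote pointing at your production server.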
Every time you want to push changes to the remote repository you need to commit them and then push the commit to the remote repository. Once the remote repository receives the commit, your site will be regenerated with the latest changes in place.
Commits are used to track changes you make. Each contains a commit message describing the changes made in that commit. It’s recommended to keep messages short but descriptive, covering the most important changes made in the commit.
Before committing changes, we need to choose what files we want to commit. The following command marks all files for committing:
- git add .
No output indicates successful command execution. If you see any errors, make sure to resolve them before continuing.
Next, commit all the changes using the -m
flag, which will include the commit message. As this is our first commit, we’ll call it “Initial commit”:
- git commit -m "Initial commit."
The output contains a list of directories and files changed in that commit:
Commit output 10 files changed, 212 insertions(+)
create mode 100644 .gitignore
create mode 100644 404.html
create mode 100644 Gemfile
create mode 100644 Gemfile.lock
create mode 100644 _config.yml
create mode 100644 _posts/2017-09-04-link-test.md
create mode 100644 about.md
create mode 100644 assets/postcard.jpg
create mode 100644 contact.md
create mode 100644 index.md
If you see any errors, make sure to resolve them before continuing the tutorial.
Finally, use the following command to push committed changes to the remote repository:
- git push origin master
The output will contain information about the progress of the push. When it’s done, you will see information like the following:
Push outputCounting objects: 14, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (12/12), done.
Writing objects: 100% (14/14), 110.80 KiB | 0 bytes/s, done.
Total 14 (delta 0), reused 0 (delta 0)
remote: Cloning into '/tmp/sammy-blog'...
remote: done.
remote: /tmp/sammy-blog ~/sammy-blog.git
remote: Configuration file: /tmp/sammy-blog/_config.yml
remote: Source: /tmp/sammy-blog
remote: Destination: /var/www/html
remote: Incremental build: disabled. Enable with --incremental
remote: Generating...
remote: done in 0.403 seconds.
remote: Auto-regeneration: disabled. Use --watch to enable.
remote: ~/sammy-blog.git
To git@188.166.57.145:sammy-blog.git
* [new branch] master -> master
If you don’t, follow the on-screen logs to resolve the problem before continuing the tutorial.
At this point, your site is uploaded to the server, and after a short period it’ll be regenerated. Navigate your web browser to http://production_server_ip
. You should see your site up and running. If you don’t, retrace the preceding steps to make sure you did everything as intended.
In order to regenerate your site when you change something, you need to add files to the commit, commit them, and then push changes, as you did with the initial commit.
Once you have made changes to your files, use the following commands to add all changed files to the commit. If you have created new files, you will also need to add them with git add
, as we did with the initial commit. When you are ready to commit your files, you will want to include another commit message describing your changes. We will call our message “updated files”:
- git commit -am "updated files"
Lastly, push changes to the remote repository.
- git push origin master
The output will look similar to what you saw with your initial push:
Push outputCounting objects: 14, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (12/12), done.
Writing objects: 100% (14/14), 110.80 KiB | 0 bytes/s, done.
Total 14 (delta 0), reused 0 (delta 0)
remote: Cloning into '/tmp/sammy-blog'...
remote: done.
remote: /tmp/sammy-blog ~/sammy-blog.git
remote: Configuration file: /tmp/sammy-blog/_config.yml
remote: Source: /tmp/sammy-blog
remote: Destination: /var/www/html
remote: Incremental build: disabled. Enable with --incremental
remote: Generating...
remote: done in 0.403 seconds.
remote: Auto-regeneration: disabled. Use --watch to enable.
remote: ~/sammy-blog.git
To git@188.166.57.145:sammy-blog.git
* [new branch] master -> master
At this point, your site is freshly generated and the latest changes are in place.
In this tutorial, you learned how to deploy your website after pushing changes to your Git repository. If you want to learn more about Git, check out our Git tutorial series.
And if you want to learn more about other Git hooks, you can check out the How To Use Git Hooks To Automate Development and Deployment Tasks.
Laravel is an open-source PHP web framework designed to make common web development tasks, such as authentication, routing, and caching, easier. Deployer is an open-source PHP deployment tool with out-of-the-box support for a number of popular frameworks, including Laravel, CodeIgniter, Symfony, and Zend Framework.
Deployer automates deployments by cloning an application from a Git repository to a server, installing dependencies with Composer, and configuring the application so you don’t have to do so manually. This allows you to spend more time on development, instead of uploads and configurations, and lets you deploy more frequently.
In this tutorial, you will deploy a Laravel application automatically without any downtime. To do this, you will prepare the local development environment from which you’ll deploy code and then configure a production server with Nginx and a MySQL database to serve the application.
Before you begin this guide you’ll need the following:
One Ubuntu 16.04 server with a non-root user with sudo privileges as described in the Initial Server Setup with Ubuntu 16.04 tutorial.
A LEMP stack installed as described in the How To Install Linux, Nginx, MySQL, PHP (LEMP stack) in Ubuntu 16.04 tutorial.
PHP, Composer, and Git installed on your server by following Steps 1 and 2 of How To Install and Use Composer on Ubuntu 16.04.
The php-xml
and php-mbstring
packages installed on your server. Install these by running: sudo apt-get install php7.0-mbstring php7.0-xml
.
A Git server. You can use services like GitLab, Bitbucket or GitHub. GitLab and Bitbucket offer private repositories for free, and GitHub offers private repositories starting at $7/month. Alternatively, you could set up a private Git server by following the tutorial How To Set Up a Private Git Server on a VPS.
A domain name that points to your server. The How To Set Up a Host Name with DigitalOcean tutorial can help you configure this.
Composer and Git installed on your local machine as well. The precise installation method depends on your local operating system. Instructions for installing Git are available on the Git project’s Downloads page and you can download Composer directly from the Composer project website.
Since you will be creating and deploying your application from your local machine, begin by configuring your local development environment. Deployer will control the entire deployment process from your local machine, so start off by installing it.
Note: If you use Windows on your local machine, you should use a Bash emulator (like Git Bash) to run all local commands.
On your local machine, open the terminal and download the Deployer installer using curl
:
- curl -LO https://deployer.org/deployer.phar
Next, run a short PHP script to verify that the installer matches the SHA-1 hash for the latest installer found on the Deployer download page. Replace the highlighted value with the latest hash:
- php -r "if (hash_file('sha1', 'deployer.phar') === '35e8dcd50cf7186502f603676b972065cb68c129') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('deployer.phar'); } echo PHP_EOL;"
Output
Installer verified
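The same kind of check can be reproduced with the sha1sum utility. This sketch hashes a known string rather than the real installer; the file path is a throwaway stand-in, and in practice you would hash deployer.phar and compare against the hash published on the Deployer download page:

```shell
# Sketch: verify a download against an expected SHA-1 hash.
# /tmp/demo-download stands in for deployer.phar; the expected value
# here is simply the SHA-1 of the string "hello" for demonstration.
EXPECTED="aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d"
printf 'hello' > /tmp/demo-download
ACTUAL=$(sha1sum /tmp/demo-download | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "Installer verified"
else
    echo "Installer corrupt"
    rm /tmp/demo-download
fi
```

The PHP one-liner above does the same comparison with hash_file('sha1', ...) and deletes the file on a mismatch.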
Make Deployer available system wide. Note that if you’re running Windows or macOS on your local machine, you may need to create the /usr/local/bin directory before running this command:
- sudo mv deployer.phar /usr/local/bin/dep
Make it executable:
- sudo chmod +x /usr/local/bin/dep
Next, create a Laravel project on your local machine:
- composer create-project --prefer-dist laravel/laravel laravel-app "5.5.*"
You have installed all the required software on your local machine. With that in place, we will move on to creating a Git repository for the application.
Deployer was designed to enable users to deploy code from anywhere. To allow this, it requires users to push code to a repository on the Internet, from which Deployer then copies the code over to the production server. We will use Git, an open-source version control system, to manage the source code of the Laravel application. You can connect to the Git server over the SSH protocol, and to do this securely you need to generate SSH keys. This is more secure than password-based authentication and lets you avoid typing a password before each deployment.
Run the following command on your local machine to generate the SSH key. Note that the -f flag specifies the filename of the key file; you can replace gitkey with your own filename. This will generate an SSH key pair (named gitkey and gitkey.pub) in the ~/.ssh/ directory.
- ssh-keygen -t rsa -b 4096 -f ~/.ssh/gitkey
You may have other SSH keys on your local machine, so configure the SSH client to know which private key to use when it connects to your Git server.
Create an SSH config file on your local machine:
- touch ~/.ssh/config
Open the file and add a shortcut to your Git server. This should contain the HostName directive (pointing to your Git server’s hostname) and the IdentityFile directive (pointing to the file path of the SSH key you just created):
Host mygitserver.com
HostName mygitserver.com
IdentityFile ~/.ssh/gitkey
Save and close the file, and then restrict its permissions:
- chmod 600 ~/.ssh/config
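As an alternative to creating the file with touch and editing it in nano, the whole config file can be written in one step with a heredoc. This is a sketch only: /tmp/sshcfg-demo stands in for ~/.ssh so it doesn't touch your real SSH configuration, and mygitserver.com/gitkey are the example names used above:

```shell
# Sketch: write the SSH config in one step and lock down its permissions.
# /tmp/sshcfg-demo is a throwaway stand-in for ~/.ssh.
rm -rf /tmp/sshcfg-demo && mkdir -m 700 /tmp/sshcfg-demo
cat > /tmp/sshcfg-demo/config <<'EOF'
Host mygitserver.com
    HostName mygitserver.com
    IdentityFile ~/.ssh/gitkey
EOF
chmod 600 /tmp/sshcfg-demo/config
stat -c '%a' /tmp/sshcfg-demo/config   # prints 600
```

The quoted 'EOF' delimiter prevents the shell from expanding anything inside the heredoc, so the ~ in the IdentityFile path is written literally, as SSH expects.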
Now your SSH client will know which private key to use when connecting to the Git server.
Display the content of your public key file with the following command:
- cat ~/.ssh/gitkey.pub
Copy the output and add the public key to your Git server.
If you use a Git hosting service, consult its documentation on how to add SSH keys to your account.
Now you will be able to connect to your Git server with your local machine. Test the connection with the following command:
- ssh -T git@mygitserver.com
If this command results in an error, check that you added your SSH keys correctly by referring to your Git hosting service’s documentation and try connecting again.
Before pushing the application to the remote Git repository and deploying it, let’s first configure the production server.
Deployer uses the SSH protocol to securely execute commands on the server. For this reason, the first step we will take toward configuring the production server will be to create a user which Deployer can use to log in and execute commands on your server via SSH.
Log in to your LEMP server with a sudo non-root user and create a new user called “deployer” with the following command:
- sudo adduser deployer
Laravel needs some writable directories to store cached files and uploads, so the directories created by the deployer user must be writable by the Nginx web server. Add the user to the www-data group to do this:
- sudo usermod -aG www-data deployer
The default permissions for content created by the deployer user should be 644 for files and 755 for directories. This way, the deployer user will be able to read and write the files, while the group and other users will be able to read them.
Do this by setting deployer’s default umask to 022:
- sudo chfn -o umask=022 deployer
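To see how a 022 umask yields exactly these modes, you can create a file and a directory with that umask active. This sketch uses a throwaway /tmp directory so it doesn't touch anything real:

```shell
# Sketch: a umask of 022 masks the write bit for group and other,
# so new files get 666 & ~022 = 644 and new directories get 777 & ~022 = 755.
umask 022
rm -rf /tmp/umask-demo && mkdir /tmp/umask-demo
touch /tmp/umask-demo/file
mkdir /tmp/umask-demo/dir
stat -c '%a' /tmp/umask-demo/file   # prints 644
stat -c '%a' /tmp/umask-demo/dir    # prints 755
```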
We’ll store the application in the /var/www/html/ directory, so change the ownership of the directory to the deployer user and the www-data group:
- sudo chown deployer:www-data /var/www/html
The deployer user needs to be able to modify files and folders within the /var/www/html directory. Given that, all new files and subdirectories created within /var/www/html should inherit the folder’s group ID (www-data). To achieve this, set the group ID on the directory with the following command:
- sudo chmod g+s /var/www/html
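The effect of the setgid bit can be demonstrated on a throwaway directory: subdirectories created inside a setgid directory inherit the setgid bit (and the directory's group), which is what keeps everything under /var/www/html owned by the www-data group:

```shell
# Sketch: the setgid bit (the leading 2 in the numeric mode) propagates
# to new subdirectories, so the whole tree keeps the same group.
umask 022
rm -rf /tmp/setgid-demo && mkdir /tmp/setgid-demo
chmod g+s /tmp/setgid-demo
mkdir /tmp/setgid-demo/child
stat -c '%a' /tmp/setgid-demo/child   # prints 2755
```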
Deployer will clone the Git repo to the production server using SSH, so you want to ensure that the connection between your LEMP server and the Git server is secure. We’ll use the same approach we used for our local machine, and we’ll generate an SSH key for the deployer user.
Switch to the deployer user on your server:
- su - deployer
Next, generate an SSH key pair as the deployer user. This time, you can accept the default filename of the SSH keys:
- ssh-keygen -t rsa -b 4096
Display the public key:
- cat ~/.ssh/id_rsa.pub
Copy the public key and add it to your Git server as you did in the previous step.
Your local machine will communicate with the server using SSH as well, so you should generate SSH keys for the deployer user on your local machine and add the public key to the server.
On your local machine run the following command. Feel free to replace deployerkey with a filename of your choice:
- ssh-keygen -t rsa -b 4096 -f ~/.ssh/deployerkey
Copy the following command’s output which contains the public key:
- cat ~/.ssh/deployerkey.pub
On your server as the deployer user run the following:
- nano ~/.ssh/authorized_keys
Paste the public key into the editor, then hit CTRL-X, Y, and ENTER to save and exit.
Restrict the permissions of the file:
- chmod 600 ~/.ssh/authorized_keys
Now switch back to the sudo user:
- exit
Now your server can connect to the Git server and you can log in to the server with the deployer user from your local machine.
Log in from your local machine to your server as the deployer user to test the connection:
- ssh deployer@your_server_ip -i ~/.ssh/deployerkey
After you have logged in as deployer, test the connection between your server and the Git server as well:
- ssh -T git@mygitserver.com
Finally, exit the server:
- exit
From here, we can move on to configuring Nginx and MySQL on our web server.
We’re now ready to configure the web server, which will serve the application. This involves configuring the document root and the directory structure that will hold the Laravel files. We will set up Nginx to serve files from the /var/www/html/laravel-app directory.
First, we need to create a server block configuration file for the new site.
Log in to the server as your sudo user and create a new config file. Remember to replace example.com with your own domain name:
- sudo nano /etc/nginx/sites-available/example.com
Add a server
block to the top of the configuration file:
server {
listen 80;
listen [::]:80;
root /var/www/html/laravel-app/current/public;
index index.php index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
}
The two listen
directives at the top tell Nginx which ports to listen to, and the root
directive defines the document root where Laravel will be installed. The current/public
in the path of the root directory is a symbolic link that points to the latest release of the application. By adding the index
directive, we are telling Nginx to serve any index.php
files first before looking for their HTML counterparts when requesting a directory location. The server_name
directive should be followed by your domain and any of its aliases.
We should also modify the way that Nginx will handle requests. This is done through the try_files
directive. We want it to try to serve the request as a file first and, if it cannot find a file with the correct name, it should attempt to serve the default index file for a directory that matches the request. Failing this, it should pass the request to the index.php
file as a query parameter.
server {
listen 80;
listen [::]:80;
root /var/www/html/laravel-app/current/public;
index index.php index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
Next, we need to create a block that handles the actual execution of any PHP files. This will apply to any files that end in .php. It will try the file itself and then try to pass it as a parameter to the index.php
file.
We will set the fastcgi directives to tell Nginx to use the actual path of the application (resolved after following the symbolic link) instead of the symbolic link itself. If you don’t add these lines to the configuration, the path that the symbolic link points to will be cached, meaning that an old version of your application will be loaded after a deployment. Without these directives, you would have to clear the cache manually after each deployment, and requests to your application could potentially fail. Additionally, the fastcgi_pass directive makes sure that Nginx uses the socket that php7.0-fpm uses for communication, and that the index.php file is used as the index for these operations.
server {
listen 80;
listen [::]:80;
root /var/www/html/laravel-app/current/public;
index index.php index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
}
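To see what $realpath_root resolves to, you can mimic the release layout with a symlink and the realpath utility. The paths here are throwaway stand-ins for /var/www/html/laravel-app:

```shell
# Sketch: $realpath_root follows the "current" symlink to the real
# release directory, which is what the fastcgi_param lines hand to PHP.
rm -rf /tmp/realpath-demo && mkdir -p /tmp/realpath-demo/releases/1
ln -s releases/1 /tmp/realpath-demo/current
realpath /tmp/realpath-demo/current   # prints /tmp/realpath-demo/releases/1
```

Because the resolved path changes with every release, PHP-FPM's opcode cache keys on the new files immediately instead of serving the previous release.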
Finally, we want to make sure that Nginx does not allow access to any hidden .htaccess files. We will do this by adding one more location block, location ~ /\.ht, containing the directive deny all;.
After adding this last location block, the configuration file will look like this:
server {
listen 80;
listen [::]:80;
root /var/www/html/laravel-app/current/public;
index index.php index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
location ~ /\.ht {
deny all;
}
}
Save and close the file (CTRL-X, Y, then ENTER), and then enable the new server block by creating a symbolic link in the sites-enabled directory:
- sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
Test your configuration file for syntax errors:
- sudo nginx -t
If you see any errors, go back and recheck your file before continuing.
Restart Nginx to push the necessary changes:
- sudo systemctl restart nginx
The Nginx server is now configured. Next, we will configure the application’s MySQL database.
During installation, MySQL creates a root user by default. This user has unlimited privileges, though, so it is bad security practice to use the root user for your application’s database. Instead, we will create the application’s database along with a dedicated user.
Log in to the MySQL console as root:
- mysql -u root -p
This will prompt you for the root password.
Next, create a new database for the application:
- CREATE DATABASE laravel_database DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
Then, create a new database user. For the purposes of this tutorial, we will call this user laravel_user
with the password password
, although you should replace the password with a strong password of your choosing.
- CREATE USER 'laravel_user'@'localhost' IDENTIFIED BY 'password';
Grant privileges on the database to the user:
- GRANT ALL ON laravel_database.* TO 'laravel_user'@'localhost';
Next, reload the privileges:
- FLUSH PRIVILEGES;
And, finally, exit from the MySQL console:
- EXIT;
Your application’s database and user are now configured, and you’re almost ready to run your first deployment.
So far, you’ve configured all the tools and programs needed for Deployer to function. All that’s left to do before running your first deployment is to finish configuring your Laravel app and Deployer itself, and to initialize and push the app to your remote Git repository.
Open the terminal on your local machine and change the working directory to the application’s folder with the following command:
- cd /path/to/laravel-app
From this directory, run the following command, which creates a file called deploy.php in the laravel-app folder containing configuration information and tasks for deployment:
- dep init -t Laravel
Next, open the deploy.php file with your preferred text editor or IDE. Its third line includes a PHP script that contains the necessary tasks and configuration to deploy a Laravel application:
<?php
namespace Deployer;
require 'recipe/laravel.php';
. . .
Below this are some fields to edit to align with your configuration:
After // Project name, add the name of your Laravel project.
After // Project repository, add the link to your Git repository.
In the // Hosts section, add your server’s IP address or domain name to the host() directive and the name of your Deployer user (deployer in our examples) to the user() directive. Add the SSH key you created in Step 3 to the identityFile() directive. Finally, add the file path of the folder that will contain your application.
When you’ve finished editing these fields, they should look like this:
...
// Project name
set('application', 'laravel-app');
// Project repository
set('repository', 'git@mygitserver.com:username/repository.git');
. . .
// Hosts
host('your_server_ip')
->user('deployer')
->identityFile('~/.ssh/deployerkey')
->set('deploy_path', '/var/www/html/laravel-app');
Next, comment out the last line of the file, before('deploy:symlink', 'artisan:migrate');. This line instructs Deployer to run the database migrations automatically, and commenting it out disables that for now. If you don’t comment it out, the deployment will fail, as this line requires the appropriate database credentials to be on the server, and they can only be added through a file that will be generated during the first deployment:
...
// Migrate database before symlink new release.
//before('deploy:symlink', 'artisan:migrate');
Before we can deploy the project, we must first push it to the remote Git repository.
On your local machine change the working directory to your application’s folder:
- cd /path/to/laravel-app
Run the following command in your laravel-app
directory to initialize a Git repository in the project folder:
- git init
Next, add all the project files to the repository:
- git add .
Commit the changes:
- git commit -m 'Initial commit for first deployment.'
Add your Git server to the local repository with the following command. Be sure to replace the highlighted text with your own remote repository’s URL:
- git remote add origin git@mygitserver.com:username/repository.git
Push the changes to the remote Git repository:
- git push origin master
Finally, run your first deployment using the dep
command:
- dep deploy
If everything goes well, you should see output like this, with Successfully deployed! at the end:
Deployer's output
✈︎ Deploying master on your_server_ip
✔ Executing task deploy:prepare
✔ Executing task deploy:lock
✔ Executing task deploy:release
➤ Executing task deploy:update_code
✔ Ok
✔ Executing task deploy:shared
✔ Executing task deploy:vendors
✔ Executing task deploy:writable
✔ Executing task artisan:storage:link
✔ Executing task artisan:view:clear
✔ Executing task artisan:cache:clear
✔ Executing task artisan:config:cache
✔ Executing task artisan:optimize
✔ Executing task deploy:symlink
✔ Executing task deploy:unlock
✔ Executing task cleanup
Successfully deployed!
The following structure will be created on your server, inside the /var/www/html/laravel-app
directory:
├── .dep
├── current -> releases/1
├── releases
│ └── 1
└── shared
├── .env
└── storage
Verify this by running the following command on your server which will list the files and directories in the folder:
- ls /var/www/html/laravel-app
Output
current  .dep  releases  shared
Here’s what each of these files and directories contains:
The releases directory contains the deployed releases of the Laravel application.
current is a symlink to the latest release.
The .dep directory contains special metadata for Deployer.
The shared directory contains the .env configuration file and the storage directory, which are symlinked into each release.
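The zero-downtime property comes from how the current symlink is switched after each release is fully prepared: a new link is built under a temporary name and then renamed over the old one, and rename(2) is atomic on Linux, so no request ever sees a missing document root. This sketch illustrates the technique (Deployer's deploy:symlink task performs an equivalent step) with throwaway /tmp paths:

```shell
# Sketch: switch "current" from releases/1 to releases/2 atomically.
rm -rf /tmp/deploy-demo
mkdir -p /tmp/deploy-demo/releases/1 /tmp/deploy-demo/releases/2
ln -s releases/1 /tmp/deploy-demo/current
# Build the new link under a temporary name, then rename it into place.
# mv -T renames the link itself rather than descending into its target.
ln -s releases/2 /tmp/deploy-demo/current_tmp
mv -T /tmp/deploy-demo/current_tmp /tmp/deploy-demo/current
readlink /tmp/deploy-demo/current   # prints releases/2
```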
However, the application will not work yet because the .env file is empty. This file holds important configuration values like the application key, a random string used for encryption. If it is not set, your user sessions and other encrypted data will not be secure. The app has a .env file on your local machine, but Laravel’s .gitignore file excludes it from the Git repo, both because storing sensitive data like passwords in a Git repository is a bad idea and because the application requires different settings on your server. The .env file also contains the database connection settings, which is why we disabled the database migrations for the first deployment.
Let’s configure the application on your server.
Log in to your server as the deployer user:
- ssh deployer@your_server_ip -i ~/.ssh/deployerkey
Run the following command on your server, and copy and paste your local .env
file to the editor:
- nano /var/www/html/laravel-app/shared/.env
Before you can save it, there are some changes you should make. Set APP_ENV to production, APP_DEBUG to false, and APP_LOG_LEVEL to error, and don’t forget to replace the database name, database user, and password with your own. Replace example.com with your own domain as well:
APP_NAME=Laravel
APP_ENV=production
APP_KEY=base64:cA1hATAgR4BjdHJqI8aOj8jEjaaOM8gMNHXIP8d5IQg=
APP_DEBUG=false
APP_LOG_LEVEL=error
APP_URL=http://example.com
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=laravel_database
DB_USERNAME=laravel_user
DB_PASSWORD=password
BROADCAST_DRIVER=log
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=sync
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
Save the file and close the editor.
Now uncomment the last line of the deploy.php
file on your local machine:
...
// Migrate database before symlink new release.
before('deploy:symlink', 'artisan:migrate');
Warning: This will cause your database migrations to run automatically on every deployment. This will let you avoid migrating the databases manually, but don’t forget to back up your database before you deploy.
To check that this configuration is working, deploy the application once more. Run the following command on your local machine:
- dep deploy
Now your application will work correctly. If you visit your server’s domain name (http://example.com), you will see the Laravel landing page.
You don’t have to edit the .env
file on your server before all deployments. A typical deployment is not as complicated as the first and is done with just a few commands.
As a final step, this section will cover a simple deployment process you can use on a daily basis.
Start by modifying the application before you deploy again. For example, you can add a new route in the routes/web.php
file:
<?php
. . .
Route::get('/', function () {
return view('welcome');
});
Route::get('/greeting', function(){
return 'Welcome!';
});
Commit these changes:
- git commit -am 'Your commit message.'
Push the changes to the remote Git repository:
- git push origin master
And, finally, deploy the application:
- dep deploy
You have successfully deployed the application to your server.
You have configured your local computer and your server to deploy your Laravel application with zero downtime. This article covers only the basics of Deployer, which has many more useful features: you can deploy to multiple servers at once and create custom tasks, such as backing up the database before the migration. If you’d like to learn more about Deployer’s features, see the Deployer documentation.
I have a droplet set up for a while with Gitlab-CE from a one-click install. I believe Gitlab was at 9.2.5 when I created the droplet.
I have tried to keep the droplet up to date, but today I noticed that the site was not working. There was an error about an insecure connection. I have had this issue before, and it was easily fixed by updating my droplet. I went and did that but still could not get into my site.
I checked the Let’s Encrypt certificate to see if it needed to be renewed with sudo certbot renew --dry-run
but that showed these errors:
Attempting to renew cert (gitlab.devplateau.com) from /etc/letsencrypt/renewal/gitlab.devplateau.com.conf produced an unexpected error: Failed authorization procedure. gitlab.devplateau.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://gitlab.devplateau.com/.well-known/acme-challenge/HSNFfdwytBVlEdmalsrX1gGxfVn3WtNI0YK8Pm6JtPo: "<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p". Skipping.
All renewal attempts failed. The following certs could not be renewed:
/etc/letsencrypt/live/gitlab.devplateau.com/fullchain.pem (failure)
-------------------------------------------------------------------------------
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
All renewal attempts failed. The following certs could not be renewed:
/etc/letsencrypt/live/gitlab.devplateau.com/fullchain.pem (failure)
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates above have not been saved.)
-------------------------------------------------------------------------------
1 renew failure(s), 0 parse failure(s)
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: gitlab.devplateau.com
Type: unauthorized
Detail: Invalid response from
http://gitlab.devplateau.com/.well-known/acme-challenge/HSNFfdwytBVlEdmalsrX1gGxfVn3WtNI0YK8Pm6JtPo:
"<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
I have made sure that the A record for gitlab.devplateau.com did not get removed somehow and it is still there. I even removed it and created it again just to be safe.
Can someone please help me get back into my Gitlab site? I have important code saved and would prefer not to have to start the server over.
Containerization is quickly becoming the most accepted method of packaging and deploying applications in cloud environments. The standardization it provides, along with its resource efficiency (when compared to full virtual machines) and flexibility, make it a great enabler of the modern DevOps mindset. Many interesting cloud native deployment, orchestration, and monitoring strategies become possible when your applications and microservices are fully containerized.
Docker containers are by far the most common container type today. Though public Docker image repositories like Docker Hub are full of containerized open source software images that you can docker pull
and use today, for private code you’ll need to either pay a service to build and store your images, or run your own software to do so.
GitLab Community Edition is a self-hosted software suite that provides Git repository hosting, project tracking, CI/CD services, and a Docker image registry, among other features. In this tutorial we will use GitLab’s continuous integration service to build Docker images from an example Node.js app. These images will then be tested and uploaded to our own private Docker registry.
Before we begin, we need to set up a secure GitLab server, and a GitLab CI runner to execute continuous integration tasks. The sections below will provide links and more details.
To store our source code, run CI/CD tasks, and host the Docker registry, we need a GitLab instance installed on an Ubuntu 16.04 server. GitLab currently recommends a server with at least 2 CPU cores and 4GB of RAM. Additionally, we’ll secure the server with SSL certificates from Let’s Encrypt. To do so, you’ll need a domain name pointed at the server.
You can complete these prerequisite requirements with the following tutorials:
How To Set Up Continuous Integration Pipelines with GitLab CI on Ubuntu 16.04 will give you an overview of GitLab’s CI service, and show you how to set up a CI runner to process jobs. We will build on top of the demo app and runner infrastructure created in this tutorial.
In the prerequisite GitLab continuous integration tutorial, we set up a GitLab runner using sudo gitlab-runner register
and its interactive configuration process. This runner is capable of running builds and tests of software inside of isolated Docker containers.
However, in order to build Docker images, our runner needs full access to a Docker service itself. The recommended way to configure this is to use Docker’s official docker-in-docker
image to run the jobs. This requires granting the runner a special privileged
execution mode, so we’ll create a second runner with this mode enabled.
Note: Granting the runner privileged mode basically disables all of the security advantages of using containers. Unfortunately, the other methods of enabling Docker-capable runners also carry similar security implications. Please look at the official GitLab documentation on Docker Build to learn more about the different runner options and which is best for your situation.
Because there are security implications to using a privileged runner, we are going to create a project-specific runner that will only accept Docker jobs on our hello_hapi
project (GitLab admins can always manually add this runner to other projects at a later time). From your hello_hapi
project page, click Settings at the bottom of the left-hand menu, then click CI/CD in the submenu:
Now click the Expand button next to the Runners settings section:
There will be some information about setting up a Specific Runner, including a registration token. Take note of this token. When we use it to register a new runner, the runner will be locked to this project only.
While we’re on this page, click the Disable shared Runners button. We want to make sure our Docker jobs always run on our privileged runner. If a non-privileged shared runner was available, GitLab might choose to use that one, which would result in build errors.
Log in to the server that has your current CI runner on it. If you don’t have a machine set up with runners already, go back and complete the Installing the GitLab CI Runner Service section of the prerequisite tutorial before proceeding.
Now, run the following command to set up the privileged project-specific runner:
- sudo gitlab-runner register -n \
- --url https://gitlab.example.com/ \
- --registration-token your-token \
- --executor docker \
- --description "docker-builder" \
- --docker-image "docker:latest" \
- --docker-privileged
Output
Registering runner... succeeded                     runner=61SR6BwV
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
Be sure to substitute your own information. We set all of our runner options on the command line instead of using the interactive prompts, because the prompts don’t allow us to specify --docker-privileged
mode.
Your runner is now set up, registered, and running. To verify, switch back to your browser. Click the wrench icon in the main GitLab menu bar, then click Runners in the left-hand menu. Your runners will be listed:
Now that we have a runner capable of building Docker images, let’s set up a private Docker registry for it to push images to.
Setting up your own Docker registry lets you push and pull images from your own private server, increasing security and reducing the dependencies your workflow has on outside services.
GitLab will set up a private Docker registry with just a few configuration updates. First we’ll set up the URL where the registry will reside. Then we will (optionally) configure the registry to use an S3-compatible object storage service to store its data.
SSH into your GitLab server, then open up the GitLab configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Scroll down to the Container Registry settings section. We’re going to uncomment the registry_external_url
line and set it to our GitLab hostname with a port number of 5555
:
registry_external_url 'https://gitlab.example.com:5555'
Next, add the following two lines to tell the registry where to find our Let’s Encrypt certificates:
registry_nginx['ssl_certificate'] = "/etc/letsencrypt/live/gitlab.example.com/fullchain.pem"
registry_nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.example.com/privkey.pem"
Save and close the file, then reconfigure GitLab:
- sudo gitlab-ctl reconfigure
Output
. . .
gitlab Reconfigured!
Update the firewall to allow traffic to the registry port:
- sudo ufw allow 5555
Now switch to another machine with Docker installed, and log in to the private Docker registry. If you don’t have Docker on your local development computer, you can use whichever server is set up to run your GitLab CI jobs, as it has Docker installed already:
- docker login gitlab.example.com:5555
You will be prompted for your username and password. Use your GitLab credentials to log in.
Output
Login Succeeded
Success! The registry is set up and working. Currently it will store files on the GitLab server’s local filesystem. If you’d like to use an object storage service instead, continue with this section. If not, skip down to Step 3.
To set up an object storage backend for the registry, we need to know the following information about our object storage service:
The Region (us-east-1, for example) if using Amazon S3, or the Region Endpoint if using an S3-compatible service (https://nyc.digitaloceanspaces.com)
When you have your object storage information, open the GitLab configuration file:
- sudo nano /etc/gitlab/gitlab.rb
Once again, scroll down to the container registry section. Look for the registry['storage']
block, uncomment it, and update it to the following, again making sure to substitute your own information where appropriate:
registry['storage'] = {
's3' => {
'accesskey' => 'your-key',
'secretkey' => 'your-secret',
'bucket' => 'your-bucket-name',
'region' => 'nyc3',
'regionendpoint' => 'https://nyc3.digitaloceanspaces.com'
}
}
If you’re using Amazon S3, you only need region
and not regionendpoint
. If you’re using an S3-compatible service like Spaces, you’ll need regionendpoint
. In this case region
doesn’t actually configure anything and the value you enter doesn’t matter, but it still needs to be present and not blank.
Save and close the file.
Note: There is currently a bug where the registry will shut down after thirty seconds if your object storage bucket is empty. To avoid this, put a file in your bucket before running the next step. You can remove it later, after the registry has added its own objects.
If you are using DigitalOcean Spaces, you can drag and drop to upload a file using the Control Panel interface.
Reconfigure GitLab one more time:
- sudo gitlab-ctl reconfigure
On your other Docker machine, log in to the registry again to make sure all is well:
- docker login gitlab.example.com:5555
You should get a Login Succeeded
message.
Now that we’ve got our Docker registry set up, let’s update our application’s CI configuration to build and test our app, and push Docker images to our private registry.
Updating gitlab-ci.yaml and Building a Docker Image
Note: If you didn’t complete the prerequisite article on GitLab CI, you’ll need to copy over the example repository to your GitLab server. Follow the Copying the Example Repository From GitHub section to do so.
To get our app building in Docker, we need to update the .gitlab-ci.yml
file. You can edit this file right in GitLab by clicking on it from the main project page, then clicking the Edit button. Alternatively, you could clone the repo to your local machine, edit the file, then git push
it back to GitLab. That would look like this:
- git clone git@gitlab.example.com:sammy/hello_hapi.git
- cd hello_hapi
- # edit the file w/ your favorite editor
- git commit -am "updating ci configuration"
- git push
First, delete everything in the file, then paste in the following configuration:
image: docker:latest
services:
- docker:dind
stages:
- build
- test
- release
variables:
TEST_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:$CI_COMMIT_REF_NAME
RELEASE_IMAGE: gitlab.example.com:5555/sammy/hello_hapi:latest
before_script:
- docker login -u gitlab-ci-token -p $CI_JOB_TOKEN gitlab.example.com:5555
build:
stage: build
script:
- docker build --pull -t $TEST_IMAGE .
- docker push $TEST_IMAGE
test:
stage: test
script:
- docker pull $TEST_IMAGE
- docker run $TEST_IMAGE npm test
release:
stage: release
script:
- docker pull $TEST_IMAGE
- docker tag $TEST_IMAGE $RELEASE_IMAGE
- docker push $RELEASE_IMAGE
only:
- master
Be sure to update the highlighted URLs and usernames with your own information, then save with the Commit changes button in GitLab. If you’re updating the file outside of GitLab, commit the changes and git push
back to GitLab.
This new config file tells GitLab to use the latest docker image (image: docker:latest
) and link it to the docker-in-docker service (docker:dind). It then defines build
, test
, and release
stages. The build
stage builds the Docker image using the Dockerfile
provided in the repo, then uploads it to our Docker image registry. If that succeeds, the test
stage will download the image we just built and run the npm test
command inside it. If the test stage is successful, the release
stage will pull the image, tag it as hello_hapi:latest
and push it back to the registry.
Depending on your workflow, you could also add additional test
stages, or even deploy
stages that push the app to a staging or production environment.
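As a sketch, a hypothetical deploy job could be appended to the same file (the deploy.sh script is a placeholder for whatever deployment command your environment actually uses):

```yaml
deploy:
  stage: deploy        # add "deploy" to the stages list as well
  script:
    - docker pull $RELEASE_IMAGE
    - ./deploy.sh $RELEASE_IMAGE   # placeholder deployment command
  only:
    - master
```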
Updating the configuration file should have triggered a new build. Return to the hello_hapi
project in GitLab and click on the CI status indicator for the commit:
On the resulting page you can then click on any of the stages to see their progress:
Eventually, all stages should indicate they were successful by showing green check mark icons. We can find the Docker images that were just built by clicking the Registry item in the left-hand menu:
If you click the little “document” icon next to the image name, it will copy the appropriate docker pull ...
command to your clipboard. You can then pull and run your image:
- docker pull gitlab.example.com:5555/sammy/hello_hapi:latest
- docker run -it --rm -p 3000:3000 gitlab.example.com:5555/sammy/hello_hapi:latest
Output
> hello@1.0.0 start /usr/src/app
> node app.js
Server running at: http://56fd5df5ddd3:3000
The image has been pulled down from the registry and started in a container. Switch to your browser and connect to the app on port 3000 to test. In this case we’re running the container on our local machine, so we can access it via localhost at the following URL:
http://localhost:3000/hello/test
Output
Hello, test!
Success! You can stop the container with CTRL-C
. From now on, every time we push new code to the master
branch of our repository, we’ll automatically build and test a new hello_hapi:latest
image.
In this tutorial we set up a new GitLab runner to build Docker images, created a private Docker registry to store them in, and updated a Node.js app to be built and tested inside of Docker containers.
To learn more about the various components used in this setup, you can read the official documentation of GitLab CE, GitLab Container Registry, and Docker.
GitLab Community Edition is a self-hosted Git repository provider with additional features to help with project management and software development. One of the most valuable features that GitLab offers is the built-in continuous integration and delivery tool called GitLab CI.
In this guide, we will demonstrate how to set up GitLab CI to monitor your repositories for changes and run automated tests to validate new code. We will start with a running GitLab installation where we will copy an example repository for a basic Node.js application. After configuring our CI process, when a new commit is pushed to the repository GitLab will use a CI runner to execute the test suite against the code in an isolated Docker container.
Before we begin, you’ll need to set up an initial environment. We need a secure GitLab server configured to store our code and manage our CI/CD processes. Additionally, we need a place to run the automated tests. This can either be the same server that GitLab is installed on or a separate host. The below sections cover the requirements in more detail.
To store the source code and configure our CI/CD tasks, we need a GitLab instance installed on an Ubuntu 16.04 server. GitLab currently recommends a server with at least 2 CPU cores and 4GB of RAM. To protect your code from being exposed or tampered with, the GitLab instance will be protected with SSL using Let’s Encrypt. Your server needs to have a domain name or a subdomain associated with it in order to complete this step.
You can complete these requirements using the following tutorials:
- Set up the server with a sudo user and configure a basic firewall
- Install and secure GitLab on the server
We will be demonstrating how to share CI/CD runners (the components that run the automated tests) between projects and how to lock them to single projects. If you wish to share CI runners between projects, we strongly recommend that you restrict or disable public sign-ups. If you didn’t modify your settings during installation, go back and follow the optional step from the GitLab installation article on restricting or disabling sign-ups to prevent abuse by outside parties.
GitLab CI Runners are the servers that check out the code and run automated tests to validate new changes. To isolate the testing environment, we will be running all of our automated tests within Docker containers. To do this, we need to install Docker on the server or servers that will be running the tests.
This step can be completed on the GitLab server or on a different Ubuntu 16.04 server to provide additional isolation and avoid resource contention. The following tutorials will install Docker on the host you wish to use to run your tests:
- Set up the server with a sudo user and configure a basic firewall (you do not have to complete this again if you are setting up the CI runner on the GitLab server)
- Install Docker on the server
When you are ready to begin, continue with this guide.
To begin, we will create a new project in GitLab containing the example Node.js application. We will import the original repository directly from GitHub so that we do not have to upload it manually.
Log into GitLab and click the plus icon in the upper-right corner and select New project to add a new project:
On the new project page, click on the Import project tab:
Next, click on the Repo by URL button. Although there is a GitHub import option, it requires a Personal access token and is used to import the repository and additional information. We are only interested in the code and the Git history, so importing by URL is easier.
In the Git repository URL field, enter the following GitHub repository URL:
https://github.com/do-community/hello_hapi.git
It should look like this:
Since this is a demonstration, it’s probably best to keep the repository marked Private. When you are finished, click Create project.
The new project will be created based on the repository imported from GitHub.
GitLab CI looks for a file called .gitlab-ci.yml
within each repository to determine how it should test the code. The repository we imported has a .gitlab-ci.yml
file already configured for the project. You can learn more about the format by reading the .gitlab-ci.yml reference documentation.
Click on the .gitlab-ci.yml
file in the GitLab interface for the project we just created. The CI configuration should look like this:
image: node:latest
stages:
- build
- test
cache:
paths:
- node_modules/
install_dependencies:
stage: build
script:
- npm install
artifacts:
paths:
- node_modules/
test_with_lab:
stage: test
script: npm test
The file uses the GitLab CI YAML configuration syntax to define the actions that should be taken, the order they should execute, under what conditions they should be run, and the resources necessary to complete each task. When writing your own GitLab CI files, you can visit a syntax linter by going to /ci/lint
in your GitLab instance to validate that your file is formatted correctly.
The configuration file starts off by declaring a Docker image
that should be used to run the test suite. Since Hapi is a Node.js framework, we are using the latest Node.js image:
image: node:latest
Next, we explicitly define different continuous integration stages that will run:
stages:
- build
- test
The names you choose here are arbitrary, but the ordering determines the order of execution for the steps that will follow. Stages are tags that you can apply to individual jobs. GitLab will run jobs of the same stage in parallel and will wait to execute the next stage until all jobs at the current stage are complete. If no stages are defined, GitLab will use three stages called build
, test
, and deploy
and assign all jobs to the test
stage by default.
After defining the stages, the configuration includes a cache
definition:
cache:
paths:
- node_modules/
This specifies files or directories that can be cached (saved for later use) between runs or stages. This can help decrease the amount of time that it takes to run jobs that rely on resources that might not change between runs. Here, we are caching the node_modules
directory, which is where npm
will install the dependencies it downloads.
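If you want separate caches for different branches, GitLab CI also accepts a cache key; for example, keying the cache on the predefined $CI_COMMIT_REF_NAME variable:

```yaml
cache:
  key: "$CI_COMMIT_REF_NAME"   # one cache per branch
  paths:
    - node_modules/
```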
Our first job is called install_dependencies
:
install_dependencies:
stage: build
script:
- npm install
artifacts:
paths:
- node_modules/
Jobs can be named anything, but because the names will be used in the GitLab UI, descriptive names are helpful. Usually, npm install
can be combined with the next testing stages, but to better demonstrate the interaction between stages, we are extracting this step to run in its own stage.
We explicitly assign the job to the “build” stage with the stage
directive. Next, we specify the actual commands to run using the script
directive. You can include multiple commands by adding additional lines within the script
section.
The artifacts
subsection is used to specify file or directory paths to save and pass between stages. Because the npm install
command installs the dependencies for the project, our next step will need access to the downloaded files. Declaring the node_modules
path ensures that the next stage will have access to the files. These will also be available to view or download in the GitLab UI after the test, so this is useful for build artifacts like binaries as well. If you want to save everything produced during the stage, replace the entire paths
section with untracked: true
.
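For instance, to save every file that Git does not track, the artifacts section could instead be written as:

```yaml
artifacts:
  untracked: true
```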
Finally, the second job called test_with_lab
declares the command that will actually run the test suite:
test_with_lab:
stage: test
script: npm test
We place this in the test
stage. Since this is a later stage, it has access to the artifacts produced by the build
stage, which are the project dependencies in our case. Here, the script
section demonstrates the single-line YAML syntax that can be used when there’s only a single item. We could have used this same syntax in the previous job as well since only one command was specified.
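Using that single-line form, the earlier install_dependencies job could equivalently be written as:

```yaml
install_dependencies:
  stage: build
  script: npm install
  artifacts:
    paths:
      - node_modules/
```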
Now that you have a basic idea of how the .gitlab-ci.yml
file defines CI/CD tasks, we can define one or more runners capable of executing the testing plan.
Since our repository includes a .gitlab-ci.yml
file, any new commits will trigger a new CI run. If no runners are available, the CI run will be set to “pending”. Before we define a runner, let’s trigger a CI run to see what a job looks like in the pending state. Once a runner is available, it will immediately pick up the pending run.
Back in the hello_hapi
GitLab project repository view, click on the plus sign next to the branch and project name and select New file from the menu:
On the next page, enter dummy_file
in the File name field and enter some text in the main editing window:
Click Commit changes at the bottom when you are finished.
Now, return to the main project page. A small paused icon will be attached to the most recent commit. If you mouse over the icon, it will display “Commit: pending”:
This means that the tests that validate code changes have not been run yet.
To get more information, go to the top of the page and click Pipelines. You will be taken to the pipeline overview page, where you can see that the CI run is marked as pending and labeled as “stuck”:
Note: Along the right-hand side is a button for the CI Lint tool. This is where you can check the syntax of any .gitlab-ci.yml
files you write.
From here, you can click the pending status to get more details about the run. This view displays the different stages of our run, as well as the individual jobs associated with each stage:
Finally, click on the install_dependencies job. This will give you the specific details about what is delaying the run:
Here, the message indicates that the job is stuck because of a lack of runners. This is expected since we haven’t configured any yet. Once a runner is available, this same interface can be used to see the output. This is also the location where you can download artifacts produced during the build.
Now that we know what a pending job looks like, we can assign a CI runner to our project to pick up the pending job.
We’re now ready to set up a GitLab CI runner. To do this, we need to install the GitLab CI runner package on the system and start the GitLab runner service. The service can run multiple runner instances for different projects.
As mentioned in the prerequisites, you can complete these steps on the same server that hosts your GitLab instance or a different server if you want to be sure to avoid resource contention. Remember that whichever host you choose, you need Docker installed for the configuration we will be using.
The process of installing the GitLab CI runner service is similar to the process used to install GitLab itself. We will download a script to add a GitLab repository to our apt
source list. After running the script, we will download the runner package. We can then configure it to serve our GitLab instance.
Start by downloading the latest version of the GitLab CI runner repository configuration script to the /tmp
directory (this is a different repository than the one used by the GitLab server):
- curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh -o /tmp/gl-runner.deb.sh
Feel free to examine the downloaded script to ensure that you are comfortable with the actions that it will take:
- less /tmp/gl-runner.deb.sh
Once you are satisfied with the safety of the script, run the installer:
- sudo bash /tmp/gl-runner.deb.sh
The script will set up your server to use the GitLab maintained repositories. This lets you manage GitLab runner packages with the same package management tools you use for your other system packages. Once this is complete, you can proceed with the installation using apt-get
:
- sudo apt-get install gitlab-runner
This will install the GitLab CI runner package on the system and start the GitLab runner service.
Next, we need to set up a GitLab CI runner so that it can begin accepting work.
To do this, we need a GitLab runner token so that the runner can authenticate with the GitLab server. The type of token we need depends on how we want to use this runner.
A project-specific runner is useful if you have specific requirements for the runner. For instance, if your .gitlab-ci.yml
file defines deployment tasks that require credentials, a specific runner may be required to authenticate correctly into the deployment environment. If your project has resource-intensive steps in the CI process, this might also be a good idea. A project-specific runner will not accept jobs from other projects.
On the other hand, a shared runner is a general purpose runner that can be used by multiple projects. Runners will take jobs from the projects according to an algorithm that accounts for the number of jobs currently being run for each project. This type of runner is more flexible. You will need to log into GitLab with an admin account to set up shared runners.
We will demonstrate how to get the runner tokens for both of these runner types below. Choose the method that suits you best.
If you would like the runner to be tied to a specific project, begin by navigating to the project’s page in the GitLab interface.
From here, click the Settings item in the left-hand menu. Afterwards, click the CI/CD item in the submenu:
On this page, you will see a Runners settings section. Click the Expand button to see more details. In the detail view, the left-hand side will explain how to register a project-specific runner. Copy the registration token displayed in step 4 of the instructions:
If you wish to disable any active shared runners for this project, you can do so by clicking the Disable shared Runners button on the right-hand side. This is optional.
When you are ready, skip ahead to learn how to register your runner using the pieces of information you collected from this page.
To find the information required to register a shared runner, you will need to be logged in with an administrative account.
Begin by clicking the wrench icon in the top navigation bar to access the admin area. In the Overview section of the left-hand menu, click Runners to access the shared runner configuration page:
Copy the registration token displayed towards the top of the page:
We will use this token to register a GitLab CI runner for the project.
Now that you have a token, go back to the server where your GitLab CI runner service is installed.
To register a new runner, type the following command:
- sudo gitlab-runner register
You will be asked a series of questions to configure the runner:
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/)
Enter your GitLab server’s domain name, using https://
to specify SSL. You can optionally append /ci
to the end of your domain, but recent versions will redirect automatically.
Please enter the gitlab-ci token for this runner
The token you copied in the last section.
Please enter the gitlab-ci description for this runner
A name for this particular runner. This will show up in the runner service’s list of runners on the command line and in the GitLab interface.
Please enter the gitlab-ci tags for this runner (comma separated)
These are tags that you can assign to the runner. GitLab jobs can express requirements in terms of these tags to make sure they are run on a host with the correct dependencies.
You can leave this blank in this case.
Whether to lock Runner to current project [true/false]
Assigns the runner to the specific project. It cannot be used by other projects.
Select “false” here.
Please enter the executor
The method used by the runner to complete jobs.
Choose “docker” here.
Please enter the default Docker image (e.g. ruby:2.1)
The default image used to run jobs when the .gitlab-ci.yml
file does not include an image specification. It’s best to specify a general image here and define more specific images in your .gitlab-ci.yml
file as we have done.
We will enter “alpine:latest” here as a small, secure default.
After answering the prompts, a new runner will be created capable of running your project’s CI/CD tasks.
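If you prefer to script runner setup, the same answers can be passed as flags in a single non-interactive command; a sketch, assuming your own URL and registration token in place of the placeholders:

```shell
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "your-token" \
  --description "example-runner" \
  --executor "docker" \
  --docker-image "alpine:latest"
```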
You can see the runners that the GitLab CI runner service currently has available by typing:
- sudo gitlab-runner list
Output
Listing configured runners ConfigFile=/etc/gitlab-runner/config.toml
example-runner Executor=docker Token=e746250e282d197baa83c67eda2c0b URL=https://example.com
Now that we have a runner available, we can return to the project in GitLab.
Back in your web browser, return to your project in GitLab. Depending on how long it has been since registering your runner, the runner may be currently running:
Or it might have completed already:
Regardless of the state, click on the running or passed icon (or failed if you ran into a problem) to view the current state of the CI run. You can get a similar view by clicking the top Pipelines menu.
You will be taken to the pipeline overview page where you can see the status of the GitLab CI run:
Under the Stages header, there will be a circle indicating the status of each of the stages in the run. If you click on the stage, you can see the individual jobs associated with the stage:
Click on the install_dependencies job within the build stage. This will take you to the job overview page:
Now, instead of displaying a message about no runners being available, the output of the job is displayed. In our case, this means that you can see the results of npm
installing each of the packages.
Along the right-hand side, you can see some other items as well. You can view other jobs by changing the Stage and clicking the runs below. You can also view or download any artifacts produced by the run.
In this guide, we’ve added a demonstration project to a GitLab instance to showcase the continuous integration and deployment capabilities of GitLab CI. We discussed how to define a pipeline in gitlab-ci.yml
files to build and test your applications and how to assign jobs to stages to define their relationship to one another. We then set up a GitLab CI runner to pick up CI jobs for our project and demonstrated how to find information about individual GitLab CI runs.
This article covers an older method of configuring GitLab with Let’s Encrypt manually. As of GitLab version 10.5, Let’s Encrypt support is available natively within GitLab.
Our guide on How To Install and Configure GitLab on Ubuntu 16.04 has been updated to include the relevant configuration settings within GitLab. We recommend referring to that guide moving forward.
GitLab, specifically GitLab CE (Community Edition), is an open source application primarily used to host Git repositories, with additional development-related features like issue tracking. The GitLab project makes it relatively straightforward to set up a GitLab instance on your own hardware with an easy installation mechanism.
By default, GitLab serves pages over plain, unencrypted HTTP. Like any web application that handles sensitive information like login credentials, GitLab should be configured to serve pages over TLS/SSL to encrypt data in transit. This is extremely important with GitLab since your project’s code base could be altered by someone able to intercept your login credentials.
The Let’s Encrypt project can be used to easily obtain trusted SSL certificates for any website or web application. Let’s Encrypt offers certificates signed by their certificate authority, which is trusted by all modern web browsers, if you can prove that you own the domain you are requesting a certificate for.
In this guide, we will demonstrate how to configure a GitLab instance installed on Ubuntu 16.04 to use a trusted SSL certificate obtained from Let’s Encrypt. This will secure all outgoing communication to users and ensure that passwords, code, and any other communications are protected from being read or tampered with by outside parties.
To complete this guide, you will need to have a GitLab instance installed on an Ubuntu 16.04 server. We will assume that you have followed our how to install and configure GitLab on Ubuntu 16.04 guide to get this set up.
In order to obtain a certificate from Let’s Encrypt, your server must be configured with a fully qualified domain name (FQDN). If you do not already have a registered domain name, you may register one with one of the many domain name registrars out there (e.g. Namecheap, GoDaddy, etc.).
If you haven’t already, be sure to create an A Record that points your domain to the public IP address of your server. This is required because of how Let’s Encrypt validates that you own the domain it is issuing a certificate for. For example, if you want to obtain a certificate for gitlab.example.com
, that domain must resolve to your server for the validation process to work. You will need a real domain with valid DNS records pointing to your server to successfully complete this guide.
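You can check that the record is in place with dig before continuing; gitlab.example.com below is a placeholder for your own domain, and the output should be your server’s public IP address:

```shell
dig +short gitlab.example.com
```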
Before we can obtain an SSL certificate for our GitLab installation, we will need to download and install Certbot, the official Let’s Encrypt client.
The Certbot developers maintain their own Ubuntu software repository with up-to-date versions of the software. Because Certbot is in such active development, it’s worth using this repository to install a newer Certbot than the one provided by Ubuntu.
First, add the repository:
- sudo add-apt-repository ppa:certbot/certbot
You’ll need to press ENTER
to accept. Afterwards, update the package list to pick up the new repository’s package information:
- sudo apt-get update
And finally, install Certbot with apt-get
:
- sudo apt-get install certbot
Now that Certbot is installed, we can prepare our server so that it can respond successfully to the domain ownership verification tests that Let’s Encrypt requires before issuing a certificate.
In order to receive an SSL certificate from the Let’s Encrypt certificate authority, we must prove that we own the domain that the certificate will be provided for. There are multiple methods of proving domain ownership, each of which require root or administrator access to the server.
GitLab contains an internally managed Nginx web server for serving the application itself. This makes the installation rather self-contained, but it does add an additional layer of complexity when attempting to modify the web server itself.
Since the embedded Nginx is currently being utilized to serve GitLab itself, the best domain validation method is the web root method. Certbot will use the existing web server to serve a known file from the server on port 80. This proves to the certificate authority that the person requesting the certificate has administrative control over the web server, which effectively proves ownership over the server and domain.
To set up web root domain validation for GitLab, our first step will be to create a dummy document root:
- sudo mkdir -p /var/www/letsencrypt
This will be unused by normal Nginx operations, but will be used by Certbot for domain verification.
Next, we need to adjust GitLab’s Nginx configuration to use this directory. Open up the main GitLab configuration file by typing:
- sudo nano /etc/gitlab/gitlab.rb
Inside, we need to add a line that will inject a custom directive into GitLab’s Nginx configuration file. It’s probably best to scroll down to the GitLab Nginx section of the file, but the line can be placed anywhere.
Paste in the following line:
. . .
nginx['custom_gitlab_server_config'] = "location ^~ /.well-known { root /var/www/letsencrypt; }"
. . .
The Let’s Encrypt web root verification method places a file within a .well-known
directory in a document root so that the certificate authority can validate it. This line tells Nginx to serve requests for /.well-known
from the web root we created a moment ago.
When you are finished, save and close the file.
Next, apply the changes to GitLab’s Nginx configuration by reconfiguring the application again:
- sudo gitlab-ctl reconfigure
The server should now be set up to successfully validate your domain.
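If you’d like to sanity-check the new location block before requesting a certificate, you can serve a test file through it (your_domain is a placeholder for your own domain):

```shell
# Place a test file where the .well-known location will look for it
sudo mkdir -p /var/www/letsencrypt/.well-known
echo "verification test" | sudo tee /var/www/letsencrypt/.well-known/test.txt

# Request it over HTTP; this should print back "verification test"
curl http://your_domain/.well-known/test.txt
```

You can remove the test file afterwards with sudo rm /var/www/letsencrypt/.well-known/test.txt.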
Now that GitLab’s Nginx instance is configured with the necessary location block, we can use Certbot to validate our domain name and request a certificate.
Because we only want a certificate and do not wish to automatically reconfigure the web server, we will use the certonly
subcommand. We will specify three options: the web root authenticator (--webroot
), the document root (--webroot-path=/var/www/letsencrypt
), and the -d
option to pass our domain name:
- sudo certbot certonly --webroot --webroot-path=/var/www/letsencrypt -d your_domain
You will be asked to provide an email address. It is important to include a valid email address as this is the only way to reliably receive emails about certificate expirations and other important information. You will also be prompted to accept the Let’s Encrypt terms of service.
Once you are finished, Let’s Encrypt should issue you a certificate for the domain if it was able to correctly validate ownership. You should see output that looks similar to this:
Output
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at
/etc/letsencrypt/live/gitlab.example.com/fullchain.pem. Your cert
will expire on 2017-07-26. To obtain a new or tweaked version of
this certificate in the future, simply run certbot again. To
non-interactively renew *all* of your certificates, run "certbot
renew"
- If you lose your account credentials, you can recover through
e-mails sent to sammy@example.com.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
You can find all of the certificates and keys that were created by looking at the /etc/letsencrypt/live/your_domain
directory with sudo
privileges:
- sudo ls /etc/letsencrypt/live/your_domain
Output
cert.pem chain.pem fullchain.pem privkey.pem
For our configuration, we will only need to know the full path to the fullchain.pem
and privkey.pem
files.
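You can confirm the certificate’s validity window with openssl, again substituting your own domain in the path:

```shell
sudo openssl x509 -in /etc/letsencrypt/live/your_domain/fullchain.pem -noout -dates
```

This prints the certificate’s notBefore and notAfter dates.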
Now that we have obtained trusted certificates from Let’s Encrypt, we can configure GitLab to use TLS/SSL for all of its traffic.
Start by opening up the GitLab configuration file again:
- sudo nano /etc/gitlab/gitlab.rb
At the top, change the external_url
. Currently, it is likely pointing to http://your_domain
. We just need to change the http
to https
:
. . .
external_url 'https://your_domain'
. . .
Next, scroll back down to the GitLab Nginx section. Uncomment and modify, or simply add, the following lines.
The redirect line tells Nginx to automatically redirect requests made to the HTTP port 80 to the HTTPS port 443. The ssl_certificate
line should point to the full path of the fullchain.pem
file, while the ssl_certificate_key
line should point to the full path of the privkey.pem
file:
. . .
nginx['redirect_http_to_https'] = true
. . .
nginx['ssl_certificate'] = "/etc/letsencrypt/live/your_domain/fullchain.pem"
nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/your_domain/privkey.pem"
. . .
Save and close the file when you are finished.
Next, before reloading GitLab’s Nginx configuration, make sure that HTTPS traffic is allowed through your server’s firewall. You can open up port 443 for this purpose by typing:
- sudo ufw allow https
OutputRule added
Rule added (v6)
Check that port 443 is open by typing:
- sudo ufw status
OutputStatus: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
80 ALLOW Anywhere
443 ALLOW Anywhere
OpenSSH (v6) ALLOW Anywhere (v6)
80 (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
As you can see, port 443 is now exposed.
Now, reconfigure GitLab again to implement your changes:
- sudo gitlab-ctl reconfigure
Your GitLab instance should now be accessible over HTTPS using your trusted Let’s Encrypt certificate. You can test this by visiting your GitLab server’s domain name. Since we redirect HTTP to HTTPS, this should work without explicitly specifying a protocol:
http://your_domain
Your browser should automatically redirect you to use HTTPS, and the address bar should show some indication that the site is secured.
Your GitLab installation is now protected with a TLS/SSL certificate.
Let’s Encrypt’s certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The certbot package we installed takes care of this by running certbot renew twice a day via a systemd timer; on non-systemd distributions, this functionality is provided by a script placed in /etc/cron.d. Either way, any certificate that is within thirty days of expiration is renewed automatically.
To test the renewal process, you can do a dry run with certbot:
- sudo certbot renew --dry-run
If you see no errors, you’re all set. When necessary, Certbot will renew your certificates and reload Nginx to pick up the changes. If the automated renewal process ever fails, Let’s Encrypt will send a message to the email you specified, warning you when your certificate is about to expire.
Your GitLab instance should now be protected by a TLS/SSL certificate that is trusted by all modern browsers. While configuring the bundled Nginx instance is a bit more involved than configuring a standalone Nginx web server, GitLab exposes enough of the Nginx configuration, including custom location blocks, to make the process manageable.
Now that your GitLab instance is secure, it can be safely used to manage projects, host code repositories, and configure continuous integration. You can learn about using GitLab to automatically test each commit to your repository in our article on setting up continuous integration pipelines with GitLab CI.
Firstly, I am currently running an Ubuntu droplet with Apache 2 installed, and I’m not sure what would be better: renting a second droplet and setting up a private Git server on it, or just storing all our files on GitHub. I get that using GitHub would be cheaper, but I’d prefer not to have all of my work out in the public domain.
Secondly, if I went down the road of using Git, how would I be able to only allow users access to edit certain files at a time, rather than access to edit every file in the repository?
Relying on a source code repository for versioning is a best practice that can get us back up and running when a code change causes our application to crash or to behave erratically. However, in case of a catastrophic event like a full branch getting accidentally deleted or losing access to a repository, we should leverage additional disaster recovery strategies.
Backing up our code repository into an object storage infrastructure provides us with an off-site copy of our data that we can recover when needed. Spaces is DigitalOcean’s object storage solution that offers a destination for users to store backups of digital assets, documents, and code.
Compatible with the S3 API, Spaces allows us to use S3 tools like S3cmd to interface with it. S3cmd is a client tool that we can use for uploading, retrieving, and managing data from object storage through the command line or through scripting.
In this tutorial we will demonstrate how to back up a remote Git repository into a DigitalOcean Space using S3cmd. To achieve this goal, we will install and configure Git, install S3cmd, and create scripts to back up the Git repository into our Space.
In order to work with Spaces, you’ll need a DigitalOcean account. If you don’t already have one, you can register on the signup page.
From there, you’ll need to set up your DigitalOcean Space and create an API key, which you can achieve by following our tutorial How To Create a DigitalOcean Space and API Key.
Once created, you’ll need to keep the following details about your Space handy:
Additionally, you should have an Ubuntu 16.04 server set up with a sudo non-root user. You can get guidance for setting this up by following this Ubuntu 16.04 initial server setup tutorial.
Once you have your Spaces information and server set up, proceed to the next section to install Git.
In this tutorial, we’ll be working with a remote Git repository that we’ll clone to our server. Ubuntu has Git installed and ready to use in its default repositories, but this version may be older than the most recent available release.
We can use the apt package management tools to update the local package index and then download and install the most recent available version of Git:
- sudo apt-get update
- sudo apt-get install git
For a more flexible way to install Git and to ensure that you have the latest release, you can consider installing Git from Source.
We’ll be backing up from a Git repository’s URL, so we will not need to configure Git in this tutorial. For guidance on configuring Git, read this section on How To Set Up Git.
Now we’ll move on to cloning our remote Git repository.
In order to clone our Git repository, we’ll create a script to perform the task. Creating a script allows us to use variables and helps ensure that we do not make errors on the command line.
To write our executable script, we’ll create a new shell script file called cloneremote.sh with the nano text editor:
- nano cloneremote.sh
Within this blank file, let’s write the following script.
#!/bin/bash
remoterepo=your_remote_repository_url
localclonedir=repos
clonefilename=demoprojectlocal.git
git clone --mirror $remoterepo $localclonedir/$clonefilename
Let’s walk through each element of this script.
The first line, #!/bin/bash, indicates that the script will be run by the Bash shell. From there, we define the variables that will be used in the command, which will run once we execute the script. These variables define the following pieces of configuration:
- remoterepo is assigned the remote Git repository URL that we will be backing up from
- localclonedir refers to the server directory or folder that we will be cloning the remote repository into; in this case we have called it repos
- clonefilename refers to the filename we will provide to the local cloned repository; in this case we have called it demoprojectlocal.git
Each of these variables is then used in the command at the end of the script.
The last line of the script uses the Git command-line client, beginning with the git command. From there, we request to clone a repository with clone, and make it a mirror version of the repository with the --mirror flag. This means that the cloned repository will be exactly the same as the original one. The three variables that we defined above are referenced with $.
When you are satisfied that the script you have written is accurate, you can exit nano by pressing CTRL + X, and when prompted to save the file, press y.
At this point we can run the shell script with the following command.
- sh cloneremote.sh
Once you run the command, you’ll receive output similar to the following.
OutputCloning into bare repository './repos/demoprojectlocal.git'...
remote: Counting objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (3/3), done.
Checking connectivity... done.
At this point, if you list the items in your current directory, you should see your backup directory there, and if you move into that directory you’ll see the sub-folder with the filename that you provided in the script. That subdirectory is the clone of the Git repository.
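If you would like to see what --mirror produces before pointing the script at a real remote, the following sketch builds a throwaway repository in a temporary directory (all paths and names are illustrative) and mirror-clones it:

```shell
#!/bin/sh
set -e

# Build a throwaway "remote" repository with a single commit
src=$(mktemp -d)
git -C "$src" init -q
git -C "$src" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# Mirror-clone it, just as cloneremote.sh does for the real remote
git clone -q --mirror "$src" "$src-mirror.git"

# A mirror is a bare repository carrying the same refs as the source
git --git-dir="$src-mirror.git" rev-parse --is-bare-repository
git --git-dir="$src-mirror.git" rev-parse HEAD
```

Because the mirror is bare, it has no working tree; it is purely the repository data, which is exactly what we want for a backup.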
With our remote Git repository cloned, we can now move on to installing S3cmd, which we can use to back up the repository into object storage.
The S3cmd tool allows us to connect to the Spaces environment from the command line. We’ll download the latest version of S3cmd from its public GitHub repository and follow the recommended guidelines for installing it.
Before installing S3cmd, we need to install Python’s Setuptools, as it will help with our installation (S3cmd is written in Python).
- sudo apt-get install python-setuptools
Press y to continue.
With Setuptools installed, we can now download the S3cmd tar.gz file with curl.
- cd /tmp
- curl -LO https://github.com/s3tools/s3cmd/releases/download/v2.0.1/s3cmd-2.0.1.tar.gz
Note that we are downloading the file into the tmp directory. This is a common practice when downloading files onto a server.
You can check to see if there is a newer version of S3cmd available by visiting the Releases page of the tool’s GitHub repository. If you find a newer version, you can copy the tar.gz URL and substitute it into the curl command above.
When the download has completed, extract the archive using the tar utility:
- cd ~
- tar xf /tmp/s3cmd-*.tar.gz
In the commands above, we changed back to our home directory and then executed the tar command. We used two flags: the x indicates that we want to extract from a tar file, and the f indicates that the immediately following string is the path of the file we want to expand. That path points to the downloaded archive in the tmp directory.
Once the file is extracted, change into the resulting directory and install the software using sudo:
- cd s3cmd-*
- sudo python setup.py install
For the above command to run, we need to use sudo. The python command invokes the Python interpreter to run the setup.py installation script.
Test the install by asking S3cmd for its version information:
- s3cmd --version
Outputs3cmd version 2.0.1
If you see similar output, S3cmd has been successfully installed. Next, we’ll configure S3cmd to connect to our object storage service.
S3cmd has an interactive configuration process that can create the configuration file we need to connect to our object storage server. During the configuration process, you will be asked for your Access Key and Secret Key, so have them readily available.
Let’s start the configuration process by typing the following command:
- s3cmd --configure
We are prompted to enter our keys, so let’s paste them in and then accept US for the Default Region. It is worth noting that being able to modify the Default Region is relevant for the AWS infrastructure that S3cmd was originally created to work with. Because DigitalOcean requires fewer pieces of information for configuration, this is not relevant, so we accept the default.
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key []: EXAMPLE7UQOTHDTF3GK4
Secret Key []: b8e1ec97b97bff326955375c5example
Default Region [US]:
Next, we’ll enter the DigitalOcean endpoint, nyc3.digitaloceanspaces.com.
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: nyc3.digitaloceanspaces.com
Because Spaces supports DNS-based buckets, at the next prompt we’ll supply the bucket value in the required format:
%(bucket)s.nyc3.digitaloceanspaces.com
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket []: %(bucket)s.nyc3.digitaloceanspaces.com
At this point, we’re asked to supply an encryption password. We’ll enter a password so it will be available in the event we want to use encryption.
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: secure_password
Path to GPG program [/usr/bin/gpg]:
We’re next prompted to connect via HTTPS. DigitalOcean Spaces does not support unencrypted transfer, so we’ll press ENTER to accept the default, Yes.
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]:
Since we aren’t using an HTTP proxy server, we’ll leave the next prompt blank and press ENTER.
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
After the prompt for the HTTP Proxy server name, the configuration script presents a summary of the values it will use, followed by the opportunity to test them. When the test completes successfully, enter Y to save the settings.
Once you save the configuration, you’ll receive confirmation of its location.
When you have completed all the installation steps, you can double-check that your setup is correct by running the following command.
- s3cmd ls
This command should output a list of Spaces that you have available under the credentials you provided.
Output2017-12-15 02:52 s3://demospace
This confirms that we have successfully connected to our DigitalOcean Spaces. We can now move on to backing up our Git repository into object storage.
With all of our tools installed and configured, we are now going to create a script that will zip the local repository and push it into our DigitalOcean Space.
From our home directory, let’s call our script movetospaces.sh and open it in nano:
- cd ~
- nano movetospaces.sh
We’ll write our script as follows.
#!/bin/sh
tar -zcvf archivedemoproject.tar.gz ~/repos/demoprojectlocal.git
./s3cmd-2.0.1/s3cmd put archivedemoproject.tar.gz s3://demospace
Earlier in this tutorial we used tar to extract the S3cmd archive; now we are using tar to compress the Git repository before sending it to Spaces. Note that the repository was cloned into the repos directory under our home directory, so we reference it as ~/repos/demoprojectlocal.git. In the tar command, we specify four flags:
- z compresses the archive using the gzip method
- c creates a new archive instead of using an existing one
- v verbosely lists the files being included in the archive
- f names the resulting file with the name defined in the next argument
After the flags, we provide a file name for the compressed file, in this case archivedemoproject.tar.gz, as well as the directory that we want to compress, ~/repos/demoprojectlocal.git.
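To see these flags in action without touching the real repository, here is a small sketch that compresses a throwaway directory (all names here are illustrative) and then lists the archive contents with the t flag:

```shell
#!/bin/sh
set -e

# Create a throwaway directory with one file in it
workdir=$(mktemp -d)
mkdir "$workdir/demoproject"
echo "hello" > "$workdir/demoproject/file.txt"

# Same flags as the backup script: gzip, create, verbose, file name;
# -C changes into the directory first so archive paths stay relative
tar -zcvf "$workdir/archive.tar.gz" -C "$workdir" demoproject

# -t lists an archive's contents without extracting it
tar -tzf "$workdir/archive.tar.gz"
```

The -t listing is a quick way to confirm that a backup archive actually contains what you expect before uploading it.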
The script then executes s3cmd put to send archivedemoproject.tar.gz to our destination Space, s3://demospace.
Among the commands you may commonly use with S3cmd, the put command sends files to Spaces. Other useful commands include get, which downloads files from a Space, and del, which deletes files. You can obtain a list of all commands accepted by S3cmd by executing s3cmd with no options.
To copy your backup into your Space, we’ll execute the script.
- sh movetospaces.sh
You will see the following output:
Outputdemoprojectlocal.git/
...
demoprojectlocal.git/packed-refs
upload: 'archivedemoproject.tar.gz' -> 's3://demobucket/archivedemoproject.tar.gz' [1 of 1]
6866 of 6866 100% in 0s 89.77 kB/s done
You can check that the process worked correctly by running the following command:
- s3cmd ls s3://demospace
You’ll see the following output, indicating that the file is in your Space.
Output2017-12-18 20:31 6866 s3://demospace/archivedemoproject.tar.gz
We now have successfully backed up our Git repository into our DigitalOcean Space.
To ensure that code can be quickly recovered if needed, it is important to maintain backups. In this tutorial, we covered how to back up a remote Git repository into a DigitalOcean Space through using Git, the S3cmd client, and shell scripts. This is just one method of dozens of possible scenarios in which you can use Spaces to help with your disaster recovery and data consistency strategies.
You can learn more about what we can store in object storage by reading the following tutorials:
It is my first time deploying by myself, so I am following this tutorial: https://devmarketer.io/learn/deploy-laravel-5-app-lemp-stack-ubuntu-nginx/. I am having an issue when pushing to the server: I receive the error below when I run “git push production master”. Any idea?
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Thank you in advance. Shola
The problem comes with setting up SSL. You see, I followed this guide here to generate an SSL certificate, which worked when I was setting up NextCloud.
The thing is, the majority of guides tell me how to set up GitLab with Let’s Encrypt (such as here), when I want to set it up with a self-signed SSL certificate (which I made in that guide).
However, the files I get when making the self-signed certificates (apache-selfsigned.key, apache-selfsigned.crt, dhparam.pem) are not the same as the ones I need (cert.pem, chain.pem, fullchain.pem, privkey.pem).
Can anyone help me set up GitLab using a self-signed certificate? I can make another self-signed certificate if needed.
Version control software (VCS) is an essential part of most modern software development practices. Among other benefits, software like Git, Mercurial, Bazaar, Perforce, CVS, and Subversion allow developers to save snapshots of their project history to enable better collaboration, revert to previous states and recover from unintended code changes, and manage multiple versions of the same codebase. These tools allow multiple developers to safely work on the same project and provide significant benefits even if you do not plan to share your work with others.
Although it is important to save your code in source control, it is equally important for some project assets to be kept out of your repository. Certain data like binary blobs and configuration files are best left out of source control for performance and usability reasons. But more importantly, sensitive data like passwords, secrets, and private keys should never be checked into a repository unprotected, for security reasons.
In this guide, we will first talk about how to check for sensitive data already committed to your repository and introduce some mitigation strategies if any material is found. Afterwards, we will cover some tools and techniques for preventing the addition of secrets to repositories, ways to encrypt sensitive data before committing, and alternatives for secure secret storage.
Before setting up a system to manage your sensitive data, it’s a good idea to check whether any secret material is already present in your project files.
If you know an exact string that you want to search for, you can try using your VCS tool’s native search functionality to check whether the provided value is present in any commits. For example, with git, a command like this can search for a specific password:
- git grep my_secret $(git rev-list --all)
This will search your entire project history for the specified string.
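As a sketch of what that search returns, the following builds a throwaway repository (names are illustrative), commits a fake secret, and runs the same kind of search across all of its history:

```shell
#!/bin/sh
set -e

# Throwaway repository with one committed "secret"
repo=$(mktemp -d)
git -C "$repo" init -q
echo "db_password=my_secret" > "$repo/settings.ini"
git -C "$repo" add settings.ini
git -C "$repo" -c user.email=demo@example.com -c user.name=demo \
    commit -qm "add settings"

# Search every commit reachable from any ref; each match is printed as
# <commit>:<file>:<matching line>
git -C "$repo" grep my_secret $(git -C "$repo" rev-list --all)
```

Note that git rev-list --all only walks commits reachable from refs; secrets in a collaborator's local clone or in dangling objects will not show up here.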
A number of dedicated tools can help surface secrets more broadly. Tools like gitrob can scan each repository in a GitHub organization for filenames matching those in a predefined list. The git-secrets project can scan repositories locally for defined secrets, based on patterns in both the file paths and content. The truffleHog tool uses a different approach by searching repositories for high entropy strings which likely represent generated secrets used by applications. To combine some of this functionality into a single tool, git-all-secrets glues together or reimplements the above tools in a unified interface.
If you discover files or data that should not have been committed, it’s important to respond appropriately to mitigate the impact of the leaked data. The right course of action will depend on how widely the repository is shared, the nature of the exposed material, and whether you wish to scrub all mention of the leaked content or just invalidate it.
If credentials are committed to your project repository, your first step should be to immediately change the password or secret to invalidate the previous value. This step should be completed regardless of whether or how widely the repository is shared, for a few reasons. Collaboration requirements can change over the life of a project, leading to greater exposure than previously anticipated. Even if you know you will never intentionally share your project, security incidents can leak data to unintended parties, so it’s best to be proactive in changing the current values.
While you should rotate your compromised credentials in all cases, you may wish to remove the leaked credentials or file from your VCS history entirely as well. This is especially important for sensitive data that cannot be changed, like any user data that was unintentionally committed. Removing the data from your repositories involves rewriting the VCS history to remove the file from previous commits. This can be done using native git commands or with the help of some dedicated tools. It is important to note that even if you remove all record of the data in the repository, anyone who had previously copied the codebase may still have access to the sensitive material. Keep this in mind when assessing the extent of the impact.
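As one sketch of the native-git route, git filter-branch can rewrite every commit to drop a file; for real repositories, dedicated tools like git filter-repo or the BFG Repo-Cleaner are generally faster and safer. All names and paths below are illustrative:

```shell
#!/bin/sh
set -e

# Throwaway repository whose second commit adds a secret file
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "print('app')" > app.py
git add app.py && git commit -qm "add app"
echo "api_key=12345" > secrets.txt
git add secrets.txt && git commit -qm "accidentally add secrets"

# Rewrite every commit on every ref, removing secrets.txt from the index
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f \
    --index-filter 'git rm --cached -q --ignore-unmatch secrets.txt' \
    --prune-empty -- --all

# filter-branch keeps the old commits under refs/original; drop that
# backup so the secret is no longer reachable from any ref
git for-each-ref --format='delete %(refname)' refs/original |
    git update-ref --stdin

# Nothing in the remaining history matches the secret
! git grep api_key $(git rev-list --all)
```

Even after this, the old objects may linger until garbage collection runs, and any existing clone still contains them, which is why rotating the credential always comes first.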
If you suspect that secrets were compromised, it is a good idea to review the log data associated with those programs or services to try to determine if there has been unusual access or behavior. This may take the form of unusual activity or requests that usually originate within your internal network coming from addresses you do not control. This investigation will help you determine appropriate next steps for protecting your infrastructure and data.
Before looking at external tools, it is a good idea to familiarize yourself with some of the features and abilities native to your VCS tools to help prevent committing unwanted data to your repository.
The most basic way to keep files with sensitive data out of your repository is to leverage your VCS’s ignore functionality from the very beginning. VCS “ignore” files (like .gitignore) define patterns, directories, or files that should be excluded from the repository. These are a good first line of defense against accidentally exposing data. This strategy is useful because it does not rely on external tooling, the list of excluded items is automatically configured for collaborators, and it is easy to set up.
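For example, a .gitignore along these lines (the entries are illustrative) keeps common secret-bearing files and binary artifacts out of the repository:

```
# Credentials and environment files (illustrative names)
.env
*.pem
*.key
config/secrets.yml

# Binary blobs and local build output
*.o
build/
```

Note that ignore rules only prevent untracked files from being added; a file that was already committed stays tracked until it is explicitly removed from the index.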
While VCS ignore functionality is useful as a baseline, it relies on keeping the ignore definitions up-to-date. It is easy to commit sensitive data accidentally prior to updating or implementing the ignore file. Ignore patterns only have file-level granularity, so you may have to refactor some parts of your project if secrets are mixed in with code or other data that should be committed.
Most modern VCS implementations include a system called “hooks” for executing scripts before or after certain actions are taken within the repository. This functionality can be used to execute a script to check the contents of pending changes for sensitive material. The previously mentioned git-secrets tool has the ability to install pre-commit hooks that implement automatic checking for the type of content it evaluates. You can add your own custom scripts to check for whatever patterns you’d like to guard against.
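As a minimal sketch of a custom hook, the following builds a throwaway repository (names are illustrative), installs a tiny pre-commit script that scans the staged diff for an obvious credential pattern, and shows the hook rejecting a commit. A real hook would use a much richer pattern list:

```shell
#!/bin/sh
set -e

# Throwaway repository; real hooks live in your project's .git/hooks
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" config user.email demo@example.com
git -C "$repo" config user.name demo

# Deliberately small pre-commit hook: reject commits whose staged diff
# looks like it contains credentials; tune the pattern for your project
cat > "$repo/.git/hooks/pre-commit" <<'HOOK'
#!/bin/sh
if git diff --cached | grep -E 'PRIVATE KEY|password *=' >/dev/null; then
    echo "Possible secret detected in staged changes; aborting commit." >&2
    exit 1
fi
HOOK
chmod +x "$repo/.git/hooks/pre-commit"

# A commit that stages a password is now rejected by the hook
echo "password = hunter2" > "$repo/config.txt"
git -C "$repo" add config.txt
if git -C "$repo" commit -qm "leak"; then
    echo "commit unexpectedly succeeded" >&2
else
    echo "hook blocked the commit"
fi
```

Because hooks live outside the repository contents, each collaborator must install this script themselves, which is the enforcement gap described above.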
Repository hooks provide a much more flexible mechanism for searching for and guarding against the addition of sensitive data at the time of commit. This increased flexibility comes at the cost of having to script all of the behavior you’d like to implement, which can potentially be a difficult process depending on the type of data you want to check. An additional consideration is that hooks are not shared as easily as ignore files, as they are not part of the repository that other developers copy. Each contributor will need to set up the hooks on their own machine, which makes enforcement a more difficult problem.
While more localized in scope, one simple strategy that may help you to be more mindful of your commits is to only add items to the VCS staging area explicitly by name. While adding files by wildcard or expansion can save some time, being intentional about each file you want to add can help prevent accidental additions that might otherwise be included. A beneficial side effect of this is that it generally allows you to create more focused and consistent commits, which helps with many other aspects of collaborative work.
While in many circumstances it is recommended to remove sensitive data entirely from your code repository, sometimes it is necessary or useful to include some sensitive data within a repository for other privileged users to access. To do so, various tools allow you to encrypt sensitive files within a repository while leaving the majority of files accessible to everyone.
There are a number of different pieces of software that simplify partial repository encryption. Most work from the same basic principles, but each offers a unique implementation that may offer some compelling advantages depending on your project needs.
A project called git-secret (not to be confused with the git-secrets tool mentioned earlier) can encrypt the contents of secret files with the GPG keys of trusted collaborators. By leveraging an existing web of trust, git-secret users can manage access to files by specifying the users that should be able to decrypt each item. If the user has published their public key to a key server, you can provide them access to encrypted contents without ever asking them for their key directly.
The git-crypt tool works similarly to git-secret in that it allows you to encrypt and commit portions of your repository and regulate access for other contributors using their GPG keys. The git-crypt project can alternatively use symmetric key encryption if your team doesn’t use GPG or if that management pattern is too complex for your use case. Additionally, git-crypt will automatically encrypt at the time of commit and decrypt on clone using the git filter and diff attributes, which simplifies management.
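As a sketch of how that selection works, git-crypt is driven by .gitattributes entries that mark which paths should pass through its filter; the paths here are illustrative:

```
# Encrypt everything under secrets/ plus any .env file with git-crypt
secrets/** filter=git-crypt diff=git-crypt
*.env filter=git-crypt diff=git-crypt
```

Files matching these patterns are stored encrypted in the repository but appear as plain text in the working tree of anyone with an unlocked key.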
The BlackBox project is yet another solution that relies on GPG to collaboratively encrypt content. Unlike the previous tools, BlackBox works with many different version control systems so that it can be used across different projects. Originally designed as a tool for the Puppet ecosystem, it was refactored to support a more open plugin-based system. BlackBox can encrypt and decrypt individual files at will, but also provides a mechanism to call a text editor transparently, which decrypts the file, opens an editor, and then re-encrypts upon saving.
Outside of the general solutions above, there are also some solutions built to work with specific types of repositories. For example, starting with version 5.1, Ruby on Rails projects can include encrypted secrets within the repository using a system that sets up a master key outside of the repository.
Encrypting and committing your secret data to your repository can help keep your credentials up-to-date and in sync with the way the code uses them. This can avoid drift between changes in the confidential data format or labelling and the way that the code uses or accesses it. Changes can be made to the codebase without referencing an external resource.
Additionally, keeping your secrets with your code can simplify deployment considerably. Rather than pulling down information from multiple locations to get a fully functional system, the information is all packaged in a single unit, with some components requiring decryption. This can be very helpful if you do not have the infrastructure set up to support an external secret store or if you want to minimize the amount of coordination necessary to deploy your project.
The overall advantage of using a tool to encrypt sensitive information within a repository is that encryption is easy to implement without additional infrastructure or planning. Users can transition from storing secrets as plain text data to a secure, encrypted system in a few minutes. For projects with a single developer or a small, static team, these tools likely fill all secret management requirements without adding extensive complexity.
As with any solution, there are some trade-offs to this style of secret management.
Fundamentally, secrets are configuration data, not code. While the code deployed in various environments is likely the same, the configuration can vary quite a lot. By keeping secrets with the code in your repository, it becomes more difficult to maintain configuration across different environments and encourages credential reuse in ways that negatively impact security.
Similarly, configuring granular, multi-level access to encrypted secrets within a repository is often difficult. The required level of access control is often much more complex than what is easily modeled by the tools used to encrypt secrets in VCS, especially for large teams and projects. Bringing on collaborators or removing contributors from the project involves re-encrypting all of the files with sensitive data within the repository. While these utilities usually make it easy to change the encryption used to protect the files, the secrets within those files should also be rotated in these circumstances, which can be a difficult, manual process.
An important point that is often overlooked is that the keys used to decrypt the data are often stored alongside the encrypted content. On a developer’s laptop, the GPG keys that can decrypt sensitive data are often present and usable without any further input. You can mitigate this somewhat by using a GPG passphrase, but this is difficult to enforce for a large team. If a team member’s laptop is compromised, access to the most sensitive data in your project may be accessible as if it were in plain text.
In general, protecting secrets within a repository over a long period of time can be difficult. Simple operations like rolling back code changes can accidentally reintroduce access that was previously removed. If a private key is exposed, historical values may be recovered and decrypted from the repository history. Although the VCS history provides a log of encryption changes, there is no method of auditing secret access to help determine unusual access.
Many users’ first experience with more centralized secret management is with configuration management tools. Because these tools are responsible for coordinating the configuration of many different machines from a centralized location, some level of secret management is necessary to ensure that nodes can only access the values they require.
Chef encrypted data bags and chef-vault provide some integrated secret management features for infrastructure managed by Chef. Encrypted data bags are used to protect sensitive values from appearing in revision history or to other machines using shared secrets. Chef-vault allows secrets to be encrypted using the target machine’s public key instead, offering further security that isolates decryption capabilities to the intended recipients.
Similarly, Puppet’s Hiera key-value storage system can be used with Hiera eyaml to manage secrets securely for specific infrastructure components. Unlike some other systems, Hiera eyaml is aware of the syntax and structure of YAML, the data serialization format that Hiera uses, allowing it to encrypt just the sensitive values instead of the entire file. This makes it possible to work with files that contain encrypted data using normal tools for most tasks. Since the backends are pluggable, teams can implement GPG encryption to easily manage access.
SaltStack uses Pillars to store data designated for certain machines. To protect these items, users can encrypt the YAML values using GPG and then configure the GPG renderer to allow Salt to decrypt the values at runtime. Like Hiera eyaml, this system involves encrypting only the sensitive data rather than the full file, allowing normal file editing and diff tools to operate correctly.
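As a sketch of what this looks like in practice (the ciphertext below is a truncated placeholder), a pillar file can declare the GPG renderer on its first line so that Salt decrypts the armored block at runtime while the surrounding YAML stays in plain text:

```yaml
#!yaml|gpg
# Hypothetical pillar file: only the secret value is a GPG ciphertext.
app_name: myapp
db_password: |
  -----BEGIN PGP MESSAGE-----
  hQEMA1...truncated placeholder ciphertext...
  -----END PGP MESSAGE-----
```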
Ansible includes Ansible Vault, an encryption system and command line tool to encrypt sensitive YAML files within a playbook structure. Ansible can then transparently decrypt the secret files at runtime to combine the secret and non-secret data necessary to carry out given tasks. Ansible Vault encrypts entire files rather than individual values, so editing requires decryption and diff tools cannot show accurate change information. However, as of Ansible 2.3, single variables can be encrypted in variable files, giving users a choice in how they encrypt sensitive values.
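For example, the output of ansible-vault encrypt_string (available since Ansible 2.3) can be pasted into an otherwise plain variables file. In this sketch the variable names are hypothetical and the vault payload is a truncated placeholder:

```yaml
# Hypothetical group_vars file mixing plain and vault-encrypted values.
app_port: 8080
db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  6231336539366234306139346433...truncated placeholder...
```

Ansible decrypts the !vault value transparently at runtime when the vault password is supplied, while diffs of the rest of the file remain meaningful.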
These solutions are well-suited for some of the challenges involved with managing secrets in configuration management contexts. They are able to orchestrate access to secrets by leveraging the existing infrastructure inventory system and role designations which define the type of access each machine requires. The same mechanisms that ensure that each machine gets the correct configuration can ensure that secrets are only delivered to hosts that require them.
Using tools native to your existing infrastructure management and deployment systems minimizes the operational costs of implementing encryption. It’s easier to migrate secrets to encryption using tooling native to your environment and it’s simpler to incorporate runtime decryption of secrets without additional steps. If you are already using a configuration management system, using their included secret management mechanisms will probably be the easiest first step towards protecting your sensitive data.
Tight integration means that users can manage their secrets with systems they already run, but it also means that these solutions are locked to their respective configuration management tools. Using most of these strategies in other contexts would be difficult or impossible, so you are adding a dependency on the configuration management tools themselves. The tight coupling to a single platform can also be problematic for external systems that require access to the data. Without an external API or, in some cases, a callable command, the secrets can be effectively “trapped” unless accessed through the configuration management system, which can be limiting.
Many of the disadvantages of storing encrypted secrets in your application repository also apply when storing secrets with your configuration management system. Instead of having laptops with your application repositories being a vector for compromise, any laptop or computer with the configuration management repository will likewise be vulnerable. Fundamentally, any system that has both the encrypted values and the decryption key will be vulnerable to this type of compromise.
A related concern is that, while the configuration management system can ensure secrets are only accessible to the correct machines, defining fine-grained access controls for team members is often more difficult. Some systems can only encrypt with a single password or key, limiting the ability to partition team members’ access to secrets.
An alternative to storing encrypted secrets alongside the code or in your configuration management system is to use a dedicated service to manage sensitive data for your infrastructure. These services encrypt and store sensitive data and respond to authorized requests with the decrypted values. This allows developers to move their sensitive material out of their repositories and into a system designed to orchestrate encryption, authorization, and authentication for both human users and applications.
Dedicated secret management services like HashiCorp’s Vault offer great flexibility and powerful features to protect sensitive material while not sacrificing usability. Vault protects data at rest and in transit and is designed to use various “backends” to expose different functionality and manage the complexities of encryption, storage, and authentication. Several key features include the ability to configure dynamic secrets (short term credentials for connected services, created on the fly), data encryption as a service (encrypting and storing data from external services and serving the decrypted content again when requested to do so by an authorized party), and lease-based secret management (providing access for a given amount of time, after which access is automatically revoked). Vault’s pluggable architecture means that storage backends, authentication mechanisms, etc. are all swappable as business needs change.
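To give a sense of how access is governed in Vault, policies are short declarative documents that grant specific capabilities on specific paths. This is a minimal sketch with hypothetical path names, not a production policy:

```hcl
# Hypothetical policy: the web application may read only its own static
# secrets and request short-lived database credentials.
path "secret/data/webapp/*" {
  capabilities = ["read"]
}

path "database/creds/webapp-role" {
  capabilities = ["read"]
}
```

Attaching a policy like this to an authentication role means a compromised web server can, at worst, read its own secrets, rather than everything in the store.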
Square’s Keywhiz secret management system is another dedicated service used to provide general security for sensitive data. Like Vault, Keywhiz exposes APIs that clients and users can use to store and access secrets. One unique feature that Keywhiz offers is the ability to expose secrets using a FUSE filesystem, a virtual filesystem that clients can mount to access the sensitive data as pseudo-files. This mechanism allows many different types of programs to access the data they need without the help of an agent or wrapper and it allows administrators to lock down access using normal Unix filesystem permissions.
Pinterest’s Knox is another service for managing secrets. It provides many of the same features as Vault and Keywhiz. One feature not found in the other systems is the ability to rotate keys over time by assigning explicit states to key versions. A key version can be marked as primary to indicate that it is the current preferred secret, active to indicate that the version can still be used, or inactive to disable the version. This system lets administrators roll keys across a fleet of machines over time without disrupting services.
Dedicated secret management services have many compelling advantages over other systems. Offloading the complexity of securing and managing sensitive data to a standalone system removes the need to address those concerns within application and configuration management repositories. This separation of responsibilities simplifies the operational security model by centralizing secret storage and governing access through strictly controlled interfaces. By providing generic interfaces for interacting with the system, authorized users or clients can access their secrets regardless of the configuration management system or VCS used.
From an administrative perspective, secret management systems provide many unique features not available in other tools. Easy rotation of encryption keys, as well as the underlying secrets they protect, is incredibly useful for large deployments and complex systems that require coordinating many different sensitive values. Access can be regulated and revoked easily without deploying code or making any fleet-wide changes. Features like dynamic secrets give secret management servers access to external services like databases so they can create per-use credentials on demand. Short-term, lease-based access to secrets functions as an automatic mechanism for limiting or expiring access without requiring explicit revocation.
One of the most important improvements that centralized secret management provides is auditability. Each of the systems mentioned above maintains extensive records of when secrets are added, requested, accessed, or modified. This can help spot anomalies and detect suspicious behavior, and can also help assess the extent of any exposure in the event of a compromise. Having a holistic view of your organization’s sensitive data, the policies set to control access, and information about every successful and attempted change or retrieval puts teams in a good position to make informed decisions about infrastructure security.
The main disadvantage of a centralized secret management system is the additional overhead it requires, both in terms of infrastructure and management.
Setting up a centralized system requires a good deal of planning, testing, and coordination prior to deployment into a production environment. Once the infrastructure is up and running, clients must be updated to query the secret management server’s APIs or an agent process must be configured to obtain secrets on behalf of the processes that require it. Policies must be established to dictate which applications, infrastructure, and team members should have access to each protected value.
Due to the value of the data it protects, the secret management server becomes one of the most security-critical components of your infrastructure. While centralization minimizes the surface area you need to protect, it also makes the system itself a high-value target for malicious actors. Many solutions include features like lock-down modes, key-based restarts, and audit logs, but unauthorized access to an active, decrypted secret store would require extensive remediation.
Beyond the initial cost of configuration and the security considerations, serving all sensitive data from a single service introduces an additional mission-critical component into your infrastructure. Since secrets are often required for bootstrapping new applications and for routine operations, secret management downtime could cause major interruptions that may not be resolvable until the service is restored. Availability is crucial for a system responsible for coordinating so many different components.
As you evaluate different methods of protecting sensitive data and coordinating the necessary access during deployments, it’s important to consider the balance between security, usability, and the needs of your project. The solutions described above span a wide range of use cases and offer varying degrees of scalability and protection.
The best choice for your project or organization will likely depend on the amount of sensitive data you have to protect, the size of your team, and the resources available to manage different solutions. In most cases, it makes sense to start small and to reassess your secret management needs as your circumstances change. While you may only need to protect a few secrets and collaborate with a small team now, in the future the trade-offs for dedicated solutions might become more compelling.
In this case, I am installing on a personal VM. I want to try this locally before deploying to DigitalOcean.
First, I installed the LAMP stack from this guide here. Here are all the commands I entered, in the exact order:
1. "sudo apt-get update"
2. "sudo apt-get install apache2 -y" (Installed Apache)
3. Added "ServerName" to the apache2.conf (apache2.conf looks like this: https://pastebin.com/fyBLLbEw)
4. "sudo apache2ctl configtest" - Got "Syntax OK"
5. "sudo systemctl restart apache2"
6. "sudo ufw allow in 'Apache Full'"
7. "sudo apt-get install mysql-server -y" (Root password is "root")
8. "sudo apt-get install php libapache2-mod-php php-mcrypt php-mysql"
9. "sudo nano /etc/apache2/mods-enabled/dir.conf"
10. "sudo systemctl status apache2" - Result: https://pastebin.com/6rFTjVnN
I can successfully see my webserver along with my PHP info page.
Then I installed NextCloud from the page here.
1. "cd /tmp"
2. "curl -LO https://download.nextcloud.com/server/releases/nextcloud-12.0.0.tar.bz2"
3. "curl -LO https://download.nextcloud.com/server/releases/nextcloud-12.0.0.tar.bz2.sha256"
4. "shasum -a 256 -c nextcloud-12.0.0.tar.bz2.sha256 < nextcloud-12.0.0.tar.bz2" - Came back "OK"
5. "rm nextcloud-12.0.0.tar.bz2.sha256"
6. "sudo tar -C /var/www -xvjf /tmp/nextcloud-12.0.0.tar.bz2"
7. "nano /tmp/nextcloud.sh" - Created a Script for the Install, Here it is: https://pastebin.com/zCEeCNVG
8. "sudo nano /etc/apache2/sites-available/nextcloud.conf"
9. "sudo a2ensite nextcloud"
10. "sudo a2enmod rewrite"
11. "sudo apt-get update"
12. "sudo apt-get install php-bz2 php-curl php-gd php-imagick php-intl php-mbstring php-xml php-zip"
13. "sudo systemctl reload apache2"
14. "mysql -u root -p"
15. "GRANT ALL ON nextcloud.* to 'nextcloud'@'localhost' IDENTIFIED BY 'nextcloud';"
16. "FLUSH PRIVILEGES;"
17. "exit"
I can see the NextCloud installation at http://IP_ADDRESS/nextcloud/. I did not set up NextCloud at this point; I just visited the page to check that it works.
Then I installed Gitlab from this guide here
1. "sudo apt-get update"
2. "sudo apt-get install ca-certificates curl openssh-server postfix"
3. "cd /tmp"
4. "curl -LO https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh"
5. "sudo bash /tmp/script.deb.sh"
6. "sudo apt-get install gitlab-ce"
7. "sudo gitlab-ctl reconfigure"
8. "sudo ufw allow http"
9. "sudo ufw allow OpenSSH"
10. "sudo reboot"
Before, when I visited http://IP_ADDRESS/, I would get the default Apache page as shown here. Now when I visit, I get the GitLab page as shown here.
Next, I used this guide here by Gitlab to try to set up Gitlab with Apache
Gitlab Config (gitlab.rb) before any edits: https://pastebin.com/uH1Bqza1
1. "sudo nano /etc/gitlab/gitlab.rb"
* Line 13 - Changed "external_url" to "http://gitpage.mysite.com"
* Line 766 - Changed to: web_server['external_users'] = ['www-data']
* Line 779 - Changed to: nginx['enable'] = false
* Line 818 - Changed to: nginx['listen_port'] = 8181
* Line 100 - Changed to: gitlab_rails['trusted_proxies'] = [ '192.168.1.0/24', '192.168.2.1', '2001:0db8::/32' ]
2. "sudo gitlab-ctl reconfigure"
At this point, when visiting http://IP_ADDRESS/, I get this error here.
3. Created a file called "gitlab-apache24.conf" in /etc/apache2/sites-available (using "sudo nano /etc/apache2/sites-available/gitlab-apache24.conf")
* New Server Configuration: https://pastebin.com/htnLUq7u
4. "sudo a2ensite gitlab-apache24.conf"
5. "sudo a2enmod proxy"
6. "sudo a2enmod proxy_http"
Now I am completely stuck. If I visit any page other than the actual subdomain (like gitlab.mysite.com, whatever.mysite.com), I get the standard “Apache2 Installed” page (and can visit NextCloud by adding “/nextcloud” to the URL). But if I try to visit the actual domain “gitpage.mysite.com”, I get an “Unable to connect” error as shown here.
I have been trying to solve this problem for a long time, but I keep hitting a brick wall. Please help!
I’m using a DigitalOcean droplet running Ubuntu server with Nginx, PHP, and Laravel, and have configured SSH access from the server to Bitbucket.
Does anyone here know a solution for my problem?
Concourse CI is a modern, scalable continuous integration system designed to automate testing pipelines with a composable, declarative syntax. In previous guides, we installed Concourse on an Ubuntu 16.04 server and secured the web UI with an SSL certificate from Let’s Encrypt.
In this guide, we will demonstrate how to use Concourse to automatically run your project’s test suite when new changes are committed to the repository. To demonstrate, we will configure a continuous integration pipeline for a “hello world” application written with Hapi.js, a Node.js web framework.
To make sure the build and testing procedures are always kept in sync with the code they are associated with, we will add the CI definitions to the application repository itself. Afterwards, we will use Concourse’s fly
command line tool to load the pipeline into Concourse. Finally, we will push our changes back up to the repository both to save them more permanently and to kick off a new test in the new CI workflow.
Before you begin, you will need an Ubuntu 16.04 server with at least 1GB of RAM. Complete the following guides to set up a non-root user, install and configure Concourse, install Nginx, obtain a TLS/SSL certificate, and set up a secure reverse proxy to the Concourse web UI. You will need a domain name pointed at your Concourse server to properly secure it:
In this tutorial, most of the work will be completed on your local computer rather than the Concourse server. As such, you will also need to make sure a few tools are available on your local machine. You will need a text editor (some examples you might find across various operating systems are nano
, vim
, TextEdit, Sublime Text, Atom, or Notepad) to create and modify files in the repository. You will also need to install and set up Git on your local system, which you can do by following our Contributing to Open Source: Getting Started with Git guide.
When you have set up your Concourse server and installed Git and a text editor on your local computer, continue below.
When we installed Concourse on the server in the prerequisites, we installed the fly
command line tool onto the server so that we could manage the Concourse instance from the command line. However, for daily use it is more convenient to install a copy of the fly
binary on your local system where your usual development tools and source code are available.
To get a local copy of fly
that matches your server version, visit your Concourse instance in your web browser:
https://your_concourse_url
If you are logged out or if you do not have a pipeline currently configured, links to download fly
for various platforms will be displayed in the center of the window:
If you are logged in and have a pipeline configured, download links for fly
will be available in the lower-right corner of the screen:
Click on the icon representing your local computer’s operating system to download the fly
binary.
Next, follow the platform specific instructions to set up fly
on your local system.
If your local computer runs Linux or macOS, follow these instructions after downloading the appropriate binary.
First, mark the downloaded binary as executable. We will assume that you’ve downloaded the file to your ~/Downloads
directory, so adjust the download location if necessary:
- chmod +x ~/Downloads/fly
Next, install the binary to a location in your PATH by typing:
- sudo install ~/Downloads/fly /usr/local/bin
You can verify that the executable is available by typing:
- fly --version
Output
3.3.1
If you are able to display the version, fly
was installed successfully.
If your local computer runs Windows, hit the Windows key on your keyboard, type powershell, and hit ENTER.
In the window that appears, make a bin
folder by typing:
- mkdir bin
Next, move the fly.exe
file from your Downloads
folder to the new bin
folder by typing:
- mv Downloads/fly.exe bin
Check whether you have a PowerShell profile already available:
- Test-Path $profile
If the response is True
, you already have a profile.
If the response is False
, you will need to create one by typing:
- New-Item -path $profile -type file -force
Output
Directory: C:\User\Sammy\Documents\WindowsPowerShell
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 7/9/2017 5:46 PM 0 Microsoft.PowerShell_profile.ps1
Once you have a profile, edit it with your editor:
- notepad.exe $profile
In the editor window (which will be blank if you had to create your profile), add the following line:
$env:path += ";C:\Users\Sammy\bin"
Save and close the file when you are finished.
Next, set the execution policy to “RemoteSigned” for the current user to allow PowerShell to read the profile:
- Set-ExecutionPolicy -scope CurrentUser RemoteSigned
Finally, source the PowerShell profile by typing:
- . $profile
You should now be able to call the fly.exe
executable from any location. Test this by having the binary print its version:
- fly.exe --version
Output
3.3.1
Throughout this guide, you will need to replace each instance of the fly
command with fly.exe
to match the Windows command.
After installing fly
, log into your remote Concourse server so that you can manage your CI environment locally. A single fly
binary can be used to contact and manage multiple Concourse servers, so the command uses a concept called “targets” as a label to identify the server you want to send commands to.
We are using main as the target name for our Concourse server in this guide, but you can substitute whatever target name you want. Enter your Concourse server’s domain name complete with the https://
protocol specification after the -c
option to indicate your server location:
- fly -t main login -c https://example.com
You will be prompted to enter the username and password that you configured in the /etc/concourse/web_environment
file on the Concourse server:
Output
logging in to team 'main'
username: sammy
password:
target saved
Once you’ve authenticated, the fly
tool will create a configuration file called ~/.flyrc
to store your credentials for future commands.
Note: If you upgrade the version of Concourse later on, you can install the matching version of the fly
command by typing:
- fly -t main sync
This will update the fly
binary on your system while leaving your configuration intact.
Now that you have fly
set up on your system, we can move on to setting up the repository we will be using to demonstrate Concourse pipelines.
In your web browser, visit the “hello hapi” application on GitHub that will serve as our example. This application is a simple “hello world” program with a few unit and integration tests, written with Hapi.js, a Node.js web framework.
Since this example is used to demonstrate a variety of continuous integration systems, you may notice some files used to define pipelines for other systems. For Concourse, we will be creating the continuous integration pipeline in our own fork of the repository.
To create your fork of the repository, log in to GitHub and navigate to the project repository. Click the Fork button in the upper-right corner to make a copy of the repository in your account:
If you are a member of a GitHub organization, you may be asked where you would like to fork the repository. Once you select an account or organization, a copy of the repository will be added to your account.
Next, in a terminal on your local computer, move to your home directory:
- cd $HOME
Clone the repository to your local computer using the following command, substituting your own GitHub username:
- git clone git@github.com:your_github_user/hello_hapi
A new directory called hello_hapi
will be created in your home directory. Enter the new directory to get started:
- cd hello_hapi
We will be defining a continuous integration pipeline for the example project inside this repository. Before making any changes, it’s a good idea to create and switch to a new branch in Git to isolate our changes:
- git checkout -b pipeline
Output
Switched to a new branch 'pipeline'
Now that we have a new branch to work in, we can begin defining our continuous integration pipeline.
We will be defining our pipeline and all of its associated files within the project repository itself. This helps ensure that the continuous integration processes are always kept in sync with the code they test.
The test suite is already defined within a directory called test
. It includes one unit test and two basic integration tests. The command to run the tests is defined in the package.json
file under the name test
within the scripts
object. In an environment with npm
and Node.js installed, you can run the tests by typing npm test
(after installing the project dependencies with npm install
). These are the procedures we will need to replicate in our pipeline.
To get started, create a directory called ci
within the repository to house the continuous integration assets for the project. We will also create two subdirectories called ci/tasks
and ci/scripts
to hold the individual task definitions that the pipeline references and the scripts that the tasks call.
Create the necessary directory structure by typing:
- mkdir -p ci/{tasks,scripts}
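The brace expansion in that command creates both nested directories at once (brace expansion is a bash feature). If you would like to verify the resulting layout, here is a standalone sketch that recreates and lists it:

```shell
#!/usr/bin/env bash
# Recreate the CI skeleton in a scratch directory and list it.
cd "$(mktemp -d)"
mkdir -p ci/{tasks,scripts}
find ci -type d | sort
# Lists: ci, ci/scripts, ci/tasks
```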
Next, we can begin to create the individual files that Concourse will use.
Create and open a file called pipeline.yml
within the ci
directory with your text editor (we will show the nano
editor in this guide, but you should substitute the text editor for your system). As the extension indicates, Concourse files are defined using the YAML data serialization format:
- nano ci/pipeline.yml
We can now start setting up our pipeline.
Inside the file, we will begin by defining a new resource type:
---
resource_types:
- name: npm-cache
type: docker-image
source:
repository: ymedlop/npm-cache-resource
tag: latest
To separate the processes in continuous integration from the data that passes through the system, Concourse offloads all state information to abstractions called resources. Resources are external sources of data that Concourse can use to pull information from or push information to. This is how all data enters the continuous integration system and how all data is shared between jobs. Concourse does not provide any mechanism for storing or passing state internally between jobs.
The resource_types heading allows you to define new kinds of resources that you can use in your pipeline, such as email notifications, Twitter integrations, or RSS feeds. The new resource type we are defining tells Concourse how to use npm-cache-resource, a resource provided as a Docker image that allows Concourse to install the dependencies of a Node.js project and share them between jobs.
Next, we need to define the actual resources for the pipeline:
. . .
resources:
- name: hello_hapi
type: git
source: &repo-source
uri: https://github.com/your_github_user/hello_hapi
branch: master
- name: dependency-cache
type: npm-cache
source:
<<: *repo-source
paths:
- package.json
This section defines two resources that the Concourse CI jobs need to complete their tasks. Concourse uses resource definitions to watch upstream systems for changes and to understand how to pull down the resource when jobs require them. By default, Concourse checks each resource for new versions once per minute. Jobs requiring the resource that have the “trigger” option set will automatically kick off a new build when a new version is available.
The first resource represents your fork of the hello_hapi
repository on GitHub. The “source” line contains a YAML anchor called “repo-source”, which labels the element for future reference. This lets us include the content of the element (the “uri” and “branch” definitions) in a different location later in the document.
The second resource, called “dependency-cache”, uses the “npm-cache” resource type we defined to download the project’s dependencies. In the “source” specification of this resource, we use the <<: *repo-source
line to reference and extend the elements pointed to by the &repo-source
anchor. This inserts the uri and branch settings from our application repository resource into this second resource. An additional element called “paths” points to the package.json
file where the project dependencies are defined.
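Put another way, once the anchor and merge key are resolved, the “dependency-cache” resource behaves as if its source had been written out in full (with your GitHub username substituted):

```yaml
# Equivalent expanded form of the dependency-cache resource after the
# "<<: *repo-source" merge key is resolved.
- name: dependency-cache
  type: npm-cache
  source:
    uri: https://github.com/your_github_user/hello_hapi
    branch: master
    paths:
      - package.json
```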
Finally, we define the actual continuous integration processes using Concourse jobs:
. . .
jobs:
- name: Install dependencies
plan:
- get: hello_hapi
trigger: true
- get: dependency-cache
- name: Run tests
plan:
- get: hello_hapi
trigger: true
passed: [Install dependencies]
- get: dependency-cache
passed: [Install dependencies]
- task: run the test suite
file: hello_hapi/ci/tasks/run_tests.yml
In this section, we define two jobs, each of which consists of a name and a plan. Each of our plans, in turn, contains “get” and “task” elements. The task items specify how to execute an action while the get items indicate the resource dependencies of the task.
The first job does not have any task statements. This is a bit unusual, but makes sense when we look at what it is doing and how it can be used. The first get statement requires the hello_hapi
resource and specifies the trigger: true
option. This tells Concourse to automatically fetch the repository and begin a new build of this job every time a new commit is detected in the hello_hapi
repository.
The second get statement in the first job (get: dependency-cache
) requires the resource we defined that downloads and caches the project’s Node.js dependencies. This statement evaluates the requirements found in the package.json
file and downloads them. With no tasks defined for this job, no other actions are taken, but the downloaded dependencies will be available to subsequent jobs.
Note: In this specific example, there is only a single additional job, so the benefits of caching the Node.js dependencies as an independent step aren’t fully realized (adding the get statements to the testing job that follows would be enough to download the dependencies). However, almost all work with Node.js requires the project dependencies, so if you had separate jobs that could potentially run in parallel, the benefits of a separate dependency cache would become clearer.
The second job (name: Run tests
) starts off by declaring the same dependencies with one notable difference. The “passed” constraint causes the get statements to only match resources that have successfully traversed previous steps in the pipeline. This is how dependencies between jobs are formed to chain together pipeline processes.
After the get statements, a task called “run the test suite” is defined. Rather than defining the steps to complete inline, it tells Concourse to pull the definition from a file in the repository it fetched. We will create this file next.
When you are finished, the complete pipeline should look like this:
---
resource_types:
- name: npm-cache
type: docker-image
source:
repository: ymedlop/npm-cache-resource
tag: latest
resources:
- name: hello_hapi
type: git
source: &repo-source
uri: https://github.com/your_github_user/hello_hapi
branch: master
- name: dependency-cache
type: npm-cache
source:
<<: *repo-source
paths:
- package.json
jobs:
- name: Install dependencies
plan:
- get: hello_hapi
trigger: true
- get: dependency-cache
- name: Run tests
plan:
- get: hello_hapi
trigger: true
passed: [Install dependencies]
- get: dependency-cache
passed: [Install dependencies]
- task: run the test suite
file: hello_hapi/ci/tasks/run_tests.yml
Save and close the file when you are finished.
While the pipeline definition outlined the structure of our continuous integration process, it deferred the actual testing task to another file. Extracting tasks helps keep the pipeline definition concise and easier to read, but it does require you to read multiple files to understand the entire process.
Open a new file under the ci/tasks
directory called run_tests.yml
:
- nano ci/tasks/run_tests.yml
To define a task, you need to specify the type of operating system the worker needs to have, define the image used to run the tasks, name any input or output the task will use, and specify the command to run.
Paste the following contents to set up our testing task:
---
platform: linux
image_resource:
type: docker-image
source:
repository: node
tag: latest
inputs:
- name: hello_hapi
- name: dependency-cache
run:
path: hello_hapi/ci/scripts/run_tests.sh
In the above configuration, we specify that this task requires a Linux worker. The Concourse server itself can satisfy this requirement with no additional configuration.
Next, we indicate an image that will be used by the worker to run the task. Although you can create and use your own image types, in practice, this will almost always be a Docker image. Since our repository is a Node.js application, we select the latest “node” image to run our tests since it has the appropriate tooling already installed.
Concourse tasks can specify inputs and outputs to indicate the resources they need access to and the artifacts they will produce. The inputs correspond to the resources pulled down at the “job” level earlier. The contents of these resources are made available to the task environment as top-level directories that can be manipulated during the task run. Here, the application repository will be available under the hello_hapi
directory and the Node.js dependencies will be available under a directory called dependency-cache
. Your execution step may need to move files or directories to their expected location at the start of tasks and place artifacts in output locations at the end of tasks.
Finally, the run item lists the path to the command to run. Each task can only be a single command with arguments, so while it's possible to construct a command inline by composing a bash string, it's more common to point the task to a script file. In this case, we point to a script in the hello_hapi input directory located at hello_hapi/ci/scripts/run_tests.sh. We will create this script next.
Save and close the file when you are finished.
Next, we need to create the script that the task will execute. Open a new file called run_tests.sh located at ci/scripts/run_tests.sh:
- nano ci/scripts/run_tests.sh
This script will manipulate the inputs of the testing environment to move items to their correct location. It will then run the test suite defined in the repository by running npm test.
Paste the following into the new file:
#!/usr/bin/env bash
set -e -u -x
mv dependency-cache/node_modules hello_hapi
cd hello_hapi && npm test
First, we indicate that this script should be executed by the Docker container's bash interpreter. The set options modify the shell's default behavior so that any error or unset variable stops script execution, and each command is printed as it is executed. These options make the script safer and give greater visibility for debugging purposes.
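The effect of these set options can be seen in a tiny standalone sketch. This is a hypothetical demonstration, not part of the pipeline; each subshell below stands in for a script running with the corresponding option:

```shell
# Hypothetical demonstration of set -e and set -u (not part of the pipeline).
output=$( (set -e; false; echo "unreachable") 2>/dev/null || true )
if [ -z "$output" ]; then
  echo "set -e stopped the subshell at the failing command"
fi
(set -u; echo "$not_defined") 2>/dev/null || echo "set -u rejects unset variables"
```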
The first command that we run moves the cached dependencies, located in the node_modules directory, from within the dependency-cache directory to the hello_hapi directory. Remember, both of these directories are available because we specified them as inputs in the task definition. This new location is where npm will look for the downloaded dependencies it requires.
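As a sketch of what this looks like on disk, the following creates stand-in input directories in a scratch location and performs the same mv (the directory names match the task inputs; the dependency file is a made-up placeholder):

```shell
# Sketch of the directory layout the task sees, and the move run_tests.sh performs.
# All paths are scratch stand-ins for the Concourse inputs.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p dependency-cache/node_modules hello_hapi   # the two task inputs
touch dependency-cache/node_modules/placeholder.js  # stand-in cached dependency
mv dependency-cache/node_modules hello_hapi         # same mv as in run_tests.sh
ls hello_hapi/node_modules
```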
Afterwards, we move into the application repository and run npm test to execute the defined test suite.
When you are finished, save and close the file.
Before moving on, mark the new script as executable so that it can be run directly:
- chmod +x ci/scripts/run_tests.sh
Our pipeline and all of the associated files have now been defined.
Before we merge the pipeline branch back into the master branch and push it up to GitHub, we should go ahead and load our pipeline into Concourse. Concourse will watch our repository for new commits and run our continuous integration procedures when changes are detected.
Although we load the pipeline manually, Concourse reads the tasks and scripts from the directories within the repository as it executes the pipeline. Any changes to the pipeline definition itself will need to be reloaded into Concourse to take effect, but because we didn't define everything inline, changes to tasks or scripts will be picked up automatically when they arrive as part of a commit.
To set up a new pipeline, target your Concourse server with the fly command using the set-pipeline action. We need to pass the name of the new pipeline with the -p option and the pipeline configuration file with the -c option:
- fly -t main set-pipeline -p hello_hapi -c ci/pipeline.yml
You will be prompted to confirm the configuration before continuing. Type y and hit ENTER:
Output
. . .
apply configuration? [yN]: y
pipeline created!
you can view your pipeline here: https://example.com/teams/main/pipelines/hello_hapi
the pipeline is currently paused. to unpause, either:
- run the unpause-pipeline command
- click play next to the pipeline in the web ui
As the output indicates, the pipeline has been accepted but is currently paused. You can unpause the pipeline with either fly or the web UI. We will use the web UI.
In your web browser, visit your Concourse server and log in. You should see your new pipeline defined visually:
The pending jobs are represented by grey boxes and the resources are smaller, dark blocks. Jobs triggered by resource changes are connected by solid lines while non-triggering resources use broken lines. Resources flowing out of jobs indicate that a passed constraint has been set on the next job.
The blue header indicates that the pipeline is currently paused. Click the menu icon (three stacked horizontal lines) in the upper-left corner to open the menu. You should see an entry for your pipeline (you may need to log out and back in if the pipeline isn't visible). Click the blue play icon next to the pipeline to unpause:
The pipeline should now be unpaused and will begin to operate.
At the very beginning, various resources and jobs may turn orange, indicating that errors occurred. This happens because various Docker images need to be downloaded and the pipeline branch still needs to be merged into the master branch of our repository to make the task and script files available.
Now that the continuous integration process is defined, we can commit it to our git repository and add it to Concourse.
Add the new ci directory to the staging area by typing:
- git add ci
Verify the files to be committed by checking the status:
- git status
Output
On branch pipeline
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
new file: ci/pipeline.yml
new file: ci/scripts/run_tests.sh
new file: ci/tasks/run_tests.yml
Commit the changes by typing:
- git commit -m 'Add Concourse pipeline'
The changes are now committed to our pipeline branch. We can merge the branch back into the master branch by switching branches and merging:
- git checkout master
- git merge pipeline
Now, push the master branch with the new changes back up to GitHub:
- git push origin master
The commit will kick off a new build within sixty seconds and Concourse will have access to the pipeline tasks and scripts after pulling down the changes.
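If you want to rehearse this branch-and-merge flow without touching your real repository, the following runs the equivalent commands in a throwaway local repository (all file contents are stand-ins, and no remote or push is involved):

```shell
# Throwaway local rehearsal of the commit/merge flow above.
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'app' > index.js
git add . && git commit -qm 'initial commit'
git checkout -qb pipeline                 # branch holding the CI files
mkdir -p ci && echo 'jobs: []' > ci/pipeline.yml
git add ci && git commit -qm 'Add Concourse pipeline'
git checkout -q -                         # back to the default branch
git merge -q pipeline                     # brings ci/ into the default branch
ls ci
```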
Back in the Concourse web UI, a new build will begin progressing through the pipeline within the next minute:
The yellow outline indicates that the job is currently in progress. To monitor the progress, click on the Run tests job to see the current output. Once the job is finished, the full output will be available and the job should turn green:
Click the home icon to go back to the main pipeline screen. The green status of each job indicates that the latest commit has passed all stages of the pipeline:
The pipeline will continue to monitor the repository and automatically run new tests as changes are committed.
In this guide, we set up a Concourse pipeline to automatically monitor a repository for changes. When changes are detected, Concourse pulls down the latest version of the repository and uses a Docker container to install and cache the project dependencies. The build then progresses to the testing stage where the dependencies are copied over and the repository’s test suite is run to check whether any breaking changes were introduced.
Concourse provides a lot of flexibility and power to define isolated testing procedures and store them within the repository itself. If you’d like to learn more about how to leverage Concourse for your own projects, check out the official documentation.
Drone is a continuous integration and delivery platform written in Go. Through integrations with many popular version control services, you can use it to build, test, and deliver software automatically whenever your code is updated.
In this tutorial, we will fork an example GitHub repository and use Drone to build and test the project.
Before starting this tutorial, you’ll need Drone installed, configured, and linked to your GitHub account. The following tutorials will get you there:
When complete, you should be logged in to Drone, at a screen similar to the following:
This is Drone’s dashboard. It shows that we’re logged in, but have no repositories set up in Drone. Let’s create a repository now.
First, we'll need a GitHub repository with some code to build and test. You can use Drone with many different version control services, but in the prerequisites we linked Drone with a GitHub account, so we'll use that throughout this tutorial. Log in to GitHub and navigate to the following repo:
https://github.com/do-community/hello_hapi
Click the Fork button in the upper-right corner to copy this repository to your own account. If you have access to multiple GitHub organizations, you may be asked to choose where to fork the repository to. Choose your normal user account. After a few moments, you’ll be taken to the hello_hapi repository that has been copied to your account.
Next, we’ll take a look at how we configure Drone to build and test our code.
Drone looks for a configuration file named .drone.yml in your repository to determine how it should handle your code. This file is already included in the repository we just forked:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
  test:
    image: node:latest
    commands:
      - npm run test
This is a YAML file that defines a pipeline. A pipeline is a continuous integration process that runs multiple steps, one after the other. In our case, we have a two-step pipeline.
The first step, called build, will use the node:latest Docker image to run npm install in our repository. This will download and install all of the libraries needed to run the tests.
The next step is called test. It uses the same Docker image to run our test suite. Often, you would run both the build and test commands in one step, but we've split them up to better demonstrate pipelines.
Note that the steps in a pipeline all share the same workspace, so files created in the first step will be available in later steps. Drone has many more options that can be configured with .drone.yml, which you can read about in the Drone documentation.
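A quick way to picture the shared workspace is to run two subshells against one scratch directory, each standing in for a pipeline step (a simulation, not Drone itself):

```shell
# Hypothetical stand-in for two pipeline steps sharing one workspace directory.
workspace=$(mktemp -d)
( cd "$workspace" && echo 'built artifact' > build.out )  # "build" step writes a file
( cd "$workspace" && cat build.out )                      # "test" step reads it back
```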
Next, we’ll tell Drone to watch for changes to our repository, and then trigger a build.
Log in to Drone, if you aren’t already. The home page will look fairly sparse until we set it up. The empty sidebar prompts us to Activate your repositories to get started.
Click the Activate link to show a list of all your GitHub repositories:
Find the hello_hapi repo and click the gray toggle in the right-hand column to activate it. The toggle will flip and turn green. Behind the scenes, Drone will use GitHub’s API to make sure it receives notifications whenever our code changes.
Return to the home page dashboard by clicking the Drone logo in the upper-left corner of the screen, or by using the menu in the upper-right corner next to your user icon:
The dashboard will now have our new repository listed in the left-hand column. There’s no status information yet, because we haven’t run a build:
Click the hello_hapi repository name to enter a detailed view for the repository. It will have some tabs where we can update settings, add secrets like tokens and passwords, and get embeddable build status badges. By default we’re on the Builds tab, and no builds are listed yet.
Let’s trigger a build now.
Leave your Drone page open, and navigate to the hello_hapi GitHub repository in another tab or window. We’re going to add a file to the project in order to trigger a build. Any file will do. Click the Create new file button up towards the top of the file list in your repo:
Choose any filename. In this case we chose trigger-file. Enter any content:
Then, scroll down to the bottom of the content editor and click the Commit new file button:
Upon commit, GitHub will notify our Drone install of the change. Drone will then start a new build. Switch back to your Drone browser window. The interface should update fairly quickly, and a spinning arrow will indicate that a build is happening.
It may already be finished if you took a few moments to switch back to Drone. Let’s look at the build details next.
Click on the build to enter a detailed view. If the build is still in progress, you’ll be able to observe each pipeline step in real-time.
You can click the disclosure arrows for each build step to show more details. Here is the output of our test step:
If the step is still in progress, clicking the Follow button will show the output as it happens.
Note that there is a clone stage we didn't define in our .drone.yml file. This is always present and gives details on how Drone fetched your source code before the build.
In this tutorial, we forked a demonstration repository, explored the .drone.yml configuration file, and built and tested our repository with Drone.
For more information on configuring Drone to build, test, and deploy your project, refer to the Drone documentation.
Cachet is a self-hosted status page alternative to hosted services such as StatusPage.io and Status.io. It helps you communicate the uptime and downtime of your applications and share information about any outages.
It is written in PHP, so if you already have a LAMP or LEMP server, it is easy to install. It has a clean interface and is designed to be responsive so it can work on all devices. In this tutorial, we’ll set up a status page with Cachet on Debian. The software stack we’ll use is:
Note that Cachet does not monitor your websites or servers for downtime; Cachet records incidents, which can be updated manually via the web interface or with Cachet’s API. If you are looking for monitoring solutions, check out the Building for Production: Web Applications – Monitoring tutorial.
To follow this tutorial, you will need:
One Debian 8 server set up by following the Initial Server Setup with Debian 8 tutorial, including a sudo non-root user. Cachet will work with 512MB of memory, but 1GB or more will give the best performance.
A fully qualified domain name (FQDN) with an A record pointing your domain to your server’s IPv4 address. You can purchase a FQDN on Namecheap or get one for free on Freenom, and you can follow this hostname tutorial for details on how to set up DNS records.
Nginx installed and set up with Let’s Encrypt. You can install Nginx by following this How To Install Nginx on Debian 8 tutorial, then set up Let’s Encrypt by following the first two steps of How To Secure Nginx with Let’s Encrypt on Debian 8. The rest of the steps can be skipped because we will create our own configuration file for Cachet.
Composer installed by following steps 1 and 2 of How To Install and Use Composer on Debian 8.
Git installed by following step 1 of How To Install Git on Debian 8, so you can pull Cachet’s source from GitHub.
An SMTP server, so Cachet can send emails for incidents to subscribers and password reminders to users created in Cachet’s interface. You can use Postfix as a Send-Only SMTP Server, for example, or use a third party provider like Mailgun.
The very first thing to do is create a separate user account to run Cachet. This will have the added benefit of security and isolation.
- sudo useradd --create-home --shell /bin/bash cachet
This command will create a user named cachet with a home directory in /home/cachet, whose shell will be set to /bin/bash. The default shell is /bin/sh, but it doesn't provide enough information in its prompt. The cachet user will be passwordless and will have privileges only for the components Cachet uses.
Now that the user is created, let’s install the PHP dependencies.
Next, we need to install Cachet's dependencies, which are a number of PHP packages as well as wget and unzip, which Composer uses to download and decompress PHP libraries.
- sudo apt-get install \
-     php5-fpm php5-curl php5-apcu php5-readline \
-     php5-mcrypt php5-cli php5-gd php5-sqlite \
-     wget unzip
You can learn more about any individual package from the official PHP Extensions List.
Let's now configure php-fpm, the FastCGI Process Manager. Nginx will use this to proxy requests to Cachet.
First, create the file that will host the information for Cachet that php-fpm needs. Open /etc/php5/fpm/pool.d/cachet.conf with nano or your favorite editor.
- sudo nano /etc/php5/fpm/pool.d/cachet.conf
Paste in the following:
[cachet]
user = cachet
group = cachet
listen.owner = www-data
listen.group = www-data
listen = /var/run/php5-fpm-cachet.sock
php_admin_value[disable_functions] = exec,passthru,shell_exec,system
php_admin_flag[allow_url_fopen] = off
request_terminate_timeout = 120s
pm = ondemand
pm.max_children = 5
pm.process_idle_timeout = 10s
pm.max_requests = 500
chdir = /
Save and close the file.
You can read more about these settings in the article on How To Host Multiple Websites Securely With Nginx And Php-fpm, but here’s what each line in this file is for:
[cachet] is the name of the pool. Each pool must have a unique name.
user and group are the Linux user and group under which the new pool will run. This is the same user we created in Step 1.
listen.owner and listen.group define the ownership of the listener, i.e. the socket of the new php-fpm pool. Nginx must be able to read this socket, so we're using the www-data user and group.
listen specifies a unique location of the socket file for each pool.
php_admin_value allows you to set custom PHP configuration values. Here, we're using it to disable functions which can run Linux commands (exec, passthru, shell_exec, system).
php_admin_flag is similar to php_admin_value, but it is just a switch for boolean values, i.e. on and off. We'll disable the PHP setting allow_url_fopen, which allows a PHP script to open remote files and could be used by an attacker.
The pm option allows you to configure the performance of the pool. We've set it to ondemand, which keeps memory usage low and is a reasonable default. If you have plenty of memory, you can set it to static. If you have a lot of CPU threads to work with, dynamic might be a better choice.
The chdir option should be /, which is the root of the filesystem. This shouldn't be changed unless you use another related option (chroot).
Restart php-fpm for the changes to take effect.
- sudo systemctl restart php5-fpm
If you haven't already, enable the php-fpm service so that it starts automatically when the server is rebooted:
- sudo systemctl enable php5-fpm
Now that the general PHP packages are installed, let’s download Cachet.
Cachet’s source code is hosted on GitHub. That makes it easy to use Git in order to download, install, and — as we’ll see later — upgrade it.
The next few steps should be followed as the cachet user, so switch to it.
- sudo su - cachet
Clone Cachet's source code into a new directory called www.
- git clone https://github.com/cachethq/Cachet.git www
Once that’s done, navigate into the new directory where Cachet’s source code lives.
- cd www
From this point on, you have all the history of Cachet’s development, including Git branches and tags. You can see the latest stable release from Cachet’s releases page, but you can also view the Git tags in this directory.
At publication time, the latest stable version of Cachet was v2.3.11. Use Git to check out that version:
- git checkout v2.3.11
Next, let’s get familiar with Cachet’s configuration file.
Cachet requires a configuration file called .env, which must be present for Cachet to start. In it, you can configure the environment variables that Cachet uses for its setup.
Let's copy the configuration example that comes with Cachet to use as a starting point.
- cp .env.example .env
There are two bits of configuration we’ll add here: one to configure the database and one to configure a mail server.
For the database, we will use SQLite. It’s easy to configure and doesn’t require installation of any additional server components.
First, create the empty file that will host our database:
- touch ./database/database.sqlite
Next, open .env with nano or your favorite editor in order to configure the database settings.
- nano .env
Because we'll be using SQLite, we'll need to remove a lot of settings. Locate the block of settings that begins with DB_:
. . .
DB_DRIVER=mysql
DB_HOST=localhost
DB_DATABASE=cachet
DB_USERNAME=homestead
DB_PASSWORD=secret
DB_PORT=null
DB_PREFIX=null
. . .
Delete everything except for the DB_DRIVER line, and change it from mysql to sqlite.
. . .
DB_DRIVER=sqlite
. . .
Note: You can check Cachet’s database options for all the possible database driver names if you are using another database, like MySQL or PostgreSQL.
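If you'd rather script the edit, the same trim can be expressed with sed. The sketch below operates on a stand-in .env in a temporary directory, so your real file is untouched (the DB_ lines are copied from the example settings above):

```shell
# Scripted version of the .env trim above, against a stand-in copy.
cd "$(mktemp -d)"
printf '%s\n' 'DB_DRIVER=mysql' 'DB_HOST=localhost' 'DB_DATABASE=cachet' \
  'DB_USERNAME=homestead' 'DB_PASSWORD=secret' 'DB_PORT=null' 'DB_PREFIX=null' > .env
sed -i '/^DB_/{/^DB_DRIVER/!d}' .env              # drop every DB_ line except DB_DRIVER
sed -i 's/^DB_DRIVER=.*/DB_DRIVER=sqlite/' .env   # switch the driver to sqlite
cat .env
```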
Next, you'll need to fill in your SMTP server details for the MAIL_* settings:
. . .
MAIL_HOST=smtp.example.com
MAIL_PORT=25
MAIL_USERNAME=smtp_username
MAIL_PASSWORD=smtp_password
MAIL_ADDRESS=notifications@example.com
MAIL_NAME="Status Page"
. . .
Where:
MAIL_HOST should be your mail server's URL.
MAIL_PORT should be the port the mail server listens on (usually 25).
MAIL_USERNAME should be the username for the SMTP account (usually the whole email address).
MAIL_PASSWORD should be the password for the SMTP account.
MAIL_ADDRESS should be the email address from which notifications to subscribers will be sent.
MAIL_NAME is the name that will appear in the emails sent to subscribers. Note that any values containing spaces should be wrapped in double quotes.
You can learn more about Cachet's mail drivers in the mail.php source code and the corresponding mail documentation from Laravel.
After you finish editing the file, save and exit. Next, you need to set up Cachet’s database.
The PHP libraries that Cachet depends on are handled by Composer. First, make sure you are in the right directory.
- cd /home/cachet/www
Then run Composer and install the dependencies, excluding the ones used for development purposes. Depending on the speed of your Internet connection, this may take a moment.
- composer install --no-interaction --no-dev -o --no-scripts
Create the database schema and run the migrations.
- php artisan migrate
Note: In the latest stable version (2.3.11), there is a bug when using SQLite which requires you to run the migrate command before anything else.
Type yes when asked. You'll see output like this:
Output
**************************************
* Application In Production! *
**************************************
Do you really wish to run this command? (yes/no) [no]:
> yes
Migration table created successfully.
Migrated: 2015_01_05_201324_CreateComponentGroupsTable
...
Migrated: 2016_06_02_075012_AlterTableMetricsAddOrderColumn
Migrated: 2016_06_05_091615_create_cache_table
The next command, php artisan app:install, takes a backup of the database, runs the migrations, and automatically generates the application key (i.e. the APP_KEY value in .env) which Cachet uses for all of its encryption.
Warning: Never change the APP_KEY value in the .env file after you have installed and started using Cachet in a production environment. Doing so will result in all of your encrypted/hashed data being lost. Use the php artisan app:install command only once. For this reason, it's a good idea to keep a backup of .env.
Complete the installation.
- php artisan app:install
The output will look like this:
Output
Clearing settings cache...
Settings cache cleared!
. . .
Clearing cache...
Application cache cleared!
Cache cleared!
As a last proactive step, remove Cachet’s cache to avoid 500 errors.
- rm -rf bootstrap/cache/*
Now that the database is ready, we can configure Cachet’s task queue.
Cachet uses a queue to schedule tasks that need to run asynchronously, such as sending emails. The recommended way is to use Supervisor, a process manager which provides a consistent interface through which processes can be monitored and controlled.
First, make sure you log out of the cachet user’s session and switch back to your sudo non-root user.
- exit
Install Supervisor.
- sudo apt-get install supervisor
Then create the file that will contain the information that Supervisor needs from Cachet. Open /etc/supervisor/conf.d/cachet.conf.
- sudo nano /etc/supervisor/conf.d/cachet.conf
This file tells Supervisor how to run and manage its process. You can read more about Supervisor in the article How To Install and Manage Supervisor on Ubuntu and Debian VPS.
Add the following contents. Make sure to update Cachet's directory and username if you've used different ones.
[program:cachet-queue]
command=php artisan queue:work --daemon --delay=1 --sleep=1 --tries=3
directory=/home/cachet/www/
redirect_stderr=true
autostart=true
autorestart=true
user=cachet
Save and close the file, then restart Supervisor.
- sudo systemctl restart supervisor
Enable the Supervisor service so that it starts automatically when the server is rebooted.
- sudo systemctl enable supervisor
The database and task queue are ready; the next component to set up is the web server.
We will use Nginx as the web server that proxies requests to php-fpm. The prerequisites section has tutorials on how to set up Nginx with a TLS certificate issued by Let's Encrypt.
Let's add the Nginx configuration file necessary for Cachet. Open /etc/nginx/sites-available/cachet.conf with nano or your favorite editor.
- sudo nano /etc/nginx/sites-available/cachet.conf
This is the full text of the file, which you should copy and paste in. Make sure to replace example.com with your domain name. The function of each section is described in more detail below.
server {
    server_name example.com;
    listen 80;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443;
    server_name example.com;

    root /home/cachet/www/public;
    index index.php;

    ssl on;

    ## Location of the Let's Encrypt certificates
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    ## From https://cipherli.st/
    ## and https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    ## Disable preloading HSTS for now. You can use the commented out header line that includes
    ## the "preload" directive if you understand the implications.
    #add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    ssl_buffer_size 1400;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm-cachet.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
        fastcgi_keep_conn on;
    }
}
Here’s what each section of this file does.
The first server block redirects all HTTP traffic to HTTPS:
server {
    server_name example.com;
    listen 80;
    return 301 https://$server_name$request_uri;
}
. . .
The second server block contains specific information about this setup, like SSL details and php-fpm configuration.
The root directive tells Nginx where the root directory of Cachet is. It should point to the public directory, and since we cloned Cachet into /home/cachet/www/, it ultimately becomes root /home/cachet/www/public;.
. . .
server {
    listen 443;
    server_name example.com;

    root /home/cachet/www/public;
    index index.php;
    . . .
}
The SSL certificates live inside the Let’s Encrypt directory, which should be named after your domain name:
. . .
server {
    . . .
    ssl on;
    ## Location of the Let's Encrypt certificates
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    . . .
}
The rest of the SSL options are taken directly from the Nginx and Let’s Encrypt tutorial:
. . .
server {
    . . .
    ## From https://cipherli.st/
    ## and https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    ## Disable preloading HSTS for now. You can use the commented out header line that includes
    ## the "preload" directive if you understand the implications.
    #add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    ssl_buffer_size 1400;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    . . .
}
The location ~ \.php$ section tells Nginx how to serve PHP files. The most important part is to point to the Unix socket file that we used when we created /etc/php5/fpm/pool.d/cachet.conf. Specifically, that is /var/run/php5-fpm-cachet.sock.
. . .
server {
    . . .
    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm-cachet.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
        fastcgi_keep_conn on;
    }
}
Save and close the file if you haven’t already.
Now that the Cachet configuration for Nginx is created, create a symlink to the sites-enabled directory, because this is where Nginx looks for the configuration files to use:
- sudo ln -s /etc/nginx/sites-available/cachet.conf /etc/nginx/sites-enabled/cachet.conf
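The symlink mechanism itself is easy to sketch in scratch directories that stand in for Nginx's configuration tree:

```shell
# How the sites-available/sites-enabled symlink works, sketched in scratch directories.
cd "$(mktemp -d)"
mkdir -p sites-available sites-enabled
echo 'server { }' > sites-available/cachet.conf             # stand-in config file
ln -s "$PWD/sites-available/cachet.conf" sites-enabled/cachet.conf
cat sites-enabled/cachet.conf                               # read through the symlink
```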
Restart Nginx for the changes to take effect.
- sudo systemctl restart nginx
And enable the Nginx service so that it starts automatically when the server is rebooted.
- sudo systemctl enable nginx
That’s it! If you now navigate to the domain name in your browser, you’ll see Cachet’s setup page. Let’s walk through it.
The remainder of Cachet’s setup is done through the GUI in your browser. It involves setting the site name and timezone as well as creating the administrator account. There are three steps (setting up the environment, the status page, and the administrator account), and you can always change the configuration later in Cachet’s settings dashboard.
The first configuration step is the Environment Setup.
Note: The Cachet version we are using has a bug where email settings are not shown in the Environment Setup page, even if you have already set them up in .env. This will be fixed in version 2.4.
The fields should be filled in as follows:
Click Next to go to the next step.
In this section, you set up the site name, site domain, timezone, and language.
Note: Cachet has support for many languages, but it is a community-driven project, which means that there may be some untranslated strings in non-English languages. You can view the list of supported languages, which also includes the percentage of translated content.
The fields should be filled in as follows:
Click Next to go to the next step.
Finally, set up the administrator account. Pick your username, and enter a valid email address and a strong password.
Click Complete Setup to save all the changes.
On the Complete Setup page, you will be informed that Cachet has been configured successfully. You can now click the Go to dashboard button to log in with your admin credentials and visit Cachet's dashboard page.
Cachet is now fully set up and functional. The last step covers how to upgrade Cachet in the future.
Using Git makes it extremely easy to upgrade when a new version of Cachet comes out. All you need to do is check out the relevant tag and then run the database migrations.
Note: It is always a good idea to back up Cachet and its database before attempting to upgrade to a new version. For SQLite, you only need to copy the database/database.sqlite file.
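A minimal backup sketch, using scratch paths that stand in for Cachet's installation directory:

```shell
# Hedged sketch of a dated SQLite backup before upgrading; all paths here are
# scratch stand-ins for /home/cachet/www/database.
cd "$(mktemp -d)"
mkdir -p database
echo 'stand-in data' > database/database.sqlite
cp database/database.sqlite "database/database.sqlite.$(date +%Y-%m-%d)"
ls database
```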
First, switch to the cachet user and move to Cachet’s installation directory.
- sudo su - cachet
- cd /home/cachet/www
You can optionally turn on the maintenance page.
- php artisan down
Fetch the latest Cachet code from GitHub.
- git fetch --all
And list all tags.
- git tag -l
You will see all current tags, which start with the letter v. You may notice some that are in beta or Release Candidate (RC) status. Because this is a production server, you can ignore those. You can also visit the Cachet releases page to see what the latest tag is.
When you find the tag you want to use to upgrade, use Git to check out that tag. For example, if you were to upgrade to version 2.4.0, you would use:
- git checkout v2.4.0
Remove Cachet’s cache before continuing.
- rm -rf bootstrap/cache/*
Next, upgrade the Composer dependencies, which usually contain bug fixes, performance enhancements, and new features.
- composer install --no-interaction --no-dev -o --no-scripts
Finally, run the migrations.
- php artisan app:update
If you turned on the maintenance page, you can now enable access again.
- php artisan up
The new version of Cachet will be up and running.
You’ve set up Cachet with SSL backed by SQLite and know how to keep it maintained with Git. You can choose other databases, like MySQL or PostgreSQL. To explore more of Cachet’s options, check out the official Cachet documentation.