Question

What's the best way to set up CD from GitHub Actions?

I have a Django application running with Docker on a Droplet.

I also have Redis and Elasticsearch services running in Docker, and I'd obviously like to keep their data rather than just rm-ing the directory. My database is a DO managed instance.

I already have a non-root user set up, along with an SSH key on my GitHub account for that user.

Is there any DigitalOcean documentation, or another guide you'd recommend, for setting this up?

Here’s my current workflow:

name: CI/CD

on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]

jobs:
  CI:
    runs-on: ubuntu-latest
    environment: Django Test
    services:
      elasticsearch:
        image: elasticsearch:7.17.9
        env:
          discovery.type: single-node
        ports:
          - "9200:9200"
        options: >-
          --health-cmd="curl http://localhost:9200/_cluster/health"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=10

      postgres:
        image: postgres:13
        env:
          POSTGRES_USER: postgres
          POSTGRES_DB: postgres
          POSTGRES_PASSWORD: postgres
        ports:
          - 5433:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5

    strategy:
      max-parallel: 4
      matrix:
        python-version: ["3.10"]

    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run Tests
        env:
          SECRET_KEY: ${{ secrets.SECRET_KEY }}
          POSTGRES_NAME: ${{ secrets.POSTGRES_NAME }}
          POSTGRES_USER: ${{ secrets.POSTGRES_USER }}
          POSTGRES_PASSWORD: ${{ secrets.POSTGRES_PASSWORD }}
          POSTGRES_PORT: ${{ secrets.POSTGRES_PORT }}
          DJANGO_SETTINGS_MODULE: ${{ secrets.DJANGO_SETTINGS_MODULE }}
          CELERY_BROKER_URL: ${{ secrets.CELERY_BROKER_URL }}
          YOUTUBE_V3_API_KEY: ${{ secrets.YOUTUBE_V3_API_KEY }}
        run: |
          coverage run manage.py test && coverage report --fail-under=90


Bobby Iliev
Site Moderator
December 28, 2023

Hi there,

Quickly jumping in here! Great to hear that you’ve got your deployment process working with SSH, git pull, and Docker Compose! To minimize downtime during deployment, it helps to look at where the downtime actually comes from:

  1. Building a Docker image on the server can take a significant amount of time, during which your application might be unavailable.

  2. Stopping and starting containers also contributes to downtime.

What I could suggest here is:

  1. Instead of building images on the production server, consider using a GitHub workflow to build Docker images and push them to a registry. Your production server can then simply pull the latest image, reducing build time and load on the server (see the workflow sketch at the end of this answer).

  2. Docker Compose supports additional deployment metadata for updates, which can help minimize downtime by updating containers one by one rather than all at once (see the Compose sketch after this list):

    https://docs.docker.com/compose/compose-file/deploy/

  3. Implement health checks in your Docker configuration. This ensures that traffic is only routed to a container once it’s fully ready to handle requests (also shown in the sketch below):

    https://docs.docker.com/compose/compose-file/compose-file-v3/#healthcheck
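
To make points 2 and 3 concrete, here is a minimal docker-compose.yml sketch. The service name, image name, and port are placeholders for your setup, and the healthcheck assumes curl is installed in the image. Note that the rolling-update behaviour of deploy.update_config only fully applies when running under Docker Swarm (docker stack deploy); with plain docker-compose it is more a statement of intent, so treat this as a starting point rather than a guaranteed zero-downtime setup:

version: "3.8"

services:
  web:
    # Placeholder: the pre-built image your GitHub workflow pushes to a registry
    image: registry.example.com/myapp/web:latest
    deploy:
      # Replace containers one at a time, starting the new one before stopping the old
      update_config:
        parallelism: 1
        order: start-first
        delay: 10s
    healthcheck:
      # Assumes curl is available in the image; only marks the container
      # healthy once Django actually answers HTTP requests
      test: ["CMD-SHELL", "curl -f http://localhost:8000/ || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s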

Here’s an example deploy.sh script that you can use on your server once you’ve offloaded the build stage to a GitHub action and have your images available on a container registry:

#!/bin/bash

# Pull the latest version of the repository
git pull origin main

# Pull the latest Docker images
docker-compose pull

# Apply database migrations
# Uncomment the next line if you have database migrations
# docker-compose run --rm web python manage.py migrate

# Recreate the web container with the freshly pulled image
docker-compose up -d --no-deps --build web

# Optional: Clean up after the update
docker image prune -f
  • This script pulls the latest code from your repository, pulls the latest Docker images, and then uses docker-compose up -d to restart your containers with minimal downtime.
  • The --no-deps flag prevents Docker Compose from also recreating linked services.
  • The --build flag is optional if you are building images on the fly. If you are pulling pre-built images from a registry, you can omit this flag.
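
For point 1, here is a rough sketch of a CD job you could append to the workflow in your question, after the CI job. It uses GitHub Container Registry purely as an example (DigitalOcean Container Registry works similarly via doctl registry login), and the secret names DROPLET_HOST, DROPLET_USER, and DROPLET_SSH_KEY, as well as the project path, are placeholders you would adapt to your project:

  CD:
    needs: CI
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v3

      - name: Log in to the container registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push the image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          # GHCR image names must be lowercase
          tags: ghcr.io/${{ github.repository }}:latest

      - name: Deploy on the Droplet over SSH
        # DROPLET_HOST, DROPLET_USER and DROPLET_SSH_KEY are placeholder secret names
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.DROPLET_SSH_KEY }}" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh -o StrictHostKeyChecking=accept-new -i ~/.ssh/id_ed25519 \
            ${{ secrets.DROPLET_USER }}@${{ secrets.DROPLET_HOST }} \
            'cd /path/to/your/project && ./deploy.sh'

On the Droplet you would pair this with the deploy.sh above, making sure your docker-compose.yml references the same image tag the workflow pushes.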

Hope that this helps!

Best,

Bobby

alexdo
Site Moderator
December 24, 2023

Heya,

To keep the data from your Redis and Elasticsearch services when running your Django application on a DigitalOcean Droplet, you can use Docker volumes.

Docker volumes allow you to store data that is independent of the Docker containers themselves, ensuring that the data persists even if you restart or remove the containers.
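
As a rough sketch (service names and image tags are placeholders for whatever you are running), named volumes in your docker-compose.yml could look like this; the mount paths are the default data directories used by the official Redis and Elasticsearch images:

services:
  redis:
    image: redis:7
    volumes:
      # Redis writes its RDB/AOF files to /data by default
      - redis-data:/data

  elasticsearch:
    image: elasticsearch:7.17.9
    environment:
      - discovery.type=single-node
    volumes:
      # Elasticsearch keeps its indices under this directory
      - es-data:/usr/share/elasticsearch/data

volumes:
  redis-data:
  es-data:

With named volumes, docker-compose down followed by docker-compose up -d (without the -v flag) recreates the containers while leaving the data in place.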

Happy holidays!
