I’m currently trying out the managed Kubernetes cluster with GitLab CI/CD and the integrated private registry. To understand my problem, you need to know that I currently have a working Docker Swarm cluster using that same CI/CD setup and private registry.

However, with the managed Kubernetes cluster, Pod creation is always stuck at “ImagePullBackOff” with the following error:

Failed to pull image "registry.gitlab.com/<PROJECT-PATH>/<BRANCH>:<COMMIT HASH>": rpc error: code = Unknown desc = Error response from daemon: Get "registry.gitlab.com/<PROJECT-PATH>/<BRANCH>:<COMMIT HASH>": denied: access forbidden

I tried it with Helm/Tiller and also with a normal Kubernetes Deploy file. The secret is in the same namespace as the deployment and is working on the mentioned Docker Swarm and on my local machine.
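One way to rule out a bad secret is to decode its payload by hand. Below is a sketch with a hypothetical payload standing in for the real one; it shows the two layers of encoding inside a `kubernetes.io/dockerconfigjson` secret, so you can verify the registry URL and credentials match what the kubelet will actually send. (On a real cluster you could fetch the payload with something like `kubectl get secret <name> -o jsonpath='{.data.\.dockerconfigjson}'`.)

```shell
# Hypothetical payload standing in for the secret's .dockerconfigjson value.
PAYLOAD=$(printf '%s' '{"auths":{"registry.gitlab.com":{"auth":"dXNlcjpwYXNz"}}}' | base64 -w 0)

# Outer layer: the secret's data field is base64 of the docker config JSON.
printf '%s' "$PAYLOAD" | base64 -d
echo

# Inner layer: the "auth" field is base64("username:password").
printf '%s' 'dXNlcjpwYXNz' | base64 -d   # user:pass
echo
```

If the decoded registry host or username doesn’t match what works on the Swarm side, that’s the first thing to fix.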

Is DO doing something weird here? Maybe some of you have more information.

Kind regards,

edited by AHA
5 answers

I have a similar problem each time I create GitLab registry secrets via kubectl create secret. It just won’t pull any image, but when I use this script everything works:

# Create Docker Image Registry secret in Kubernetes
# Usage:
#   REGISTRY=registry.gitlab.com \
#   LOGIN=<your-username> \
#   PASSWORD=<your-pword> \
#   NAMESPACE=<your namespace> \
#   NAME=<secret-name> \
#   ./registry.sh
# The same as:
# kubectl create \
#         --namespace=<your-namespace> \
#         secret docker-registry <secret-name> \
#         --docker-server=<your-registry-server> \
#         --docker-username=<your-username> \
#         --docker-password=<your-pword>

set -eo pipefail

# Optional delete mode: remove the existing secret before recreating it.
if [ "$1" == "delete" ]; then
  kubectl delete secret \
    --namespace="${NAMESPACE}" \
    "${NAME}" || true
fi

# base64("username:password"), used in the "auth" field below.
export AUTH=$(echo -n "${LOGIN}:${PASSWORD}" | base64 -w 0)

# The docker config JSON with ${REGISTRY} and ${AUTH} substituted in,
# then base64-encoded as a whole.
export TOKEN=$(envsubst <<<'
{"auths": {"${REGISTRY}": {"auth": "${AUTH}"}}}
' | base64 -w 0)

# Render the Secret manifest and create it.
envsubst <<<'
apiVersion: v1
kind: Secret
metadata:
  name: ${NAME}
  namespace: ${NAMESPACE}
data:
  .dockerconfigjson: ${TOKEN}
type: kubernetes.io/dockerconfigjson
' | kubectl create -f -

You need to create a secret to authorize Kubernetes to pull images from the registry.

To do that, you can create a Secret object containing the base64 of your local Docker config.json, like so:

apiVersion: v1
kind: Secret
metadata:
  name: docker-registry-configuration
  namespace: your-namespace
data:
  .dockerconfigjson: base64encodedstring
type: kubernetes.io/dockerconfigjson
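The base64encodedstring above can be produced from a Docker config file. A minimal sketch, using a throwaway file in place of your real ~/.docker/config.json so it is self-contained:

```shell
# Stand-in for ~/.docker/config.json so the example runs anywhere.
CONFIG=$(mktemp)
printf '%s' '{"auths":{"registry.gitlab.com":{"auth":"dXNlcjpwYXNz"}}}' > "$CONFIG"

# One-line base64 of the whole file: this is the value that goes into
# the .dockerconfigjson field of the Secret above.
base64 -w 0 < "$CONFIG"
echo
rm -f "$CONFIG"
```

Alternatively, kubectl can build the secret for you straight from the file: `kubectl create secret generic docker-registry-configuration --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson`.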


Then reference the secret via imagePullSecrets in your Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: your-application
  namespace: your-namespace
spec:
  template:
    metadata:
      labels:
        app: your-application
    spec:
      containers:
      - image: myprivate.registry.com/image-name
        name: container-name
      imagePullSecrets:
      - name: docker-registry-configuration
      restartPolicy: Always
status: {}


As mentioned, I have a secret in place for this image. It’s a GitLab Deploy Token which is pushed to Kubernetes like this:

kubectl create secret -n "$KUBE_NAMESPACE" \
    docker-registry gitlab-registry \
    --docker-server="$CI_REGISTRY" \
    --docker-username="$CI_REGISTRY_USER" \
    --docker-password="$CI_REGISTRY_PASSWORD" \
    --docker-email="$GITLAB_USER_EMAIL" \
    -o yaml --dry-run | kubectl replace -n "$KUBE_NAMESPACE" --force -f -

The variables here are environment variables in the GitLab CI.
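One thing worth double-checking here, if I remember GitLab’s behavior correctly: `$CI_REGISTRY_USER` / `$CI_REGISTRY_PASSWORD` are backed by the per-job CI_JOB_TOKEN, which expires when the job ends, so a pull secret built from them can stop working for later image pulls. A Deploy Token named `gitlab-deploy-token` is instead exposed to CI as `$CI_DEPLOY_USER` / `$CI_DEPLOY_PASSWORD`, which outlive the job. A sketch with stand-in values so it runs outside CI:

```shell
# Stand-in values: in a real job these come from GitLab CI (the deploy-token
# variables exist only when a Deploy Token named "gitlab-deploy-token" is set up).
CI_DEPLOY_USER="${CI_DEPLOY_USER:-gitlab+deploy-token-1}"
CI_DEPLOY_PASSWORD="${CI_DEPLOY_PASSWORD:-s3cret}"

# Prefer the long-lived deploy-token credentials for the pull secret,
# not the per-job $CI_REGISTRY_PASSWORD (CI_JOB_TOKEN), which expires.
echo "--docker-username=$CI_DEPLOY_USER"
echo "--docker-password=$CI_DEPLOY_PASSWORD"
```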

After creating the secret, I deploy the application with Helm:

if [[ -z "$CI_COMMIT_TAG" ]]; then

helm upgrade --install \
      --wait \
      --set service.enabled="$service_enabled" \
      --set gitlab.app="$CI_PROJECT_PATH_SLUG" \
      --set gitlab.env="$CI_ENVIRONMENT_SLUG" \
      --set gitlab.envName="$CI_ENVIRONMENT_NAME" \
      --set gitlab.envURL="$CI_ENVIRONMENT_URL" \
      --set releaseOverride="$RELEASE_NAME" \
      --set image.repository="$image_repository" \
      --set image.tag="$image_tag" \
      --set image.pullPolicy=IfNotPresent \
      --set image.secrets[0].name="$secret_name" \
      --set application.track="$track" \
      --set application.database_url="$DATABASE_URL" \
      --set application.secretName="$APPLICATION_SECRET_NAME" \
      --set application.secretChecksum="$APPLICATION_SECRET_CHECKSUM" \
      --set service.commonName="le-$CI_PROJECT_ID.$KUBE_INGRESS_BASE_DOMAIN" \
      --set service.url="$CI_ENVIRONMENT_URL" \
      --set service.additionalHosts="$additional_hosts" \
      --set replicaCount="$replicas" \
      --set application.migrateCommand="$DB_MIGRATE" \
      --namespace="$KUBE_NAMESPACE" \
      "$name"
fi

Any movement or insight on this?

I had the same issue, but in my case it was a typo in the secret’s namespace. Once I fixed that, my problem was solved.
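That pitfall is easy to miss because both namespaces look right at a glance. A trivial sketch with hypothetical namespace values (on a real cluster you would read them from the Deployment and Secret manifests):

```shell
# Hypothetical namespaces; note the single-character typo in the second one.
DEPLOY_NS="production"
SECRET_NS="producton"

# The pull secret must live in the SAME namespace as the Deployment.
if [ "$DEPLOY_NS" != "$SECRET_NS" ]; then
  echo "mismatch: deployment in '$DEPLOY_NS', secret in '$SECRET_NS'"
fi
```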