I am trying to get Drone CI/CD (v0.8.6) running in hosted Kubernetes (1.12) with SSL provisioned by Let’s Encrypt via cert-manager.

The standard unencrypted install works fine: I specify a Service of type LoadBalancer, DO notices that and creates a Load Balancer that forwards to the ingress, and everything works over HTTP.

But when I introduce cert-manager, there are 2 problems:

  1. While it seems to mostly work, the http01 challenge is not mapped correctly. I can see that routes for the challenge are created in the ingress, but they don’t get routed by the Load Balancer, and consequently no certificates are issued.
  2. The Load Balancer doesn’t get updated to forward 443 through to the ingress for termination – I’m guessing this is just a limitation of the automatically deployed LB in DO.

I’m guessing that the helm chart needs some additional annotations passed through to the ingress (or applied manually) to make everything sync up in the DO environment – but I am new to cert-manager and don’t have a working environment to use as a reference.

So with all that, here are my questions:

  1. Does anyone have cert-manager working with a Load Balancer and ingress in hosted Kubernetes? It doesn’t look to me like this currently works with the standard Helm install of cert-manager.
  2. Would it be better to use dns01 challenges? That seemed harder to implement on DO, but since it takes the Kubernetes networking out of the loop it might be easier.
  3. Has anyone gotten Drone running in hosted kubernetes on DO with cert-manager?

Advice appreciated, thank you!

BTW, for now I took the lazy way out and am using a non-SSL deployment with SSL termination applied manually in the Load Balancer. That is working OK, although with that approach I cannot allocate a certificate for a subdomain and had to create a whole new domain – but that is a topic for another thread.
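For reference, that LB-side termination can also be expressed declaratively through Service annotations understood by DigitalOcean’s cloud-controller-manager. A sketch (the certificate ID and port values are placeholders, not from this thread):

```yaml
# Sketch: TLS termination on the DO Load Balancer itself via Service
# annotations. The certificate ID must reference a cert already uploaded
# to or issued by DigitalOcean; values here are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: drone-server
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-do-cert-id"
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 8000
  selector:
    app: drone
```

Note this keeps the certificate outside the cluster, which is exactly the limitation described above (no cert-manager renewal, one cert per LB).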


2 answers

I have that exact setup working with ingress-nginx, plus the external-dns chart adding the DNS entries automatically. You didn’t mention having nginx as the ingress, so if you don’t, I suggest installing that.

ingress-nginx will create the LoadBalancer automatically and forward ports 80 and 443 as TCP to the Droplets (so the TLS termination happens in the cluster). nginx will also be cheaper than using per-app LoadBalancer services, since only one Load Balancer is needed for the whole cluster.

When installing ingress-nginx through Helm, set controller.publishService.enabled: true so that Ingress objects get the IP of the Load Balancer instead of the Droplet they reside on. I don’t think this matters if you’re not using external-dns, but at least to me it improves readability.
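For example, a minimal values file containing just that setting (a sketch):

```yaml
# Minimal sketch: publish the Load Balancer's address on Ingress objects
controller:
  publishService:
    enabled: true
```

Install with `helm install --name nginx-ingress -f nginx-ingress-values.yml stable/nginx-ingress` (Helm v2 syntax, matching the rest of this thread).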

With this setup the cert-manager will add a path for the http01 challenge to the Ingress object automatically.
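To illustrate what that looks like (a hypothetical snippet – the solver service name, port, and token are generated by cert-manager, not taken from this thread), the temporary rule it injects into the Ingress is roughly:

```yaml
# Hypothetical: what cert-manager's ingress-shim (v0.x API) temporarily adds
# to the Ingress while an http01 challenge is being solved. It is removed
# again once the certificate is issued.
spec:
  rules:
    - host: drone.example.com
      http:
        paths:
          - path: /.well-known/acme-challenge/<token>
            backend:
              serviceName: cm-acme-http-solver-abc12
              servicePort: 8089
```

This is why the Load Balancer must pass port 80 through to nginx: Let’s Encrypt fetches that path over plain HTTP.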

These are my values for the drone ingress:

  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: 'true'
    hosts:
      - drone.example.com
    tls:
      - secretName: drone-tls
        hosts:
          - drone.example.com
  • Aha – I’m certainly an ignorant newbie! I had no idea I needed to install additional charts to get it to work; as far as I can tell, none of the examples indicate that is needed.

    I will try installing the nginx-ingress chart, though I have no idea what other configuration may be needed to make it work besides controller.publishService.enabled – but maybe that is enough.

    I assume that I can then use service.type: ClusterIP
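    For what it’s worth, with nginx terminating TLS the Drone service only needs to be reachable inside the cluster, so a fragment like this should do (a sketch, assuming the stable/drone chart exposes service.type):

```yaml
# drone-values.yml fragment (sketch): nginx-ingress handles all external
# traffic, so no per-app LoadBalancer is needed
service:
  type: ClusterIP
```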

    My initial attempts are not working but I have plenty of stuff to try – if you happen to have time to redact your configuration files and post them it would be wonderful.

    Thank you so much.

    • Here are the values I’m using. I did not include external-dns config since that’s specific for your provider, but I can send the values if you want to use it and can’t get it working.


        controller:
          # Set the endpoints on all ingress objects to the Load Balancer. Needed for external-dns.
          publishService:
            enabled: true
          # these are only needed for Prometheus
          stats:
            enabled: true
          metrics:
            enabled: true
          podAnnotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "10254"


        ingress:
          enabled: true
          annotations:
            kubernetes.io/ingress.class: nginx
            kubernetes.io/tls-acme: 'true'
          hosts:
            - drone.example.com
          tls:
            - secretName: drone-tls
              hosts:
                - drone.example.com
        server:
          host: "https://drone.example.com"
          env:
            DRONE_ADMIN: admin
            DRONE_PROVIDER: "github"
            DRONE_OPEN: "false"
            DRONE_GITHUB: "true"


        ingressShim:
          defaultIssuerName: letsencrypt-prod
          defaultIssuerKind: ClusterIssuer


      apiVersion: certmanager.k8s.io/v1alpha1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-prod
      spec:
        acme:
          server: https://acme-v02.api.letsencrypt.org/directory
          email: example@example.com
          privateKeySecretRef:
            name: letsencrypt-prod
          http01: {}
      • Thank you for the guidance! This worked perfectly – it even established a SAN SSL certificate for drone.sub2.domain.com even though I already have a separate Let’s Encrypt certificate at sub1.domain.com and domain.com. Magical.

        In some ways I was very close to having this working before, but I would always end up with one thing broken while another was working, so it never quite came together. I doubt I would have figured out the combination of nginx-ingress along with the ingressShim on my own.

        I didn’t try to set up external-dns, but I’m hopeful I will be able to figure it out.

        In case anyone needs even more detail, this is how I proceeded from a blank Digital Ocean k8s instance, using the configuration files above plus a little additional kubernetes config:


        # rbac-config.yml
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: tiller
          namespace: kube-system
        ---
        apiVersion: rbac.authorization.k8s.io/v1beta1
        kind: ClusterRoleBinding
        metadata:
          name: tiller
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
        subjects:
          - kind: ServiceAccount
            name: tiller
            namespace: kube-system

        Establish the cluster services:

        # Establish RBAC Service Account for Tiller
        kubectl create -f rbac-config.yml
        helm init --service-account tiller
        # Install nginx-ingress which will be the load-balancer for this cluster
        helm install --name nginx-ingress -f nginx-ingress-values.yml stable/nginx-ingress
        # Install cert-manager
        helm install --name cert-manager -f cert-manager-values.yml stable/cert-manager
        # Note: cert-manager has to be installed before this will work
        kubectl apply -f ./clusterissuer-letsencrypt-prod.yml

        At this point you should manually create the DNS entry for the Load Balancer that Digital Ocean created automatically when the helm chart nginx-ingress was installed above.
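        If you keep your DNS at DigitalOcean, this can be done from the command line; a sketch (assumes doctl is installed and authenticated; domain and IP are placeholders):

```shell
# Find the external IP assigned to the nginx-ingress Load Balancer
kubectl get svc nginx-ingress-controller

# Point an A record at it (placeholder domain and IP)
doctl compute domain records create example.com \
    --record-type A --record-name drone --record-data 203.0.113.10
```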

        Then ensure that your redirect URLs in drone-values.yml match those configured in the GitHub OAuth application.

        # Establish GitHub OAuth secrets
        kubectl create secret generic drone-server-secrets --namespace=default \
            --from-literal=DRONE_GITHUB_SECRET="...REDACTED..."
        # Install Drone
        helm install --name drone -f ./drone-values.yml stable/drone


        Thanks again @myyra, you rock!

For these points:

1) It works for me. However, I started with one domain, one service, and one SSL cert (server PRIME with domain www.prime.xyz). Then I moved to different domains on different services with different SSL certs (server PRIME with domain www.prime.xyz, plus service SECONDARY with domain splash.secondary.xyz). Now I’m trying to have different domains with multiple SSL certs, and currently I’m not able to make it work (server PRIME with domain www.prime.xyz, service SECONDARY with domain splash.secondary.xyz, AND service TROUBLE with domain trouble.secondary.xyz).

2) I’m using http01.

3) I’m not using Drone.

