Pardon my ignorance as I’m new to Kubernetes. I’m trying to monitor my nginx-ingress using prometheus.

I initially installed the ingress using this tutorial which has you install the helm stable/nginx-ingress chart.

Then I installed the Kubernetes Monitoring Stack

Next I discovered you need to enable metrics on the ingress controller to expose them to prometheus, so I used helm upgrade to set controller.metrics.enabled on the ingress. This created a new service, nginx-ingress-controller-metrics, but still no stats in prometheus.
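
For reference, the upgrade command looked something like this (release and chart names taken from the tutorial; yours may differ):

helm upgrade nginx-ingress stable/nginx-ingress --set controller.metrics.enabled=true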

Finally, I found this issue, which said you need to create a prometheus job to scrape the ingress metrics. Unfortunately, it didn’t say how to add the job.
Since the monitoring stack uses prometheus-operator, I ended up following this method to add the job (I think). Following those instructions, I created a secret containing the job as outlined in the Stack Overflow answer, added that secret to the prometheus-operator namespace, then referenced it in spec.additionalScrapeConfigs on prometheus-operator-prometheus
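
Roughly, that method looks like the following (the job name and relabeling below are my best-guess minimal scrape config, and the secret/key names are placeholders from the example I followed):

# prometheus-additional.yaml: one extra scrape job that keeps only the metrics service
- job_name: nginx-ingress-metrics
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_name]
      action: keep
      regex: nginx-ingress-controller-metrics

# create the secret holding that file in the prometheus-operator namespace
kubectl create secret generic additional-scrape-configs \
  --from-file=prometheus-additional.yaml \
  --namespace prometheus-operator

# then reference it from the Prometheus resource (prometheus-operator-prometheus):
#   spec:
#     additionalScrapeConfigs:
#       name: additional-scrape-configs
#       key: prometheus-additional.yaml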

That’s where I’m at now and still no luck seeing the ingress metrics. Any guidance would be much appreciated. Thanks in advance!

1 answer

Hey @owise1
did you manage to get the ingress metrics into prometheus operator?

I’m running the same stack and facing the same problem…
Would be great if you could share your solution.

Thanks in advance!
Laurenz

  • Nope, I never figured it out. I’m actually planning on trying Google Cloud Platform because I believe it’s all built in. Let me know if you get it going on DO though.

    • That’s sad. I will let you know if I find it out…

    • Hey there, I encountered the same issue today, I think. The problem is that prometheus operator by default doesn’t scrape from other releases. You can read about this here.

      By default, Prometheus discovers ServiceMonitors within its namespace, that are labeled with the same release tag as the prometheus-operator release. Sometimes, you may need to discover custom ServiceMonitors, for example used to scrape data from third-party applications. An easy way of doing this, without compromising the default ServiceMonitors discovery, is allowing Prometheus to discover all ServiceMonitors within its namespace, without applying label filtering. To do so, you can set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues to false.

      So I fixed this easily by…

      # upgrade ingress to enable metrics
      helm upgrade nginx-ingress ingress-nginx/ingress-nginx --namespace ingress-nginx --set controller.metrics.serviceMonitor.enabled=true --set controller.metrics.enabled=true
      
      # upgrade prometheus operator to look in other namespaces
      helm upgrade prometheus-operator stable/prometheus-operator --namespace prometheus-operator --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
      
      

      You will need to make sure you have added the stable repo and the nginx-ingress repo so you can upgrade the chart.
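
      In case it helps, adding them looks something like this (these were the repo URLs at the time):

      helm repo add stable https://kubernetes-charts.storage.googleapis.com
      helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
      helm repo update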

      If this doesn’t work, check to make sure your nginx ServiceMonitor is in the prometheus-operator namespace and your metrics are up and running.
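
      A quick sanity check on the metrics side is to port-forward the controller pod and curl the endpoint (assuming the default metrics port 10254; substitute your controller pod’s name and namespace):

      kubectl port-forward <nginx-ingress-controller-pod> 10254:10254 --namespace ingress-nginx
      curl -s http://localhost:10254/metrics | head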

      • Hey @SubjectiveBias
        thanks for your detailed response!

        I just tried to upgrade the nginx-ingress and the prometheus-operator with the settings from your post.

        I also created a ServiceMonitor like this:

        apiVersion: monitoring.coreos.com/v1
        kind: ServiceMonitor
        metadata:
          name: nginx-ingress-controller-metrics
          labels:
            app: nginx-ingress
        spec:
          endpoints:
            - interval: 30s
              port: metrics
          selector:
            matchLabels:
              app: nginx-ingress
              release: nginx-ingress
          namespaceSelector:
            matchNames:
              - kube-system
        
        

        But unfortunately it is still not working…
        My nginx ingress is running in the namespace “kube-system” and prometheus-operator is running in “monitoring”

        I created the ServiceMonitor in the monitoring namespace and can find it in prometheus targets as:
        “monitoring/nginx-ingress-controller-metrics/0 (0/0 up)”

        But without any endpoint…
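
        For reference, this is how I compared the ServiceMonitor’s selector against the labels on the metrics service (service name as created by the metrics upgrade):

        kubectl get svc nginx-ingress-controller-metrics --namespace kube-system --show-labels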

        Do you have any idea where my problem could be?

        • Hey, I think you’re getting close.

          The command when upgrading nginx-ingress should have created a ServiceMonitor for you under the namespace you are using, so no need to create one yourself.

          The problem arises because nginx-ingress has its own ServiceMonitor in its own namespace (in your case kube-system), while prometheus-operator only looks in its own namespace by default (in your case monitoring).

          This is what I did to get it to work, changed to your namespaces:

          1: installed the monitoring stack and nginx-ingress from helm / the DigitalOcean marketplace

          2: add repos to upgrade from

          helm repo add stable https://kubernetes-charts.storage.googleapis.com
          helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
          helm repo update
          

          3: create ingress values.yaml (some of this is unneeded, but it’s what I’m currently using)

          controller:
            metrics:
              port: 10254
              enabled: true
              service:
                annotations:
                  prometheus.io/scrape: "true"
                  prometheus.io/port: "10254"
              serviceMonitor:
                enabled: true
                namespace: monitoring
                namespaceSelector:
                  any: true
          

          4: upgrade nginx and prometheus

          helm upgrade nginx-ingress ingress-nginx/ingress-nginx --namespace kube-system --values values.yaml
          
          helm upgrade prometheus-operator stable/prometheus-operator --namespace monitoring --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
          

          That long flag, serviceMonitorSelectorNilUsesHelmValues, makes the operator look beyond its own Helm release labels. Apparently it’s a known source of wasted time.
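
          To confirm the flag took effect, you can check the selector on the generated Prometheus resource; with the flag set to false it should render as an empty selector ({}), meaning no label filtering:

          kubectl get prometheus --namespace monitoring -o yaml | grep serviceMonitorSelector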
