Hi guys,
I have created an API service for my website that pulls the image from Docker Hub. That is all working fine, but the container isn't running properly, and I wanted to look inside the container for some kind of error message. Is there a way of doing that? What is the proper way to inspect logs for each pod/service/container?

Also, what do I use to create environment variables? Is this done via ConfigMaps?


2 answers

You can use kubectl logs with the --previous flag to retrieve logs from a previous instantiation of a container, in case the container has crashed. If your pod has multiple containers, specify which container's logs you want by appending the container name to the command.
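For example, assuming a pod named api-service (a hypothetical name, substitute your own) whose container has crashed:

```shell
# Logs from the current instance of the pod's container
kubectl logs api-service

# Logs from the previous, crashed instance of the container
kubectl logs api-service --previous

# Logs from a specific container in a multi-container pod
kubectl logs api-service -c api
```

These commands assume your kubeconfig already points at the right cluster and namespace.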

@christianstrang If you want to look inside the container and view its logs, you can do one of the following:

Using Docker

shell into a running container

docker exec -it <container-name> bash

or start a throwaway container from the image (useful when the container won't stay up long enough to exec into)

docker run --rm -it <image-name> bash

view container’s logs

docker logs <container-name>

Note: If you’re using Alpine based image, you will need to use sh instead of bash.
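To watch a container's logs as they are written, docker logs also accepts a few handy flags (the container name below is just a placeholder):

```shell
# Stream the last 100 log lines and follow new output
docker logs -f --tail 100 <container-name>

# Include timestamps, which helps correlate log lines with a crash
docker logs -t <container-name>
```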

Using Kubernetes

shell into container

kubectl exec -i -t <pod-name> --container <container-name> -- /bin/bash

Note: If you’re using Alpine based image, you will need to use sh instead of bash.

view container’s logs

kubectl logs <pod-name> -c <container-name>  

Note: A Kubernetes pod can consist of one or more running containers. Also, I would recommend taking a look at a tool called stern for viewing logs within a K8s cluster.
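As a quick sketch of how stern is used (the pod query "api-service" and namespace are illustrative), it tails logs from every matching pod and container at once:

```shell
# Tail logs from every pod whose name matches "api-service"
stern api-service

# Limit to a namespace and show the last 20 lines per container
stern api-service --namespace my-namespace --tail 20
```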

Next, you can use K8s ConfigMaps for your environment variables and K8s Secrets for your sensitive credentials. Finally, I would highly recommend testing your Docker container(s) prior to using them within your K8s cluster.
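As one way to wire that up (all names here are illustrative), you can create a ConfigMap and a Secret from literals and then inject their keys as environment variables into an existing Deployment:

```shell
# Non-sensitive configuration values
kubectl create configmap api-config --from-literal=LOG_LEVEL=debug

# Sensitive credentials (stored base64-encoded, access-controlled via RBAC)
kubectl create secret generic api-secrets --from-literal=DB_PASSWORD='s3cret'

# Expose every key of each as environment variables on the deployment
kubectl set env deployment/api-service --from=configmap/api-config
kubectl set env deployment/api-service --from=secret/api-secrets
```

You can achieve the same thing declaratively with envFrom in the Deployment manifest, which is usually preferable once your YAML is under version control.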

I hope the above information was helpful, and I wish you all the best.

Think different and code well,
