Hi all,

I have an app that I’ve deployed to a Kubernetes cluster. The app connects to a Kafka instance and reads messages from one specific topic using one consumer group.

I want to make this a bit more resilient, so I’d like to increase the number of replicas in Kubernetes and set it to 3 or 5, for example.

But I’m not sure if this is going to be OK for the Kafka messages. Has anyone done this before, and did you have any issues with multiple pod replicas consuming from the same Kafka topic?

Any suggestions would be appreciated!

edited by bobbyiliev


2 answers

Caveat: I don’t have too much experience with Kafka, so it’s better to double-check what I’m about to say.

My understanding is that Kafka is specifically designed for this scenario. Multiple clients can produce messages to a topic, and multiple clients can consume them; consumers in the same group split the topic’s partitions between them rather than each reading every message.
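To make that concrete, here is a plain-Python illustration (not the Kafka client API, just a sketch of the idea) of how a range-style assignor splits a topic’s partitions among consumers in one group. The pod names and partition count are made up:

```python
# Illustration only (plain Python, not the Kafka client API): how a
# range-style assignor splits a topic's partitions among the consumers
# in a single group. Each partition is owned by exactly one consumer,
# so pod replicas share the work instead of all reading the same messages.

def range_assign(partitions, consumers):
    """Assign each partition to exactly one consumer (range-style)."""
    consumers = sorted(consumers)
    per_consumer = len(partitions) // len(consumers)
    extra = len(partitions) % len(consumers)
    assignment = {}
    start = 0
    for i, consumer in enumerate(consumers):
        count = per_consumer + (1 if i < extra else 0)
        assignment[consumer] = partitions[start:start + count]
        start += count
    return assignment

# A topic with 6 partitions, consumed by 3 pod replicas in the same group:
print(range_assign(list(range(6)), ["pod-0", "pod-1", "pod-2"]))
# → {'pod-0': [0, 1], 'pod-1': [2, 3], 'pod-2': [4, 5]}
```

One practical consequence of this design: if you run more replicas than the topic has partitions, the extra consumers sit idle, so it’s worth checking your partition count before scaling up.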

  • Hi @nabsul, thanks for the suggestion! I am afraid of getting duplicates, or messing up the data integrity. I might just give it a try and see if it causes any issues. I was hoping to find some Kafka documentation on this, but I couldn’t.

I think these points in the documentation are pretty helpful:

https://kafka.apache.org/uses#uses_messaging

I also found the following quote here: https://docs.microsoft.com/en-us/azure/hdinsight/kafka/apache-kafka-producer-consumer-api

If you create multiple consumer instances using the same group ID, they’ll load balance reading from the topic.

Still, even if the system you’re using supports your requirements, you might be accidentally misconfiguring it. So I highly recommend testing your code as much as possible.
