Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share

Kafka Streams produces too many consumer offsets with exactly_once semantics

We have an application that uses Kafka Streams to read from a topic, process the records, and write to another topic. The application uses exactly_once semantics. Everything seemed fine until we upgraded from kafka-streams 2.1.1 to 2.6.0, after which we ran out of disk space; the cause was the __consumer_offsets topic holding a large amount of data.

I investigated the __consumer_offsets topic: offset information is written for every topic and partition every 100 ms. I am aware that exactly_once semantics lowers the commit interval to 100 ms, which would explain this, but we did not observe such behavior in version 2.1.1. I compared the source code of 2.1.1 and 2.6.0 and could not find any fundamental difference that explains it.
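For reference, the two settings in play are `processing.guarantee` and `commit.interval.ms`: when the former is set to `exactly_once`, Kafka Streams lowers the default commit interval from 30 000 ms to 100 ms, which is consistent with the commit rate observed above. A minimal sketch of the relevant configuration follows; the application id and bootstrap address are placeholders, and raising `commit.interval.ms` is only an illustration of one possible mitigation (it trades commit frequency for larger transactions and higher end-to-end latency), not a confirmed fix for the disk-usage issue.

```java
import java.util.Properties;

public class StreamsEosConfig {

    public static Properties build() {
        Properties props = new Properties();

        // Placeholder values -- substitute your own application id and brokers.
        props.put("application.id", "my-streams-app");
        props.put("bootstrap.servers", "localhost:9092");

        // With exactly_once, Kafka Streams defaults commit.interval.ms to 100,
        // so offsets land in __consumer_offsets far more often than the
        // at_least_once default of 30000 ms.
        props.put("processing.guarantee", "exactly_once");

        // Setting commit.interval.ms explicitly reduces the commit rate,
        // at the cost of larger transactions and higher latency.
        props.put("commit.interval.ms", "1000");

        return props;
    }

    public static void main(String[] args) {
        Properties p = build();
        System.out.println(p.getProperty("processing.guarantee"));
        System.out.println(p.getProperty("commit.interval.ms"));
    }
}
```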

Thanks for your answers

question from: https://stackoverflow.com/questions/65941718/kafka-streams-produces-too-many-consumer-offsets-for-exactly-once-semantic


1 Answer

Waiting for answers
