Jun 11, 2020

Concurrently Process a Single Kafka Partition

Concurrency in Kafka is defined by how many partitions make up a topic. Within a consumer group, there can be as many consumers as there are partitions, with each consumer being assigned one or more partitions. If there are more partitions than consumers, some or all of the consumers will be assigned multiple partitions. If there are more consumers than partitions, the extra consumers will sit idle, waiting for a rebalance in case one of the other consumers goes down.
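To make the assignment rules above concrete, here is a small plain-Java sketch of range-style assignment. It is a simplified stand-in for Kafka's actual RangeAssignor (which also sorts consumers by member id), written just to illustrate how partitions spread across a group:

```java
import java.util.*;

public class AssignmentDemo {
    // Illustrative range-style assignment: partitions are split as evenly as
    // possible, with the first consumers each taking one extra when the
    // partition count doesn't divide evenly.
    static Map<String, List<Integer>> assign(int partitions, List<String> consumers) {
        Map<String, List<Integer>> result = new LinkedHashMap<>();
        int perConsumer = partitions / consumers.size();
        int extra = partitions % consumers.size();
        int next = 0;
        for (int i = 0; i < consumers.size(); i++) {
            int count = perConsumer + (i < extra ? 1 : 0);
            List<Integer> assigned = new ArrayList<>();
            for (int p = 0; p < count; p++) {
                assigned.add(next++);
            }
            result.put(consumers.get(i), assigned);
        }
        return result;
    }

    public static void main(String[] args) {
        // 6 partitions, 4 consumers: two consumers get 2 partitions, two get 1
        System.out.println(assign(6, List.of("c1", "c2", "c3", "c4")));
        // 1 partition, 3 consumers: c2 and c3 sit idle with nothing assigned
        System.out.println(assign(1, List.of("c1", "c2", "c3")));
    }
}
```

The second case is exactly the situation this post is about: one partition, and no way to add useful concurrency through the consumer group itself.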

But what if there is a need to process a topic with a single partition faster than one consumer can manage by itself?

Recently at a client, we were debugging a data load, trying to track down some missing data. The data load is controlled by another team two hops away from where our process receives the data. We were missing some records on our end, but the source team was convinced they had loaded all the data without dropping any records. To verify that, I started up kafka-avro-console-consumer and pointed it at the topic to see what was there. The data on that topic is serialized with Avro, and the field we needed to match on is defined as a logical type, so it shows up as gibberish in the output, meaning grepping the output won't work very well.

The next step was to write a simple consumer program that would deserialize the Avro and convert the field we needed to match on. Simple enough. So I fired it up and let it run. Four hours and over 300 million records later, I found the data we were looking for (it had been dropped by an intermediate process in the next step of the pipeline). 300 million records is a lot for one partition, and four hours is a long time to wait. How can this be sped up if we need to look for more data?

Why not split the partition into segments and let multiple consumers scan through their own offset ranges? Each consumer would need its own group and a start and end offset to scan through. So I gave it a shot with 10 threads, and it reduced the processing time to somewhere in the neighborhood of 30 minutes.
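The idea can be sketched without a broker by treating the partition as an in-memory list of values: each worker thread scans only its own offset slice and reports where it found matches. The "T65B" model value is the needle from the real search; everything else here is a stand-in for illustration:

```java
import java.util.*;
import java.util.concurrent.*;

public class SliceScanDemo {
    // Simulates the slice-and-scan approach with an in-memory "partition"
    // instead of a real Kafka topic: each worker scans only its own offset
    // range, and the results are gathered at the end.
    public static List<Long> findMatches(List<String> partition, String target, int slices)
            throws Exception {
        long start = 0;
        long end = partition.size();               // exclusive end offset
        long sliceSize = (end - start) / slices;

        ExecutorService pool = Executors.newFixedThreadPool(slices);
        List<Future<List<Long>>> futures = new ArrayList<>();
        for (int i = 0; i < slices; i++) {
            long from = start + i * sliceSize;
            // the last slice absorbs any remainder so the full range is covered
            long to = (i == slices - 1) ? end : from + sliceSize;
            futures.add(pool.submit(() -> {
                List<Long> hits = new ArrayList<>();
                for (long offset = from; offset < to; offset++) {
                    if (target.equals(partition.get((int) offset))) {
                        hits.add(offset);
                    }
                }
                return hits;
            }));
        }
        List<Long> all = new ArrayList<>();
        for (Future<List<Long>> f : futures) {
            all.addAll(f.get());
        }
        pool.shutdown();
        return all;
    }

    public static void main(String[] args) throws Exception {
        List<String> partition = new ArrayList<>(Collections.nCopies(1000, "X34"));
        partition.set(123, "T65B");
        partition.set(900, "T65B");
        System.out.println(findMatches(partition, "T65B", 10)); // [123, 900]
    }
}
```

With a real topic, each worker would instead hold its own KafkaConsumer with a unique group id, seek to `from`, and poll until it passes `to`; the sample project below shows those pieces.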

I’ve written a sample project to illustrate how this works and you can find it on GitHub. Here are some of the key parts.

Find the first and last offsets to calculate the number of messages.

long startOffset = kafkaConsumer.beginningOffsets(Collections.singletonList(topicPartition)).get(topicPartition);
long endOffset = kafkaConsumer.endOffsets(Collections.singletonList(topicPartition)).get(topicPartition);

Split up the partition into a set of ranges for each thread to process.

long sliceSize = (endOffset - startOffset) / numberOfSlices;
List<Range> ranges = IntStream.range(0, numberOfSlices)
        .mapToObj(i -> new Range(startOffset + (i * sliceSize), startOffset + (i * sliceSize) + sliceSize - 1))
        .collect(Collectors.toCollection(ArrayList::new));
// make sure the last range includes the endOffset
ranges.set(numberOfSlices - 1, new Range(ranges.get(numberOfSlices - 1).getStart(), endOffset));
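The slicing arithmetic can be checked in isolation. Here is a self-contained plain-Java sketch (with `Range` re-declared as a record, standing in for the sample project's class) showing that the slices tile the whole offset span, with the last slice absorbing any remainder from the integer division:

```java
import java.util.*;
import java.util.stream.*;

public class RangeSplitDemo {
    // Minimal stand-in for the sample project's Range class.
    record Range(long start, long end) {}

    static List<Range> split(long startOffset, long endOffset, int numberOfSlices) {
        long sliceSize = (endOffset - startOffset) / numberOfSlices;
        List<Range> ranges = IntStream.range(0, numberOfSlices)
                .mapToObj(i -> new Range(startOffset + i * sliceSize,
                                         startOffset + i * sliceSize + sliceSize - 1))
                .collect(Collectors.toCollection(ArrayList::new));
        // the last range is stretched so it always includes the endOffset
        ranges.set(numberOfSlices - 1,
                new Range(ranges.get(numberOfSlices - 1).start(), endOffset));
        return ranges;
    }

    public static void main(String[] args) {
        // 0..1000 in 3 slices: sliceSize is 333, the last slice takes the rest
        System.out.println(split(0, 1000, 3));
    }
}
```

Each range starts one offset after the previous range ends, so no offset is scanned twice and none is skipped.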

In each thread, set the offset to the beginning of the range.

TopicPartition topicPartition = new TopicPartition(topicName, partition);
// manually assign the partition (no group subscription), then move this
// consumer's offset to the beginning of its range
kafkaConsumer.assign(Collections.singletonList(topicPartition));
kafkaConsumer.seek(topicPartition, startOffset);

Process over this thread’s range and do something when a particular record is found.

// loop over the records until this thread reaches its ending offset.  It may go
// beyond the ending offset due to the batch size.
while (currentOffset < endOffset) {
    ConsumerRecords<String, Product> records = kafkaConsumer.poll(Duration.of(1000, ChronoUnit.MILLIS));
    for (ConsumerRecord<String, Product> record : records) {
        if (record.value().getModel() != null && record.value().getModel().equals("T65B")) {
            // we found the record we were looking for!
            LOG.info("Found record: {} - offset {}", record.value().getModel(), record.offset());
        }
    }
    currentOffset = kafkaConsumer.position(topicPartition);
}

This isn’t something I’d get too carried away with; creating 1000 groups to split up a partition might overwhelm Kafka, though I don’t have any concrete facts to back that up. The KafkaConsumer documentation does state that any number of groups can subscribe to a topic, so maybe it’s not that big of a concern.

Processing data this way, however, breaks Kafka’s ordering guarantee, so keep that in mind if using this method to quickly catch up on a single-partition topic.

Link to sample project: https://github.com/brendonanderson/partitionsplit

About the Author


Brendon Anderson

Sr. Consultant

Brendon has over 15 years of software development experience at organizations large and small.  He craves learning new technologies and techniques and lives in and understands large enterprise application environments with complex software and hardware architectures.

