
Last updated: May 7, 2025
In this tutorial, we’ll explore the monitoring capabilities provided by Spring Kafka using Micrometer and Spring Boot Actuator. We’ll start by looking at the native metrics exposed by Apache Kafka for both producers and consumers, which offer valuable insights into performance, throughput, errors, and latency.
Next, we’ll dive into the Spring-specific metrics exposed under spring.kafka.listener and spring.kafka.template. We’ll also learn how to customize @KafkaListener and KafkaTemplate to add custom tags to these metrics through Spring configuration.
Lastly, we’ll discuss tracing and see how Spring Kafka makes it easy to propagate the tracing information generated by Micrometer. This allows us to track and correlate messages for better debugging and monitoring.
For this article’s code samples, we’ll assume we’re creating the backend application for a blogging and learning website, like Baeldung.
We’ll use a simple Spring Boot application that uses the spring-kafka and spring-boot-starter-actuator dependencies:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
In the src/test/resources/docker folder, we can find a docker-compose.yml file that can be used to start the application locally.
Additionally, the application is configured to expose all the metrics via actuator endpoints:
management.endpoints.web.exposure.include: '*'
At this point, we should be able to start the application locally, access the actuator on http://localhost:8081/actuator, and see all the exposed endpoints and metrics:
In terms of functionality, our service provides an HTTP endpoint to create article comments. Once a comment is submitted, the application sends a message to the Kafka topic baeldung.article-comment.added, which lets us track Kafka producer metrics.
Later, we’ll set up a Kafka listener to consume messages from the same topic. This will help us understand how to monitor the Kafka listener and what metrics it exposes.
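The publishing side can be sketched roughly as follows. Note that the service class and method names here are illustrative, not the exact classes from the sample project:

```java
@Service
public class ArticleCommentsService {

    private final KafkaTemplate<String, ArticleCommentAddedEvent> kafkaTemplate;

    public ArticleCommentsService(KafkaTemplate<String, ArticleCommentAddedEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void addComment(String articleSlug, ArticleCommentAddedEvent event) {
        // use the article slug as the record key, so comments
        // for the same article land on the same partition
        kafkaTemplate.send("baeldung.article-comment.added", articleSlug, event);
    }
}
```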
Just by adding the spring-boot-starter-actuator dependency, we expose some Kafka metrics out of the box. This feature has been available since Spring Boot 2.5. Let's focus on the native metrics for the Kafka producer and consumer.
We’ll need to make the application produce some events, so let’s create some comments through the REST API:
curl --location 'http://localhost:8081/api/articles/oop-best-practices/comments' \
--header 'Content-Type: application/json' \
--data '{
    "articleAuthor": "Andrey the Author",
    "comment": "Great article!",
    "commentAuthor": "Richard the Reader"
}'
After that, we can check the http://localhost:8081/actuator/metrics endpoint again. This time, we’ll see a number of Kafka producer metrics, including values for things like latency and failure rate:
To dive deeper, we can find a comprehensive list of all Kafka producer metrics in the Apache Kafka documentation.
Needless to say, we can explore each of these metrics by appending its name to the path. For instance, let’s access the endpoint monitoring the producer error rate:
Similar to the producer metrics, Micrometer records metrics related to the Kafka consumer and exposes them via the actuator.
To illustrate this, let’s add a @KafkaListener annotation to our application. For simplicity, it’ll listen to the same baeldung.article-comment.added topic:
@Component
public class ArticleCommentsListener {

    @KafkaListener(topics = "baeldung.article-comment.added")
    public void onArticleComment(ArticleCommentAddedEvent event) {
        // some logic here...
    }
}
Now, we can run the application, send a few more requests, and then check the actuator metrics again. This time, we should see several metrics related to the Kafka consumer. Among others, we’ll notice metrics such as:
The full list of consumer properties we can expect to find is available in Kafka’s official documentation.
Spring Kafka also has a built-in API for enriching the native metrics with custom tags. For example, we can customize the producer metrics by adding a listener to the ProducerFactory bean:
@Bean
ProducerFactory<String, ArticleCommentAddedEvent> producerFactory(
  KafkaProperties kafkaProperties, MeterRegistry meterRegistry
) {
    DefaultKafkaProducerFactory<String, ArticleCommentAddedEvent> pf =
      new DefaultKafkaProducerFactory<>(kafkaProperties.buildProducerProperties());
    pf.addListener(new MicrometerProducerListener<>(
        meterRegistry,
        Collections.singletonList(new ImmutableTag("app-name", "article-comments-app"))
    ));
    return pf;
}
As we can see, this adds a custom tag to all Kafka producer metrics. Similarly, we can modify the ConsumerFactory, adding a MicrometerConsumerListener to attach custom tags to the consumer metrics.
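A consumer-side sketch looks very similar, assuming a ConsumerFactory bean configured like the producer one (the bean name and generic types here are illustrative):

```java
@Bean
ConsumerFactory<String, ArticleCommentAddedEvent> consumerFactory(
  KafkaProperties kafkaProperties, MeterRegistry meterRegistry
) {
    DefaultKafkaConsumerFactory<String, ArticleCommentAddedEvent> cf =
      new DefaultKafkaConsumerFactory<>(kafkaProperties.buildConsumerProperties());
    // attach the same static tag to every Kafka consumer metric
    cf.addListener(new MicrometerConsumerListener<>(
        meterRegistry,
        Collections.singletonList(new ImmutableTag("app-name", "article-comments-app"))
    ));
    return cf;
}
```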
In our demo application, we use the KafkaTemplate bean to publish messages to Kafka. In addition to producer metrics, Micrometer also records metrics specific to KafkaTemplate and exposes them under the spring.kafka.template metric name.
We can customize the KafkaTemplate bean to add or update the tags associated with this metric. For example, the setMicrometerTags() method allows us to define tags as key-value pairs and attach them to a specific KafkaTemplate bean:
@Bean
@Qualifier("articleCommentsKafkaTemplate")
KafkaTemplate<String, ArticleCommentAddedEvent> articleCommentsKafkaTemplate(
  ProducerFactory<String, ArticleCommentAddedEvent> producerFactory
) {
    var template = new KafkaTemplate<>(producerFactory);
    template.setMicrometerTags(Map.of(
        "topic", "baeldung.article-comment.added"
    ));
    return template;
}
Moreover, we can use setMicrometerTagsProvider() to dynamically generate the tags for a given record. Let’s use this method to extract the record’s key and attach it as a tag:
template.setMicrometerTagsProvider(
    record -> Map.of("article-slug", record.key().toString())
);
We can now add a few more comments to different articles and then verify the http://localhost:8081/actuator/metrics/spring.kafka.template endpoint:
As expected, the data includes all recorded information about KafkaTemplate performance along with our custom tags.
Just like with KafkaTemplate, Micrometer also monitors KafkaListener and exposes related metrics under the spring.kafka.listener name. Spring Kafka maintains a consistent API, making it easy to configure custom tags for listener metrics.
The setMicrometerTags() and setMicrometerTagsProvider() methods can be used to attach these custom Micrometer tags, and are configured at the ConcurrentKafkaListenerContainerFactory level:
@Bean
ConcurrentKafkaListenerContainerFactory<String, String> customKafkaListenerContainerFactory(
  ConsumerFactory<String, String> consumerFactory
) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
      new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);

    ContainerProperties containerProps = factory.getContainerProperties();
    containerProps.setMicrometerTags(Map.of(
        "app-name", "article-comments-app"
    ));
    containerProps.setMicrometerTagsProvider(
        record -> Map.of("article-slug", record.key().toString())
    );
    return factory;
}
Additionally, we need to update our @KafkaListener annotation and point it to the modified containerFactory:
@KafkaListener(
    topics = "baeldung.article-comment.added",
    containerFactory = "customKafkaListenerContainerFactory"
)
public void onArticleComment(ArticleCommentAddedEvent event) {
    // ...
}
As a result, Micrometer attaches our static and dynamic tags to the metric, and Actuator exposes them through the http://localhost:8081/actuator/metrics/spring.kafka.listener endpoint.
Micrometer’s tracing feature helps us track the flow of a request by adding trace information to the logs, which makes debugging and monitoring much easier. We’ll then use Spring Kafka’s built-in support to propagate the tracing context across our system through message metadata.
Micrometer uses the Mapped Diagnostic Context (MDC) to store tracing details in the form of two IDs: traceId and spanId.
We can observe this by placing a breakpoint in our REST controller and evaluating the expression MDC.getCopyOfContextMap() when the code is paused:
Having the two fields inside MDC allows us to easily enrich our logs with the tracing information. Let’s configure this in our logback.xml:
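A minimal pattern along these lines should work; the %X converter reads values from the MDC, and the exact appender setup will vary per project:

```xml
<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <!-- log the traceId and spanId (when present) next to the thread name -->
            <pattern>[%thread] [%X{traceId:-}, %X{spanId:-}] %-5level %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>
```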
As a result, whenever the traceId and spanId are present in the MDC context, they are logged next to the thread name.
The real benefit of adding tracing to our applications is the ability to propagate the tracing information across the system and correlate the different components. In other words, we need to use metadata, such as HTTP request headers or Kafka message headers, to pass along the traceId and spanId information.
First, let’s take a look at our KafkaTemplate. For each message, it should extract the traceId from the MDC and add it as a custom header. Luckily, this functionality is already supported; we just need to enable it by calling setObservationEnabled(true). Let’s apply this to our KafkaTemplate bean:
@Bean
KafkaTemplate<String, ArticleCommentAddedEvent> articleCommentsKafkaTemplate(
  ProducerFactory<String, ArticleCommentAddedEvent> producerFactory
) {
    var template = new KafkaTemplate<>(producerFactory);
    template.setObservationEnabled(true);
    // other config ...
    return template;
}
As a result, KafkaTemplate now adds the trace information as a message header with the key "traceparent".
On the listener side, we can already see the new traceparent header, but we still need to parse it and add it to the MDC. Similar to the producer, Spring Kafka can handle this for us if we enable observation at the container level:
@Bean
ConcurrentKafkaListenerContainerFactory<String, String> customKafkaListenerContainerFactory(
  ConsumerFactory<String, String> consumerFactory
) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
      new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);

    ContainerProperties containerProps = factory.getContainerProperties();
    containerProps.setObservationEnabled(true);
    // other config...
    return factory;
}
With this setup, we can run the application locally, send a few requests, and trace the flow from the initial HTTP request to a Kafka message, which is then processed by the listener in a different thread. This allows us to correlate the entire flow:
[http-nio-8081-exec-2] [680df9d4fcab49ea0511b54ff0f3ce9f, 0511b54ff0f3ce9f] INFO HTTP Request received to save article comment: ArticleCommentAddedDto[...]
[org.s.kafka...#0-0-C-1] [680df9d4fcab49ea0511b54ff0f3ce9f, de00d94a8258a1b9] INFO Kafka Message Received: Comment Added: ArticleCommentAddedEvent[...]
Needless to say, this is just a simple example, but the feature becomes very useful for tracing requests across more complex systems with multiple services.
In this article, we explored the monitoring features provided by Spring Kafka. By combining Apache Kafka’s native metrics with the extended monitoring support provided by Spring Kafka and Micrometer, we gain a comprehensive view of our messaging system’s health and performance.
The native metrics give us low-level operational insights, while Spring-specific metrics allow for more contextual and application-aware observability. We also learned how to enrich each of these metrics with tailor-made tags.
Finally, we learned how to enable tracing and enrich our logs with the traceId and spanId information. We customized our Spring Kafka beans to propagate these fields via the traceparent message header, which allowed us to track the flow of messages across different components.
As always, the code presented in this article is available over on GitHub.