1. Overview

In this tutorial, we’ll explore the monitoring capabilities provided by Spring Kafka using Micrometer and Spring Boot Actuator. We’ll start by looking at the native metrics exposed by Apache Kafka for both producers and consumers, which offer valuable insights into performance, throughput, errors, and latency.

Next, we’ll dive into the Spring-specific metrics exposed under spring.kafka.listener and spring.kafka.template. We’ll also learn how to customize @KafkaListener and KafkaTemplate to add custom tags to these metrics through Spring configuration.

Lastly, we’ll discuss tracing and see how Spring Kafka makes it easy to propagate the tracing information generated by Micrometer. This allows us to track and correlate messages for better debugging and monitoring.

2. Setting up the Environment

For this article’s code samples, we’ll assume we’re creating the backend application for a blogging and learning website, like Baeldung.

We’ll use a simple Spring Boot application that uses the spring-kafka and spring-boot-starter-actuator dependencies:

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

In the src/test/resources/docker folder, we can find a docker-compose.yml file that can be used to start the application locally.

Additionally, the application is configured to expose all the metrics via actuator endpoints:

management.endpoints.web.exposure.include: '*'

At this point, we should be able to start the application locally, access the actuator at http://localhost:8081/actuator, and see all the exposed endpoints and metrics:

A display of the default actuator endpoint

In terms of functionality, our service provides an HTTP endpoint to create article comments. Once a comment is submitted, the application sends a message to the Kafka topic baeldung.article-comment.added, which lets us track Kafka producer metrics.
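For reference, the publishing path can be sketched roughly as follows. The controller shape and names here are illustrative, and the sample project likely maps an incoming DTO to the event before producing it:

@RestController
@RequestMapping("/api/articles/{slug}/comments")
public class ArticleCommentsController {

    private final KafkaTemplate<String, ArticleCommentAddedEvent> kafkaTemplate;

    public ArticleCommentsController(KafkaTemplate<String, ArticleCommentAddedEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @PostMapping
    public ResponseEntity<Void> addComment(@PathVariable String slug, @RequestBody ArticleCommentAddedEvent event) {
        // the article slug becomes the record key, the event body the payload
        kafkaTemplate.send("baeldung.article-comment.added", slug, event);
        return ResponseEntity.accepted().build();
    }
}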

Later, we’ll set up a Kafka listener to consume messages from the same topic. This will help us understand how to monitor the Kafka listener and what metrics it exposes.

3. Native Kafka Metrics

Just by adding the spring-boot-starter-actuator dependency, we’ll expose some Kafka metrics out of the box. This feature is available starting with Spring Boot version 2.5 – let’s focus on the native metrics for the Kafka producer and consumer.

3.1. Producer Metrics

We’ll need to make the application produce some events, so let’s create some comments through the REST API:

curl --location 'http://localhost:8081/api/articles/oop-best-practices/comments' \
--header 'Content-Type: application/json' \
--data '{
    "articleAuthor": "Andrey the Author",
    "comment": "Great article!",
    "commentAuthor": "Richard the Reader"
}'

After that, we can check the http://localhost:8081/actuator/metrics endpoint again. This time, we’ll see a number of Kafka producer metrics, including values for things like latency and failure rate:

  • kafka.producer.record.error.rate – how often record sends are failing
  • kafka.producer.request.latency.avg – average time it takes to complete a produce request
  • kafka.producer.buffer.exhausted.rate – how frequently the producer runs out of buffer space
  • kafka.producer.record.send.rate – rate at which records are sent to the broker
  • kafka.producer.requests.in.flight – number of unacknowledged produce requests currently in flight

To dive deeper, we can find a comprehensive list of all Kafka producer metrics in the Apache Kafka documentation.

Needless to say, we can explore each of these metrics by appending its name to the path. For instance, let’s access the endpoint monitoring the producer error rate:

A display of the producer error metrics
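The same data is also easy to query from the command line. The response below only illustrates the general shape of an Actuator metric; the exact values and tags depend on our local run:

curl 'http://localhost:8081/actuator/metrics/kafka.producer.record.error.rate'

{
  "name": "kafka.producer.record.error.rate",
  "measurements": [
    { "statistic": "VALUE", "value": 0.0 }
  ],
  "availableTags": [
    { "tag": "client.id", "values": [ "producer-1" ] },
    ...
  ]
}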

3.2. Consumer Metrics

Similar to the producer metrics, Micrometer records metrics related to the Kafka consumer and exposes them via the actuator.

To illustrate this, let’s add a @KafkaListener annotation to our application. For simplicity, it’ll listen to the same baeldung.article-comment.added topic:

@Component
public class ArticleCommentsListener {

    @KafkaListener(topics = "baeldung.article-comment.added")
    public void onArticleComment(ArticleCommentAddedEvent event) {
        // some logic here...
    }

}

Now, we can run the application, send a few more requests, and then check the actuator metrics again. This time, we should see several metrics related to the Kafka consumer. Among others, we’ll notice metrics such as:

  • kafka.consumer.fetch.manager.records.lag – how far behind the consumer is
  • kafka.consumer.fetch.manager.fetch.latency.avg – average time to fetch data from the broker
  • kafka.consumer.coordinator.rebalance.rate.per.hour – how often consumer group rebalancing occurs
  • kafka.consumer.last.poll.seconds.ago – time since the last poll from the consumer
  • kafka.consumer.time.between.poll.avg – average time between consecutive poll calls

The full list of consumer metrics we can expect to find is available in Kafka’s official documentation.

3.3. Adding Custom Tags

Spring Kafka also has a built-in API for enriching the native metrics with custom tags. For example, we can customize the producer metrics by adding a listener to the ProducerFactory bean:

@Bean
ProducerFactory<String, ArticleCommentAddedEvent> producerFactory(
    KafkaProperties kafkaProperties, MeterRegistry meterRegistry
) {
    DefaultKafkaProducerFactory<String, ArticleCommentAddedEvent> pf =
        new DefaultKafkaProducerFactory<>(kafkaProperties.buildProducerProperties());
    pf.addListener(
        new MicrometerProducerListener<>(
            meterRegistry,
            Collections.singletonList(new ImmutableTag("app-name", "article-comments-app"))
        )
    );
    return pf;
}

As we can see, this adds a custom tag to all Kafka producer metrics. Similarly, we can modify the ConsumerFactory, adding a MicrometerConsumerListener to attach custom tags to the consumer metrics.
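As a sketch, and assuming the serializer and deserializer settings already live in the application properties, the consumer-side counterpart might look like this:

@Bean
ConsumerFactory<String, ArticleCommentAddedEvent> consumerFactory(
    KafkaProperties kafkaProperties, MeterRegistry meterRegistry
) {
    DefaultKafkaConsumerFactory<String, ArticleCommentAddedEvent> cf =
        new DefaultKafkaConsumerFactory<>(kafkaProperties.buildConsumerProperties());
    // the same tag will now show up on all kafka.consumer.* metrics
    cf.addListener(
        new MicrometerConsumerListener<>(
            meterRegistry,
            Collections.singletonList(new ImmutableTag("app-name", "article-comments-app"))
        )
    );
    return cf;
}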

4. Monitoring KafkaTemplate

In our demo application, we use the KafkaTemplate bean to publish messages to Kafka. In addition to producer metrics, Micrometer also records metrics specific to KafkaTemplate and exposes them under the spring.kafka.template metric name.

We can customize the KafkaTemplate bean to add or update the tags associated with this metric. For example, the setMicrometerTags() method allows us to define tags as key-value pairs and attach them to a specific KafkaTemplate bean:

@Bean
@Qualifier("articleCommentsKafkaTemplate")
KafkaTemplate<String, ArticleCommentAddedEvent> articleCommentsKafkaTemplate(
    ProducerFactory<String, ArticleCommentAddedEvent> producerFactory
) {
    var template = new KafkaTemplate<>(producerFactory);
    template.setMicrometerTags(Map.of(
        "topic", "baeldung.article-comment.added"
    ));

    return template;
}

Moreover, we can use setMicrometerTagsProvider() to dynamically generate the tags for a given record. Let’s use this method to extract the record’s key and attach it as a tag:

template.setMicrometerTagsProvider(
    record -> Map.of("article-slug", record.key().toString())
);

We can now add a few more comments to different articles and then verify the http://localhost:8081/actuator/metrics/spring.kafka.template endpoint:

A display of the Kafka Template metrics

As expected, the data includes all recorded information about KafkaTemplate performance along with our custom tags.
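Since the custom tags are now part of the metric, we can also drill down to a single tag value using Actuator’s tag query parameter. For example, using the article slug from the earlier request:

curl 'http://localhost:8081/actuator/metrics/spring.kafka.template?tag=article-slug:oop-best-practices'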

5. Monitoring KafkaListener

Just like with KafkaTemplate, Micrometer also monitors KafkaListener and exposes related metrics under the spring.kafka.listener name. Spring Kafka maintains a consistent API, making it easy to configure custom tags for listener metrics.

The setMicrometerTags() and setMicrometerTagsProvider() methods can be used to attach these custom Micrometer tags, and are configured through the ContainerProperties of the ConcurrentKafkaListenerContainerFactory:

@Bean
ConcurrentKafkaListenerContainerFactory<String, String> customKafkaListenerContainerFactory(
    ConsumerFactory<String, String> consumerFactory
) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);

    ContainerProperties containerProps = factory.getContainerProperties();
    containerProps.setMicrometerTags(Map.of(
        "app-name", "article-comments-app"
    ));
    containerProps.setMicrometerTagsProvider(
        record -> Map.of("article-slug", record.key().toString())
    );
    return factory;
}

Additionally, we need to update our @KafkaListener annotation and point it to the modified containerFactory:

@KafkaListener(
    topics = "baeldung.article-comment.added",
    containerFactory = "customKafkaListenerContainerFactory"
)
public void onArticleComment(ArticleCommentAddedEvent event) {
    // ...
}

As a result, Micrometer attaches our static and dynamic tags to the metric, and Actuator exposes them through the http://localhost:8081/actuator/metrics/spring.kafka.listener endpoint.

6. Tracing Kafka Messages

Micrometer’s tracing feature helps us track the flow of a request by adding trace information to logs. This makes debugging and monitoring much easier.

We’ll then use Spring Kafka’s built-in features to propagate the tracing context across our system through message metadata.

6.1. Enriching the Logs

Micrometer uses the Mapped Diagnostic Context (MDC) to store tracing details in the form of two IDs: traceId and spanId.
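Note that these IDs are only generated when a Micrometer Tracing bridge is on the classpath in addition to Actuator. Assuming we opt for the Brave bridge, that means one extra dependency:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>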

We can observe this by placing a breakpoint in our REST controller and evaluating the expression MDC.getCopyOfContextMap() when the code is paused:

an IDE view of the spanId and traceId values in the MDC at request time

Having the two fields inside MDC allows us to easily enrich our logs with the tracing information. Let’s configure this in our logback.xml:

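A minimal version of this configuration might look like the following; the exact pattern is up to us, as long as it references the traceId and spanId MDC keys:

<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <!-- %X{key:-} prints the MDC value, or nothing when it's absent -->
            <pattern>[%thread] %X{traceId:-} %X{spanId:-} %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>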

As a result, whenever the traceId and spanId are present in the MDC context, they are logged next to the thread name.

6.2. Propagating the Context

The real benefit of adding tracing to our applications is the ability to propagate the tracing information across the system and correlate the different components. In other words, we need to use metadata, such as HTTP request headers or Kafka message headers, to pass along the traceId and spanId information.

First, let’s take a look at our KafkaTemplate. For each message, we should customize it to extract the traceId from the MDC and add it as a custom header. Luckily, this functionality is already supported — we just need to enable it by calling setObservationEnabled(true). Let’s apply this to our KafkaTemplate bean:

@Bean
KafkaTemplate<String, ArticleCommentAddedEvent> articleCommentsKafkaTemplate(
    ProducerFactory<String, ArticleCommentAddedEvent> producerFactory
) {
    var template = new KafkaTemplate<>(producerFactory);
    template.setObservationEnabled(true);
    // other config ...
    return template;
}

As a result, KafkaTemplate now adds the trace information as a message header with the key “traceparent”.
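The header value follows the W3C Trace Context format: a version, the trace ID, the parent span ID, and the trace flags. For a message produced by our application, it would look something like this:

traceparent: 00-680df9d4fcab49ea0511b54ff0f3ce9f-0511b54ff0f3ce9f-01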

On the listener side, we can already see the new traceparent header, but we still need to parse it and add it to the MDC. Similar to the producer, Spring Kafka can handle this for us if we enable observation at the container level:

@Bean
ConcurrentKafkaListenerContainerFactory<String, String> customKafkaListenerContainerFactory(
    ConsumerFactory<String, String> consumerFactory
) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);

    ContainerProperties containerProps = factory.getContainerProperties();
    containerProps.setObservationEnabled(true);
    // other config...
    return factory;
}

With this setup, we can run the application locally, send a few requests, and follow each one from the initial HTTP request to the Kafka message that the listener then processes on a different thread. The shared traceId is what lets us correlate the entire flow:

[http-nio-8081-exec-2] 680df9d4fcab49ea0511b54ff0f3ce9f 0511b54ff0f3ce9f INFO  HTTP Request received to save article comment: ArticleCommentAddedDto[...]
[org.s.kafka...#0-0-C-1] 680df9d4fcab49ea0511b54ff0f3ce9f de00d94a8258a1b9 INFO  Kafka Message Received: Comment Added: ArticleCommentAddedEvent[...]

Needless to say, this is just a simple example, but the feature becomes very useful for tracing requests across more complex systems with multiple services.

7. Conclusion

In this article, we explored the monitoring features provided by Spring Kafka. By combining Apache Kafka’s native metrics with the extended monitoring support provided by Spring Kafka and Micrometer, we gain a comprehensive view of our messaging system’s health and performance.

The native metrics give us low-level operational insights, while the Spring-specific metrics allow for more contextual, application-aware observability. We also saw how to enrich both kinds of metrics with custom tags.

Finally, we learned how to enable tracing and enrich our logs with the traceId and spanId information. We customized our Spring Kafka beans to propagate these fields via the traceparent message header, which allowed us to track the flow of messages across different components.

As always, the code presented in this article is available over on GitHub.
