Last updated: May 12, 2025
NoSQL databases have become a popular choice for building an application’s persistence layer.
Amazon DynamoDB is one such serverless and fully managed NoSQL database provided by Amazon Web Services (AWS). For nearly a decade, DynamoDB has established itself as one of the most popular and widely used NoSQL databases in the cloud due to its scalability, flexibility, and performance.
The core components we interact with when working with DynamoDB are tables, items, and attributes. A table is a collection of items, and each item is a collection of attributes.
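For example, an item in a users table might look like the following (the attribute names and values here are purely illustrative):

```json
{
    "id": "f3b0c7e2-8a14-4d2a-9c61-0d5e7a1b2c3d",
    "name": "Alice",
    "email": "alice@example.com"
}
```

Each key-value pair is an attribute, and items in the same table don't all need to have the same attributes, apart from the primary key.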
In this tutorial, we’ll explore integrating Amazon DynamoDB into a Spring Boot application.
Before we can start interacting with the Amazon DynamoDB service, we’ll need to include the necessary dependency and configure our application correctly.
We’ll be using Spring Cloud AWS to establish a connection and interact with the DynamoDB service, rather than using the DynamoDB SDK provided by AWS directly. Spring Cloud AWS is a wrapper around the official AWS SDKs, which significantly simplifies configuration and provides simple methods to interact with AWS services.
Let’s start by adding the DynamoDB starter dependency from Spring Cloud AWS to our project’s pom.xml file:
<dependency>
    <groupId>io.awspring.cloud</groupId>
    <artifactId>spring-cloud-aws-starter-dynamodb</artifactId>
    <version>3.3.0</version>
</dependency>
Next, let’s also include the Spring Cloud AWS BOM (Bill of Materials) to manage the version of the DynamoDB starter in our pom.xml:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.awspring.cloud</groupId>
            <artifactId>spring-cloud-aws-dependencies</artifactId>
            <version>3.3.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
With this addition, we can now remove the version tag from our starter dependency.
The BOM ensures version compatibility between the declared dependencies, avoids conflicts, and makes it easier to update dependency versions in the future.
Now, to interact with the DynamoDB service, we need to configure our AWS credentials for authentication and the AWS region where we’ve provisioned our table.
We’ll configure these properties in our application.yaml file:
spring:
  cloud:
    aws:
      dynamodb:
        region: ${AWS_REGION}
      credentials:
        access-key: ${AWS_ACCESS_KEY}
        secret-key: ${AWS_SECRET_KEY}
We use the ${} property placeholder to load the values of our properties from environment variables.
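For instance, before starting the application, we can export these environment variables in our shell. The values below are placeholders for illustration only, not real credentials:

```shell
# Placeholder values; replace with your actual AWS region and IAM credentials
export AWS_REGION="eu-central-1"
export AWS_ACCESS_KEY="my-access-key"
export AWS_SECRET_KEY="my-secret-key"
```

Keeping credentials out of the configuration file prevents them from being committed to version control by accident.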
Now, let’s define a simple User entity class that represents the data model for our DynamoDB table:
@DynamoDbBean
public class User {

    private UUID id;
    private String name;
    private String email;

    @DynamoDbPartitionKey
    public UUID getId() {
        return id;
    }

    // standard setters and getters
}
Here, we annotate our User class with the @DynamoDbBean annotation, marking it as an entity that can be mapped to a DynamoDB table. It's important to note that Spring Cloud AWS cannot map table attributes that are package-private, so the User entity and its corresponding getter and setter methods must be declared public.
Additionally, we configure the id field as the partition key, i.e., the primary key of our table, by annotating the getId() method with the @DynamoDbPartitionKey annotation. We should remember to place this annotation on the getter method, not on the field itself.
DynamoDB also supports composite primary keys through the use of optional sort keys. We can define a composite primary key by annotating the corresponding getter methods of the additional fields with the @DynamoDbSortKey annotation. We won’t be using sort keys in this tutorial, but it’s good to be aware of their existence.
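For reference, an entity with a composite primary key could be sketched as follows. This is a hypothetical Order entity, not part of this tutorial, where customerId is the partition key and orderId is the sort key:

```java
// Hypothetical entity for illustration only; uses the AWS SDK enhanced client
// annotations from software.amazon.awssdk.enhanced.dynamodb.mapper.annotations
@DynamoDbBean
public class Order {

    private String customerId;
    private String orderId;

    @DynamoDbPartitionKey
    public String getCustomerId() {
        return customerId;
    }

    @DynamoDbSortKey
    public String getOrderId() {
        return orderId;
    }

    // standard setters and getters
}
```

With such a key, all items sharing a partition key value are stored together and ordered by the sort key, which enables efficient range queries within a partition.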
By default, Spring Cloud AWS converts the entity class name into its snake case representation to determine the corresponding DynamoDB table name. However, this convention might not always align with our naming conventions or requirements.
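To illustrate the convention, the class name is converted roughly as shown below. This is a simplified sketch for demonstration purposes, not the library's actual implementation:

```java
import java.util.Locale;

public class TableNameConvention {

    // Simplified illustration of the class-name to snake_case conversion;
    // not the actual Spring Cloud AWS implementation
    static String toSnakeCase(String className) {
        return className
          .replaceAll("([a-z0-9])([A-Z])", "$1_$2")
          .toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(toSnakeCase("User"));        // user
        System.out.println(toSnakeCase("UserProfile")); // user_profile
    }
}
```

So, by default, our User entity would map to a table named user.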
We can define a custom bean that implements the DynamoDbTableNameResolver interface to override this default behavior.
Let’s start by creating a custom annotation to specify the table name directly on our entity class:
@Target(TYPE)
@Retention(RUNTIME)
@interface TableName {

    String name();
}
We create a simple @TableName annotation, which takes a single name attribute where we can specify the desired DynamoDB table name.
Next, let’s create our custom DynamoDbTableNameResolver implementation:
@Component
class CustomTableNameResolver implements DynamoDbTableNameResolver {

    @Override
    public <T> String resolve(Class<T> clazz) {
        TableName tableName = clazz.getAnnotation(TableName.class);
        if (tableName == null) {
            throw new IllegalArgumentException("No @TableName annotation found on class: " + clazz.getSimpleName());
        }
        return tableName.name();
    }
}
Here, we override the resolve() method in our CustomTableNameResolver class. We retrieve the name attribute value of the @TableName annotation applied to the entity class and use it as the table name.
Finally, let’s annotate our User class with the new @TableName annotation:
@DynamoDbBean
@TableName(name = "users")
public class User {
    // ...
}
With this configuration, Spring Cloud AWS will use our CustomTableNameResolver class to determine that the User entity maps to the users table in DynamoDB.
During development, it’s often convenient to test our application locally. LocalStack is a popular tool that allows us to run an emulated AWS environment locally on our machine. We’ll use Testcontainers to set up the LocalStack service in our application.
The prerequisite for running the LocalStack service via Testcontainers is an active Docker instance. We need to ensure this prerequisite is met when running the test suite either locally or when using a CI/CD pipeline.
First, let’s add the necessary test dependencies to our pom.xml:
<dependency>
<groupId>io.awspring.cloud</groupId>
<artifactId>spring-cloud-aws-testcontainers</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>localstack</artifactId>
<scope>test</scope>
</dependency>
We import the Spring Cloud AWS Testcontainers dependency and the LocalStack module of Testcontainers. These dependencies provide us with the necessary classes to spin up ephemeral Docker instances for the LocalStack service.
Next, we’ll need to provision a DynamoDB table that our application can interact with. LocalStack can create the required AWS resources when the container starts via Initialization Hooks.
Let’s create an init-dynamodb-table.sh bash script for this purpose inside our src/test/resources folder:
#!/bin/bash

table_name="users"
partition_key="id"

awslocal dynamodb create-table \
    --table-name "$table_name" \
    --key-schema AttributeName="$partition_key",KeyType=HASH \
    --attribute-definitions AttributeName="$partition_key",AttributeType=S \
    --billing-mode PAY_PER_REQUEST

echo "DynamoDB table '$table_name' created successfully with partition key '$partition_key'"
echo "Executed init-dynamodb-table.sh"
The above script creates a DynamoDB table with the name users. We use the awslocal command inside the shell script, which is a wrapper around the AWS CLI that points to the LocalStack service. We end the script by writing a few echo statements to confirm the script’s successful execution.
We’ll copy this script to the /etc/localstack/init/ready.d path inside the LocalStack container for execution in the upcoming section.
Next, let’s create a @TestConfiguration class that defines our Testcontainers bean:
@TestConfiguration(proxyBeanMethods = false)
class TestcontainersConfiguration {

    @Bean
    @ServiceConnection
    LocalStackContainer localStackContainer() {
        return new LocalStackContainer(DockerImageName.parse("localstack/localstack:4.3.0"))
          .withServices(LocalStackContainer.Service.DYNAMODB)
          .withCopyFileToContainer(
            MountableFile.forClasspathResource("init-dynamodb-table.sh", 0744),
            "/etc/localstack/init/ready.d/init-dynamodb-table.sh")
          .waitingFor(Wait.forLogMessage(".*Executed init-dynamodb-table.sh.*", 1));
    }
}
We pin a specific version of the LocalStack Docker image when creating our LocalStackContainer bean, rather than relying on the latest tag, to keep our builds reproducible.
Then, we enable the DynamoDB service and copy our bash script into the container so that the table is created on startup. Additionally, we configure a wait strategy that blocks until the Executed init-dynamodb-table.sh statement defined in our init script is printed.
We also annotate our bean method with the @ServiceConnection annotation, which dynamically registers all the properties required to set up a connection with the started LocalStack container. Now, we can use this configuration by annotating our test classes with the @Import(TestcontainersConfiguration.class) annotation.
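As an example, a test class wired up this way could look like the following sketch. The class name is illustrative, and the snippet assumes Spring Boot test support and Spring Cloud AWS are on the classpath:

```java
@Import(TestcontainersConfiguration.class)
@SpringBootTest
class UserLiveTest {

    @Autowired
    private DynamoDbTemplate dynamoDbTemplate;

    // test methods interacting with the users table
}
```

The imported configuration starts the LocalStack container once, and the @ServiceConnection metadata points the autowired DynamoDbTemplate at it automatically.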
Now that we have our local environment set up, let’s use the DynamoDbTemplate bean that Spring Cloud AWS automatically creates for us to interact with the users table we’ve provisioned.
Let’s start by creating a new User item in the provisioned DynamoDB table:
User user = Instancio.create(User.class);
dynamoDbTemplate.save(user);

Key partitionKey = Key.builder().partitionValue(user.getId().toString()).build();
User retrievedUser = dynamoDbTemplate.load(partitionKey, User.class);

assertThat(retrievedUser)
  .isNotNull()
  .usingRecursiveComparison()
  .isEqualTo(user);
We use Instancio to create a new User object with random test data. Then, we persist it to our provisioned table using the save() method of the DynamoDbTemplate bean.
To verify this operation, we create a Key object using the user’s partition key value and pass it to the load() method. Then, we assert that the retrievedUser matches the original user we persisted.
Next, let’s update an existing User item:
String updatedName = RandomString.make();
String updatedEmail = RandomString.make();
user.setName(updatedName);
user.setEmail(updatedEmail);
dynamoDbTemplate.update(user);

Key partitionKey = Key.builder().partitionValue(user.getId().toString()).build();
User updatedUser = dynamoDbTemplate.load(partitionKey, User.class);

assertThat(updatedUser.getName()).isEqualTo(updatedName);
assertThat(updatedUser.getEmail()).isEqualTo(updatedEmail);
Here, we update the name and email attributes of an already persisted User object. We call the update() method of the DynamoDbTemplate bean to persist the changes. We assert that the attributes have been modified correctly by retrieving the updated user.
Finally, let’s delete a User item from our table:
dynamoDbTemplate.delete(user);

Key partitionKey = Key.builder().partitionValue(user.getId().toString()).build();
User deletedUser = dynamoDbTemplate.load(partitionKey, User.class);

assertThat(deletedUser).isNull();
We call the delete() method of the DynamoDbTemplate bean, passing the User object to be deleted. Then, we attempt to retrieve the deleted object using the same partition key and assert that it no longer exists in the table.
We’ve already seen that we can use the load() method of the DynamoDbTemplate bean to fetch an item from the table using its partition key.
However, we may also need to retrieve multiple items or filter them based on non-key attributes. DynamoDB provides the scan operation to achieve this.
First, let’s see how to retrieve all User items from the provisioned table:
int numberOfUsers = 10;
for (int i = 0; i < numberOfUsers; i++) {
    User user = Instancio.create(User.class);
    dynamoDbTemplate.save(user);
}

List<User> retrievedUsers = dynamoDbTemplate.scanAll(User.class)
  .items()
  .stream()
  .toList();

assertThat(retrievedUsers.size()).isEqualTo(numberOfUsers);
We save multiple User objects to our table and then use the scanAll() method to retrieve all items. We assert that the number of retrieved users matches the number of users we initially saved.
By default, if a scan result exceeds 1 MB in size, DynamoDB paginates the response and returns a token to fetch the next page. However, Spring Cloud AWS handles this pagination for us behind the scenes and returns all the items stored in the table.
Additionally, we can perform a scan operation with a filter expression to retrieve specific items:
Expression expression = Expression.builder()
  .expression("#email = :email")
  .putExpressionName("#email", "email")
  .putExpressionValue(":email", AttributeValue.builder().s(user.getEmail()).build())
  .build();

ScanEnhancedRequest scanRequest = ScanEnhancedRequest.builder()
  .filterExpression(expression)
  .build();

User retrievedUser = dynamoDbTemplate.scan(scanRequest, User.class)
  .items()
  .stream()
  .findFirst()
  .orElseThrow();

assertThat(retrievedUser)
  .isNotNull()
  .usingRecursiveComparison()
  .isEqualTo(user);
Here, we create an Expression object with a filter condition that matches the email attribute of a specific User. We wrap this expression inside a ScanEnhancedRequest object and pass it to the scan() method. Finally, we assert that the retrievedUser matches the user we’d initially saved.
We’ve used the LocalStack emulator for our demonstration. However, when working against the real DynamoDB service, we’ll need to assign the following IAM policy to the IAM user we’ve configured in our application:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:GetItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
                "dynamodb:Scan"
            ],
            "Resource": "arn:aws:dynamodb:REGION:ACCOUNT_ID:table/users"
        }
    ]
}
Here, we grant permissions for the specific actions our application performs on the users table. Our IAM policy conforms to the least privilege principle, granting only the necessary permissions required by our application to function correctly.
We should remember to replace the REGION and ACCOUNT_ID placeholders with the actual values in the Resource ARN.
In this article, we’ve explored integrating Amazon DynamoDB into a Spring Boot application using Spring Cloud AWS.
We walked through the necessary configuration and defined our data model along with a custom table name resolver. Then, using Testcontainers, we started an ephemeral Docker container for the LocalStack service to create a local test environment.
Finally, we used the DynamoDbTemplate to perform basic CRUD operations and scan operations on our provisioned DynamoDB table and discussed the required IAM permissions.
As always, all the code examples used in this article are available over on GitHub.