
Last updated: May 7, 2025
Businesses often need to extract meaningful data from various types of audio content, such as transcribing customer support calls for sentiment analysis, generating subtitles for videos, or producing meeting minutes. However, manually transcribing audio files is time-consuming and expensive.
To automate this process, OpenAI offers powerful speech-to-text models capable of accurately transcribing audio files in multiple languages.
In this tutorial, we’ll explore how to transcribe audio files with OpenAI’s speech-to-text models using Spring AI.
To follow along with this tutorial, we’ll need an OpenAI API key.
Before we can start implementing our audio transcriber, we’ll need to include the necessary dependency and configure our application correctly.
Let’s start by adding Spring AI’s OpenAI starter dependency to our project’s pom.xml file:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
    <version>1.0.0-M7</version>
</dependency>
Since the current version, 1.0.0-M7, is a milestone release, we’ll also need to add the Spring Milestones repository to our pom.xml:
<repositories>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
    </repository>
</repositories>
This repository is where milestone versions are published, as opposed to the standard Maven Central repository.
Next, let’s configure the OpenAI API key and speech-to-text model in our application.yaml file:
spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
      audio:
        transcription:
          options:
            model: whisper-1
            language: en
We use the ${} property placeholder to load the value of our API key from an environment variable.
Here, we specify OpenAI's Whisper model via its model ID, whisper-1. It's important to note that OpenAI offers more advanced, higher-quality speech-to-text models like gpt-4o-transcribe and gpt-4o-mini-transcribe. However, they're not supported by the current version of Spring AI.
Additionally, we specify en as the language of our audio files. Alternatively, we can set a different input language in ISO-639-1 format, depending on our requirements. If we don't specify a language, the model attempts to detect the spoken language automatically.
Once we configure the above properties, Spring AI automatically creates a bean of type OpenAiAudioTranscriptionModel, allowing us to interact with the specified model.
With our configurations in place, let’s create an AudioTranscriber service class. We’ll inject the OpenAiAudioTranscriptionModel bean that Spring AI automatically creates for us.
But first, let’s define two simple records to represent the request and response payloads:
record TranscriptionRequest(MultipartFile audioFile, @Nullable String context) {}
record TranscriptionResponse(String transcription) {}
The TranscriptionRequest contains the audioFile to be transcribed and an optional context to help the model with the transcription process. It’s important to note that OpenAI currently supports audio files in mp3, mp4, mpeg, mpga, m4a, wav, and webm formats.
Similarly, the TranscriptionResponse simply holds the generated transcription.
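With the payload records defined, here's a minimal sketch of what the AudioTranscriber class itself might look like, assuming constructor injection; the @Service annotation and field name are our own choices:
@Service
class AudioTranscriber {

    private final OpenAiAudioTranscriptionModel openAiAudioTranscriptionModel;

    // Spring injects the auto-configured transcription model bean
    AudioTranscriber(OpenAiAudioTranscriptionModel openAiAudioTranscriptionModel) {
        this.openAiAudioTranscriptionModel = openAiAudioTranscriptionModel;
    }
}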
Now, let’s implement the intended functionality:
TranscriptionResponse transcribe(TranscriptionRequest transcriptionRequest) {
    AudioTranscriptionPrompt prompt = new AudioTranscriptionPrompt(
        transcriptionRequest.audioFile().getResource(),
        OpenAiAudioTranscriptionOptions
            .builder()
            .prompt(transcriptionRequest.context())
            .build()
    );
    AudioTranscriptionResponse response = openAiAudioTranscriptionModel.call(prompt);
    return new TranscriptionResponse(response.getResult().getOutput());
}
Here, we add a new transcribe() method to our AudioTranscriber class.
We create an AudioTranscriptionPrompt object using the audioFile resource and the optional context prompt. Then, we use it to invoke the call() method of the autowired OpenAiAudioTranscriptionModel bean.
Finally, we extract the transcribed text from the response and return it wrapped in our TranscriptionResponse record.
Currently, for the speech-to-text models, the audio file size is limited to 25 MB. However, by default, Spring Boot limits the size of uploaded files to 1 MB. Let’s increase this limit in our application.yaml file:
spring:
  servlet:
    multipart:
      max-file-size: 25MB
      max-request-size: 25MB
We set the maximum file size and request size to 25MB, which should be sufficient for most audio transcription requests.
Now that we’ve implemented our service layer, let’s expose a REST API on top of it:
@PostMapping("/transcribe")
ResponseEntity<TranscriptionResponse> transcribe(
    @RequestParam("audioFile") MultipartFile audioFile,
    @RequestParam(value = "context", required = false) String context
) {
    TranscriptionRequest transcriptionRequest = new TranscriptionRequest(audioFile, context);
    TranscriptionResponse response = audioTranscriber.transcribe(transcriptionRequest);
    return ResponseEntity.ok(response);
}
Since the context is optional in our TranscriptionRequest record, we mark the corresponding request parameter as not required.
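For completeness, this endpoint lives inside a controller class; a minimal wrapper, with the class name assumed, might look like this:
@RestController
class TranscriptionController {

    private final AudioTranscriber audioTranscriber;

    // Spring injects our AudioTranscriber service
    TranscriptionController(AudioTranscriber audioTranscriber) {
        this.audioTranscriber = audioTranscriber;
    }

    // the transcribe() endpoint from above goes here
}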
Next, let’s use the HTTPie CLI to invoke the above API endpoint:
http -f POST :8080/transcribe audioFile@<path-to-audio-file> context="Short description about Baeldung"
Here, we invoke our /transcribe API and send the audioFile along with its context. For our demonstration, we’ve prepared an audio file that provides a short description of Baeldung. This file can be found in the src/test/resources/audio folder of the codebase.
Let’s see what we get as a response:
{
    "transcription": "Baeldung is a top-notch educational platform that specializes in Java, Spring, and related technologies. It offers a wealth of tutorials, articles, and courses that help developers master programming concepts. Known for its clear examples and practical guides, Baeldung is a go-to resource for developers looking to level up their skills."
}
As we can see, the API returns the correct transcription of the provided audio file.
Notice how providing the context prompt helps the model correctly transcribe the Baeldung name. Without this context, the Whisper model transcribes the word as Baildung.
In this article, we’ve explored transcribing audio files with OpenAI in Spring AI.
We walked through the necessary configuration and implemented an audio transcriber using OpenAI’s Whisper speech-to-text model. We also tested our application and saw how providing a context prompt improves the accuracy of the generated transcription, especially for domain-specific names.
As always, all the code examples used in this article are available over on GitHub.