- Introduction
- Contract testing with Pact
- Running the Application
- Running Locally via Docker Compose
- Deploying to Kubernetes
This is the Narration REST API microservice. It is a blocking HTTP microservice using the Quarkus LangChain4j extension to integrate with an AI service to generate text narrating a given fight.
The Narration microservice needs to access an AI service to generate the text narrating the fight. The default codebase uses OpenAI (via the `quarkus-langchain4j-openai` extension). This extension could be swapped for the `quarkus-langchain4j-azure-openai` extension with little to no code change to connect to Azure OpenAI.
Additionally, the service can generate images and image captions from a narration using DALL-E.
Note
Azure OpenAI, or "OpenAI on Azure", is a service that provides REST API access to OpenAI's models, including the GPT-4, GPT-3, Codex, and Embeddings series. The difference between OpenAI and Azure OpenAI is that Azure OpenAI runs on Azure's global infrastructure, which meets production needs for critical enterprise security, compliance, and regional availability.
This service is implemented using RESTEasy Reactive with blocking endpoints. Additionally, this application favors constructor injection of beans over field injection (i.e. the `@Inject` annotation).
The following table lists the available REST endpoints. The OpenAPI document for the REST endpoints is also available.
| Path | HTTP method | Response Status | Response Object | Description |
|---|---|---|---|---|
| `/api/narration` | `POST` | `200` | `String` | Creates a narration for the `Fight` passed in the request body |
| `/api/narration` | `POST` | `400` | | Invalid `Fight` passed in |
| `/api/narration/image` | `POST` | `200` | `FightImage` | Generates an image and caption using DALL-E for a narration |
| `/api/narration/image` | `POST` | `400` | | Invalid narration passed in |
| `/api/narration/hello` | `GET` | `200` | `String` | Ping "hello" endpoint |
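As a quick illustration, here's one way to exercise the narration endpoint from the command line. The `Fight` field names below are an assumption for illustration only; consult the OpenAPI document above for the authoritative schema.

```shell
# Write a sample Fight payload to a temp file.
# NOTE: these field names are assumptions; check the OpenAPI document
# for the real Fight schema before using this against the service.
cat > /tmp/fight.json <<'EOF'
{
  "winnerName": "Chewbacca",
  "winnerLevel": 14,
  "winnerPowers": "Agility, Longevity",
  "winnerTeam": "heroes",
  "loserName": "Wanderer",
  "loserLevel": 3,
  "loserPowers": "Teleportation",
  "loserTeam": "villains"
}
EOF

# Sanity-check that the payload is valid JSON before sending it
python3 -m json.tool /tmp/fight.json > /dev/null && echo "payload ok"

# With the service running on port 8087, request a narration:
# curl -X POST -H 'Content-Type: application/json' \
#      -d @/tmp/fight.json http://localhost:8087/api/narration
```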
Pact is a code-first tool for testing HTTP and message integrations using contract tests. Contract tests assert that inter-application messages conform to a shared understanding that is documented in a contract. Without contract testing, the only way to ensure that applications will work correctly together is by using expensive and brittle integration tests.
Eric Deandrea and Holly Cummins recently spoke about contract testing with Pact and used the Quarkus Superheroes for their demos. Watch the replay and view the slides if you'd like to learn more about contract testing.
The `rest-narration` application is a Pact Provider, and as such should run provider verification tests against contracts produced by consumers.
As this README states, contracts generally should be hosted in a Pact Broker and then automatically discovered in the provider verification tests.
One of the main goals of the Superheroes application is to be super simple and to just "work" for anyone who clones this repo. That being said, we can't make any assumptions about where a Pact Broker may be or any of the credentials required to access it.
Therefore, the Pact contract is committed into this application's source tree inside the `src/test/resources/pacts` directory. In a realistic scenario, if a broker wasn't used, the consumer's CI/CD would commit the contracts into this repository's source control.
The Pact tests use the Quarkus Pact extension. This extension is recommended to give the best user experience and ensure compatibility.
The application runs on port `8087` (defined by `quarkus.http.port` in `application.properties`).
From the `quarkus-super-heroes/rest-narration` directory, simply run `./mvnw quarkus:dev` to run Quarkus Dev Mode, or run `quarkus dev` using the Quarkus CLI. The application will be exposed at http://localhost:8087 and the Quarkus Dev UI will be exposed at http://localhost:8087/q/dev.
Currently, the only supported OpenAI providers are OpenAI itself and the Microsoft Azure OpenAI Service. The application uses OpenAI via the `quarkus-langchain4j-openai` extension as its default. This integration requires creating resources, either on OpenAI or Azure, in order to work properly.
For Azure, the `create-azure-openai-resources.sh` script can be used to create the required Azure resources. It will output all the necessary configuration. Similarly, the `delete-azure-openai-resources.sh` script can be used to delete those resources.
Caution
Using Azure OpenAI or OpenAI may not be free for you, so please be aware of this! Unless configured otherwise, this application does NOT communicate with any external service. Instead, by default, it simply returns a canned default narration.
Because of this integration, and our goal of keeping this application working at all times, the OpenAI integration is disabled by default and a default narration is returned instead. In dev mode, the Quarkus WireMock extension serves a default response.
If you'd like to make live calls to an OpenAI provider, set the `-Dquarkus.profile=openai` or `-Dquarkus.profile=azure-openai` property. This will turn off the Quarkus WireMock functionality and set the application back up to talk to the OpenAI provider. You still need to specify your provider-specific properties, though.
A demo video, Superheroes.AI.mp4, shows what the UI looks like with this integration turned on.
Dev Mode (OpenAI profile):

```shell
quarkus dev --clean -Dquarkus.profile=openai -Dquarkus.langchain4j.openai.api-key=my-key
```

Running via `java -jar`:

```shell
./mvnw clean package -DskipTests
java -Dquarkus.profile=openai -Dquarkus.langchain4j.openai.api-key=my-key -jar target/quarkus-app/quarkus-run.jar
```
Dev Mode (Azure OpenAI profile):

```shell
quarkus dev --clean -Dquarkus.profile=azure-openai -Dquarkus.langchain4j.azure-openai.api-key=my-key -Dquarkus.langchain4j.azure-openai.resource-name=my-resource-name -Dquarkus.langchain4j.azure-openai.deployment-name=my-deployment-name
```

Running via `java -jar`:

```shell
./mvnw clean package -DskipTests -Dquarkus.profile=azure-openai
java -Dquarkus.profile=azure-openai -Dquarkus.langchain4j.azure-openai.api-key=my-key -Dquarkus.langchain4j.azure-openai.resource-name=my-resource-name -Dquarkus.langchain4j.azure-openai.deployment-name=my-deployment-name -jar target/quarkus-app/quarkus-run.jar
```
Note
The application still has resiliency built-in in case of failures.
To enable the OpenAI integration, the following properties must be set, either in `application.properties` or as environment variables:
| Description | Environment Variable | Java Property | Value |
|---|---|---|---|
| OpenAI API Key | `QUARKUS_LANGCHAIN4J_OPENAI_API_KEY` | `quarkus.langchain4j.openai.api-key` | Your OpenAI API key |
For the Azure OpenAI provider, set the following instead:

| Description | Environment Variable | Java Property | Value |
|---|---|---|---|
| Set the Azure OpenAI profile | `QUARKUS_PROFILE` | `quarkus.profile` | `azure-openai` |
| Azure Cognitive Services account key | `QUARKUS_LANGCHAIN4J_AZURE_OPENAI_API_KEY` | `quarkus.langchain4j.azure-openai.api-key` | Your Azure OpenAI key |
| The Azure OpenAI resource name | `QUARKUS_LANGCHAIN4J_AZURE_OPENAI_RESOURCE_NAME` | `quarkus.langchain4j.azure-openai.resource-name` | Your Azure OpenAI resource name |
| Azure Cognitive Services deployment name | `QUARKUS_LANGCHAIN4J_AZURE_OPENAI_DEPLOYMENT_NAME` | `quarkus.langchain4j.azure-openai.deployment-name` | Your Azure OpenAI deployment id/name |
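Pulling the Azure table together, an equivalent profile-scoped block in `application.properties` might look like the following sketch. All values are placeholders; use the output of `create-azure-openai-resources.sh`.

```properties
# Activate with -Dquarkus.profile=azure-openai (or QUARKUS_PROFILE=azure-openai);
# the values below are placeholders
%azure-openai.quarkus.langchain4j.azure-openai.api-key=my-azure-openai-key
%azure-openai.quarkus.langchain4j.azure-openai.resource-name=my-resource-name
%azure-openai.quarkus.langchain4j.azure-openai.deployment-name=my-deployment-name
```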
Pre-built images for this application can be found at `quay.io/quarkus-super-heroes/rest-narration`.
Pick one of the versions of the application from the table below and execute the appropriate docker compose command from the `quarkus-super-heroes/rest-narration` directory.
| Description | Image Tag | Docker Compose Run Command |
|---|---|---|
| JVM Java 21 | `java21-latest` | `docker compose -f deploy/docker-compose/java21.yml up --remove-orphans` |
| JVM Java 21 (Azure OpenAI) | `java21-latest-azure-openai` | Modify the image in `deploy/docker-compose/java21.yml`, update environment variables, then run `docker compose -f deploy/docker-compose/java21.yml up --remove-orphans` |
| Native | `native-latest` | `docker compose -f deploy/docker-compose/native.yml up --remove-orphans` |
| Native (Azure OpenAI) | `native-latest-azure-openai` | Modify the image in `deploy/docker-compose/native.yml`, update environment variables, then run `docker compose -f deploy/docker-compose/native.yml up --remove-orphans` |
Important
The running application will NOT make live calls to an OpenAI provider. You will need to modify the descriptors accordingly to have the application make live calls to an OpenAI provider.
For the Azure OpenAI variants listed above, you first need to modify the image in the appropriate Docker Compose descriptor to use the `-azure-openai` tag. Then you need to update the environment variables according to the Azure OpenAI properties.
These Docker Compose files are meant for standing up this application only. If you want to stand up the entire system, follow these instructions.
Once started, the application will be exposed at http://localhost:8087.
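A quick way to verify the container is up is to ping the hello endpoint. This sketch only assumes the default port mentioned above:

```shell
# Ping the hello endpoint of the running service; print a hint if it is down
if curl -fsS http://localhost:8087/api/narration/hello 2>/dev/null; then
  echo ""
  echo "hello endpoint OK"
else
  echo "service not reachable on localhost:8087; start it with one of the docker compose commands above"
fi
echo "smoke check done"
```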
The application can be deployed to Kubernetes using pre-built images or by deploying directly via the Quarkus Kubernetes Extension. Each of these is discussed below.
Pre-built images for this application can be found at `quay.io/quarkus-super-heroes/rest-narration`.
Deployment descriptors for these images are provided in the `deploy/k8s` directory. There are versions for OpenShift, Minikube, Kubernetes, and Knative.
Note
The Knative variant can be used on any Knative installation that runs on top of Kubernetes or OpenShift. For OpenShift, you need OpenShift Serverless installed from the OpenShift operator catalog. Using Knative has the benefit that services are scaled down to zero replicas when they are not used.
Pick one of the versions of the application from the table below and deploy the appropriate descriptor from the `deploy/k8s` directory.
| Description | Image Tag | OpenShift Descriptor | Minikube Descriptor | Kubernetes Descriptor | Knative Descriptor |
|---|---|---|---|---|---|
| JVM Java 21 | `java21-latest` | `java21-openshift.yml` | `java21-minikube.yml` | `java21-kubernetes.yml` | `java21-knative.yml` |
| Native | `native-latest` | `native-openshift.yml` | `native-minikube.yml` | `native-kubernetes.yml` | `native-knative.yml` |
Important
As with the Docker Compose descriptors above, the running application will NOT make live calls to an OpenAI provider. You will need to modify the descriptors accordingly to have the application make live calls to an OpenAI provider.
Additionally, there are also `java21-latest-azure-openai` and `native-latest-azure-openai` image tags available. To use them, you need to modify the Kubernetes descriptor manually before deploying: first change the image to use the appropriate tag, then update the environment variables according to the Azure OpenAI properties.
The application is exposed outside of the cluster on port `80`.
These descriptors are for this application only. If you want to deploy the entire system, follow these instructions.
Following the deployment section of the Quarkus Kubernetes Extension Guide (or the deployment section of the Quarkus OpenShift Extension Guide if deploying to OpenShift), you can run one of the following commands to deploy the application and any of its dependencies (see Kubernetes (and variants) resource generation of the automation strategy document) to your preferred Kubernetes distribution.
Note
For non-OpenShift or Minikube Kubernetes variants, you will most likely need to push the image to a container registry by adding the `-Dquarkus.container-image.push=true` flag, as well as setting the `quarkus.container-image.registry`, `quarkus.container-image.group`, and/or `quarkus.container-image.name` properties to different values.
| Target Platform | Java Version | Command |
|---|---|---|
| Kubernetes | 21 | `./mvnw clean package -Dquarkus.profile=kubernetes -Dquarkus.kubernetes.deploy=true -DskipTests` |
| OpenShift | 21 | `./mvnw clean package -Dquarkus.profile=openshift -Dquarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000 -Dquarkus.container-image.group=$(oc project -q) -Dquarkus.kubernetes.deploy=true -DskipTests` |
| Minikube | 21 | `./mvnw clean package -Dquarkus.profile=minikube -Dquarkus.kubernetes.deploy=true -DskipTests` |
| Knative | 21 | `./mvnw clean package -Dquarkus.profile=knative -Dquarkus.kubernetes.deploy=true -DskipTests` |
| Knative (on OpenShift) | 21 | `./mvnw clean package -Dquarkus.profile=knative-openshift -Dquarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000 -Dquarkus.container-image.group=$(oc project -q) -Dquarkus.kubernetes.deploy=true -DskipTests` |
You may need to adjust other configuration options as well (see Quarkus Kubernetes Extension configuration options and Quarkus OpenShift Extension configuration options).
Tip
The `do_build` function in the `generate-k8s-resources.sh` script uses these extensions to generate the manifests in the `deploy/k8s` directory.