Commit: Improve build setup and docs

ThomasVitale committed Jan 28, 2024
1 parent b2dbb82 commit 174e831
Showing 26 changed files with 73 additions and 33 deletions.
6 changes: 6 additions & 0 deletions .sdkmanrc
@@ -0,0 +1,6 @@
+# Use sdkman to run "sdk env" to initialize with correct JDK version
+# Enable auto-env through the sdkman_auto_env config
+# See https://sdkman.io/usage#config
+# A summary is to add the following to ~/.sdkman/etc/config
+# sdkman_auto_env=true
+java=21.0.2-tem

4 changes: 2 additions & 2 deletions 01-chat-models/chat-models-ollama/README.md
@@ -29,7 +29,7 @@ class ChatController {
 
 The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
-### When using Ollama
+### Ollama as a native application
 
 First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
 Then, use Ollama to run the _llama2_ large language model.
@@ -44,7 +44,7 @@ Finally, run the Spring Boot application.
 ./gradlew bootRun
 ```
 
-### When using Docker/Podman
+### Ollama as a dev service with Testcontainers
 
 The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.

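To make the dev-service mechanics referenced in the hunk above concrete, here is a minimal sketch in Java of how such a Testcontainers setup can look. This is an illustration only, not the repository's actual code: the class names, the `ollama/ollama` image, and the `spring.ai.ollama.base-url` property are assumptions, and it presumes the `spring-boot-testcontainers` dependency plus the Spring Boot 3.1+ support for injecting a `DynamicPropertyRegistry` into `@Bean` methods of a test configuration.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

// Hypothetical dev-time companion class; names are illustrative, not from the commit.
@TestConfiguration(proxyBeanMethods = false)
class TestChatModelsOllamaApplication {

    @Bean
    GenericContainer<?> ollama(DynamicPropertyRegistry registry) {
        GenericContainer<?> ollama =
                new GenericContainer<>(DockerImageName.parse("ollama/ollama"));
        ollama.withExposedPorts(11434);
        // Resolve host and port lazily, after Spring Boot has started the container,
        // and point the Spring AI Ollama client at the containerized service.
        registry.add("spring.ai.ollama.base-url",
                () -> "http://%s:%d".formatted(ollama.getHost(), ollama.getMappedPort(11434)));
        return ollama;
    }

    public static void main(String[] args) {
        // ChatModelsOllamaApplication stands in for the module's Spring Boot main class.
        SpringApplication.from(ChatModelsOllamaApplication::main)
                .with(TestChatModelsOllamaApplication.class)
                .run(args);
    }
}
```

Pulling the _llama2_ model into the running container is elided here; the repository presumably handles that step in its own test setup.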
4 changes: 3 additions & 1 deletion 01-chat-models/chat-models-ollama/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 3 additions & 1 deletion 01-chat-models/chat-models-openai/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 2 additions & 2 deletions 02-prompts/prompts-basics-ollama/README.md
@@ -6,7 +6,7 @@ Prompting using simple text with LLMs via Ollama.
 
 The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
-### When using Ollama
+### Ollama as a native application
 
 First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
 Then, use Ollama to run the _llama2_ large language model.
@@ -21,7 +21,7 @@ Finally, run the Spring Boot application.
 ./gradlew bootRun
 ```
 
-### When using Docker/Podman
+### Ollama as a dev service with Testcontainers
 
 The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.

4 changes: 3 additions & 1 deletion 02-prompts/prompts-basics-ollama/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 3 additions & 1 deletion 02-prompts/prompts-basics-openai/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 2 additions & 2 deletions 02-prompts/prompts-messages-ollama/README.md
@@ -6,7 +6,7 @@ Prompting using structured messages and roles with LLMs via Ollama.
 
 The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
-### When using Ollama
+### Ollama as a native application
 
 First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
 Then, use Ollama to run the _llama2_ large language model.
@@ -21,7 +21,7 @@ Finally, run the Spring Boot application.
 ./gradlew bootRun
 ```
 
-### When using Docker/Podman
+### Ollama as a dev service with Testcontainers
 
 The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.

4 changes: 3 additions & 1 deletion 02-prompts/prompts-messages-ollama/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 3 additions & 1 deletion 02-prompts/prompts-messages-openai/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 2 additions & 2 deletions 02-prompts/prompts-templates-ollama/README.md
@@ -6,7 +6,7 @@ Prompting using templates with LLMs via Ollama.
 
 The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
-### When using Ollama
+### Ollama as a native application
 
 First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
 Then, use Ollama to run the _llama2_ large language model.
@@ -21,7 +21,7 @@ Finally, run the Spring Boot application.
 ./gradlew bootRun
 ```
 
-### When using Docker/Podman
+### Ollama as a dev service with Testcontainers
 
 The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.

4 changes: 3 additions & 1 deletion 02-prompts/prompts-templates-ollama/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 3 additions & 1 deletion 02-prompts/prompts-templates-openai/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 2 additions & 2 deletions 03-output-parsers/output-parsers-ollama/README.md
@@ -6,7 +6,7 @@ Parsing the LLM output as structured objects (Beans, Map, List) via Ollama.
 
 The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
-### When using Ollama
+### Ollama as a native application
 
 First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
 Then, use Ollama to run the _llama2_ large language model.
@@ -21,7 +21,7 @@ Finally, run the Spring Boot application.
 ./gradlew bootRun
 ```
 
-### When using Docker/Podman
+### Ollama as a dev service with Testcontainers
 
 The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.

4 changes: 3 additions & 1 deletion 03-output-parsers/output-parsers-ollama/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 3 additions & 1 deletion 03-output-parsers/output-parsers-openai/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 2 additions & 2 deletions 04-embedding-models/embedding-models-ollama/README.md
@@ -30,7 +30,7 @@ class EmbeddingController {
 
 The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
-### When using Ollama
+### Ollama as a native application
 
 First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
 Then, use Ollama to run the _llama2_ large language model.
@@ -45,7 +45,7 @@ Finally, run the Spring Boot application.
 ./gradlew bootRun
 ```
 
-### When using Docker/Podman
+### Ollama as a dev service with Testcontainers
 
 The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.

4 changes: 3 additions & 1 deletion 04-embedding-models/embedding-models-ollama/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 3 additions & 1 deletion 04-embedding-models/embedding-models-openai/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 2 additions & 2 deletions 05-document-readers/document-readers-json-ollama/README.md
@@ -6,7 +6,7 @@ Reading and vectorizing JSON documents with LLMs via Ollama.
 
 The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
-### When using Ollama
+### Ollama as a native application
 
 First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
 Then, use Ollama to run the _llama2_ large language model.
@@ -21,7 +21,7 @@ Finally, run the Spring Boot application.
 ./gradlew bootRun
 ```
 
-### When using Docker/Podman
+### Ollama as a dev service with Testcontainers
 
 The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.

4 changes: 3 additions & 1 deletion 05-document-readers/document-readers-json-ollama/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 2 additions & 2 deletions 05-document-readers/document-readers-pdf-ollama/README.md
@@ -6,7 +6,7 @@ Reading and vectorizing PDF documents with LLMs via Ollama.
 
 The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
-### When using Ollama
+### Ollama as a native application
 
 First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
 Then, use Ollama to run the _llama2_ large language model.
@@ -21,7 +21,7 @@ Finally, run the Spring Boot application.
 ./gradlew bootRun
 ```
 
-### When using Docker/Podman
+### Ollama as a dev service with Testcontainers
 
 The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.

4 changes: 3 additions & 1 deletion 05-document-readers/document-readers-pdf-ollama/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 2 additions & 2 deletions 05-document-readers/document-readers-text-ollama/README.md
@@ -6,7 +6,7 @@ Reading and vectorizing text documents with LLMs via Ollama.
 
 The application relies on Ollama for providing LLMs. You can either run Ollama locally on your laptop (macOS or Linux), or rely on the Testcontainers support in Spring Boot to spin up an Ollama service automatically.
 
-### When using Ollama
+### Ollama as a native application
 
 First, make sure you have [Ollama](https://ollama.ai) installed on your laptop (macOS or Linux).
 Then, use Ollama to run the _llama2_ large language model.
@@ -21,7 +21,7 @@ Finally, run the Spring Boot application.
 ./gradlew bootRun
 ```
 
-### When using Docker/Podman
+### Ollama as a dev service with Testcontainers
 
 The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama2_ model at startup time.

4 changes: 3 additions & 1 deletion 05-document-readers/document-readers-text-ollama/build.gradle
@@ -8,7 +8,9 @@ group = 'com.thomasvitale'
 version = '0.0.1-SNAPSHOT'
 
 java {
-    sourceCompatibility = '21'
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(21)
+    }
 }
 
 repositories {

4 changes: 4 additions & 0 deletions settings.gradle
@@ -1,3 +1,7 @@
+plugins {
+    id "org.gradle.toolchains.foojay-resolver-convention" version '0.7.0'
+}
+
 rootProject.name = 'llm-apps-java-spring-ai'
 
 include '01-chat-models:chat-models-ollama'
