
Use Mistral as default model with Ollama
ThomasVitale committed Jun 19, 2024
1 parent 2d54fb0 commit 5a5c6b0
Showing 35 changed files with 73 additions and 73 deletions.
8 changes: 4 additions & 4 deletions 01-chat-models/chat-models-ollama/README.md
@@ -31,10 +31,10 @@ The application relies on Ollama for providing LLMs. You can either run Ollama l
### Ollama as a native application

First, make sure you have [Ollama](https://ollama.ai) installed on your laptop.
-Then, use Ollama to run the _llama3_ large language model. That's what we'll use in this example.
+Then, use Ollama to run the _mistral_ large language model. That's what we'll use in this example.

```shell
-ollama run llama3
+ollama run mistral
```

Finally, run the Spring Boot application.
@@ -45,15 +45,15 @@ Finally, run the Spring Boot application.

### Ollama as a dev service with Testcontainers

-The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama3_ model at startup time.
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _mistral_ model at startup time.

```shell
./gradlew bootTestRun
```

## Calling the application

-You can now call the application that will use Ollama and _llama3_ to generate text based on a default prompt.
+You can now call the application that will use Ollama and _mistral_ to generate text based on a default prompt.
This example uses [httpie](https://httpie.io) to send HTTP requests.

```shell
@@ -33,7 +33,7 @@ String chatWithGenericOptions(@RequestParam(defaultValue = "What did Gandalf say
@GetMapping("/chat/ollama-options")
String chatWithOllamaOptions(@RequestParam(defaultValue = "What did Gandalf say to the Balrog?") String message) {
return chatModel.call(new Prompt(message, OllamaOptions.create()
.withModel("llama3")
.withModel("mistral")
.withRepeatPenalty(1.5f)))
.getResult().getOutput().getContent();
}
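
For context, the hunk above only swaps the model name passed to `OllamaOptions`. A complete controller in this style might look roughly like the sketch below; the class name, the plain `/chat` endpoint, and the import packages are assumptions (Spring AI 1.0 milestone layout), not code taken from this commit. Per-request options override the defaults from `application.yml`, which is why the commit updates both places.

```java
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.ollama.api.OllamaOptions;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ChatController { // hypothetical class name

    private final ChatModel chatModel;

    ChatController(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    // Relies on the default model configured in application.yml (now "mistral").
    @GetMapping("/chat")
    String chat(@RequestParam(defaultValue = "What did Gandalf say to the Balrog?") String message) {
        return chatModel.call(message);
    }

    // Overrides the configured default on a per-request basis via OllamaOptions.
    @GetMapping("/chat/ollama-options")
    String chatWithOllamaOptions(@RequestParam(defaultValue = "What did Gandalf say to the Balrog?") String message) {
        return chatModel.call(new Prompt(message, OllamaOptions.create()
                        .withModel("mistral")
                        .withRepeatPenalty(1.5f)))
                .getResult().getOutput().getContent();
    }
}
```
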
@@ -3,5 +3,5 @@ spring:
    ollama:
      chat:
        options:
-          model: llama3
+          model: mistral
          temperature: 0.7
@@ -15,7 +15,7 @@ public class TestChatModelsOllamaApplication {
@RestartScope
@ServiceConnection
OllamaContainer ollama() {
-return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-llama3")
+return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-mistral")
.asCompatibleSubstituteFor("ollama/ollama"));
}
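
The dev-service hunk above only swaps the container image. Put together with its imports and the `main` method that `./gradlew bootTestRun` invokes, the test application is roughly the sketch below; the production class name `ChatModelsOllamaApplication` and the exact packages are assumptions based on Spring Boot 3.1+ conventions, and it assumes Spring AI's Testcontainers integration supplies the `@ServiceConnection` details for the Ollama container.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.devtools.restart.RestartScope;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.boot.testcontainers.service.connection.ServiceConnection;
import org.springframework.context.annotation.Bean;
import org.testcontainers.ollama.OllamaContainer;
import org.testcontainers.utility.DockerImageName;

@TestConfiguration(proxyBeanMethods = false)
public class TestChatModelsOllamaApplication {

    @Bean
    @RestartScope
    @ServiceConnection
    OllamaContainer ollama() {
        // Custom image that already bundles the "mistral" model, declared as a
        // drop-in substitute for the official ollama/ollama image.
        return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-mistral")
                .asCompatibleSubstituteFor("ollama/ollama"));
    }

    public static void main(String[] args) {
        // Boots the regular application (hypothetical main class name) with the
        // Ollama container attached as a dev service.
        SpringApplication.from(ChatModelsOllamaApplication::main)
                .with(TestChatModelsOllamaApplication.class)
                .run(args);
    }
}
```
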

8 changes: 4 additions & 4 deletions 02-prompts/prompts-basics-ollama/README.md
@@ -9,10 +9,10 @@ The application relies on Ollama for providing LLMs. You can either run Ollama l
### Ollama as a native application

First, make sure you have [Ollama](https://ollama.ai) installed on your laptop.
-Then, use Ollama to run the _llama3_ large language model.
+Then, use Ollama to run the _mistral_ large language model.

```shell
-ollama run llama3
+ollama run mistral
```

Finally, run the Spring Boot application.
@@ -23,15 +23,15 @@ Finally, run the Spring Boot application.

### Ollama as a dev service with Testcontainers

-The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama3_ model at startup time.
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _mistral_ model at startup time.

```shell
./gradlew bootTestRun
```

## Calling the application

-You can now call the application that will use Ollama and _llama3_ to generate an answer to your questions.
+You can now call the application that will use Ollama and _mistral_ to generate an answer to your questions.
This example uses [httpie](https://httpie.io) to send HTTP requests.

```shell
@@ -3,5 +3,5 @@ spring:
    ollama:
      chat:
        options:
-          model: llama3
+          model: mistral
          temperature: 0.7
@@ -15,7 +15,7 @@ public class TestPromptBasicsOllamaApplication {
@RestartScope
@ServiceConnection
OllamaContainer ollama() {
-return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-llama3")
+return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-mistral")
.asCompatibleSubstituteFor("ollama/ollama"));
}

8 changes: 4 additions & 4 deletions 02-prompts/prompts-messages-ollama/README.md
@@ -9,10 +9,10 @@ The application relies on Ollama for providing LLMs. You can either run Ollama l
### Ollama as a native application

First, make sure you have [Ollama](https://ollama.ai) installed on your laptop.
-Then, use Ollama to run the _llama3_ large language model.
+Then, use Ollama to run the _mistral_ large language model.

```shell
-ollama run llama3
+ollama run mistral
```

Finally, run the Spring Boot application.
@@ -23,15 +23,15 @@ Finally, run the Spring Boot application.

### Ollama as a dev service with Testcontainers

-The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama3_ model at startup time.
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _mistral_ model at startup time.

```shell
./gradlew bootTestRun
```

## Calling the application

-You can now call the application that will use Ollama and _llama3_ to generate an answer to your questions.
+You can now call the application that will use Ollama and _mistral_ to generate an answer to your questions.
This example uses [httpie](https://httpie.io) to send HTTP requests.

```shell
@@ -3,5 +3,5 @@ spring:
    ollama:
      chat:
        options:
-          model: llama3
+          model: mistral
          temperature: 0.7
@@ -15,7 +15,7 @@ public class TestPromptsMessagesOllamaApplication {
@RestartScope
@ServiceConnection
OllamaContainer ollama() {
-return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-llama3")
+return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-mistral")
.asCompatibleSubstituteFor("ollama/ollama"));
}
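
The prompts-messages module changes only the default model; the technique it demonstrates, building a prompt from separate system and user messages, looks roughly like this illustrative sketch (class, method names, and message text are not taken from the commit).

```java
import java.util.List;

import org.springframework.ai.chat.messages.Message;
import org.springframework.ai.chat.messages.SystemMessage;
import org.springframework.ai.chat.messages.UserMessage;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.prompt.Prompt;

class MessagesExample {

    // Combines a system message (instructions) with the user's question into a
    // single Prompt; the model defaults to "mistral" via application.yml.
    String chatWithMessages(ChatModel chatModel, String question) {
        var systemMessage = new SystemMessage("""
                You are a helpful assistant. Answer concisely and factually.
                """);
        var userMessage = new UserMessage(question);
        List<Message> messages = List.of(systemMessage, userMessage);
        var prompt = new Prompt(messages);
        return chatModel.call(prompt).getResult().getOutput().getContent();
    }
}
```
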

8 changes: 4 additions & 4 deletions 02-prompts/prompts-templates-ollama/README.md
@@ -9,10 +9,10 @@ The application relies on Ollama for providing LLMs. You can either run Ollama l
### Ollama as a native application

First, make sure you have [Ollama](https://ollama.ai) installed on your laptop.
-Then, use Ollama to run the _llama3_ large language model.
+Then, use Ollama to run the _mistral_ large language model.

```shell
-ollama run llama3
+ollama run mistral
```

Finally, run the Spring Boot application.
@@ -23,15 +23,15 @@ Finally, run the Spring Boot application.

### Ollama as a dev service with Testcontainers

-The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama3_ model at startup time.
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _mistral_ model at startup time.

```shell
./gradlew bootTestRun
```

## Calling the application

-You can now call the application that will use Ollama and _llama3_ to generate an answer to your questions.
+You can now call the application that will use Ollama and _mistral_ to generate an answer to your questions.
This example uses [httpie](https://httpie.io) to send HTTP requests.

```shell
@@ -3,5 +3,5 @@ spring:
    ollama:
      chat:
        options:
-          model: llama3
+          model: mistral
          temperature: 0.7
@@ -15,7 +15,7 @@ public class TestPromptsTemplatesOllamaApplication {
@RestartScope
@ServiceConnection
OllamaContainer ollama() {
-return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-llama3")
+return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-mistral")
.asCompatibleSubstituteFor("ollama/ollama"));
}
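
The prompts-templates module follows the same pattern. As a reminder of what it demonstrates, a templated prompt is typically built along these lines; the template text and method name are illustrative only, not taken from this commit.

```java
import java.util.Map;

import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.prompt.PromptTemplate;

class TemplatesExample {

    // Fills the {genre} placeholder and sends the resulting Prompt to the
    // default Ollama model (now "mistral").
    String composers(ChatModel chatModel, String genre) {
        var promptTemplate = new PromptTemplate("""
                List three well-known composers in the {genre} genre.
                Answer with the names only, one per line.
                """);
        var prompt = promptTemplate.create(Map.of("genre", genre));
        return chatModel.call(prompt).getResult().getOutput().getContent();
    }
}
```
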

8 changes: 4 additions & 4 deletions 03-output-converters/output-converters-ollama/README.md
@@ -9,10 +9,10 @@ The application relies on Ollama for providing LLMs. You can either run Ollama l
### Ollama as a native application

First, make sure you have [Ollama](https://ollama.ai) installed on your laptop.
-Then, use Ollama to run the _llama3_ large language model.
+Then, use Ollama to run the _mistral_ large language model.

```shell
-ollama run llama3
+ollama run mistral
```

Finally, run the Spring Boot application.
@@ -23,15 +23,15 @@ Finally, run the Spring Boot application.

### Ollama as a dev service with Testcontainers

-The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama3_ model at startup time.
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _mistral_ model at startup time.

```shell
./gradlew bootTestRun
```

## Calling the application

-You can now call the application that will use Ollama and _llama3_ to generate an answer to your questions.
+You can now call the application that will use Ollama and _mistral_ to generate an answer to your questions.
This example uses [httpie](https://httpie.io) to send HTTP requests.

```shell
@@ -3,5 +3,5 @@ spring:
    ollama:
      chat:
        options:
-          model: llama3
+          model: mistral
          temperature: 0.7
@@ -15,7 +15,7 @@ public class TestOutputParsersOllamaApplication {
@RestartScope
@ServiceConnection
OllamaContainer ollama() {
-return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-llama3")
+return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-mistral")
.asCompatibleSubstituteFor("ollama/ollama"));
}
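
The output-converters module is about mapping the model's free-text reply onto Java objects. A minimal sketch of that technique is shown below, assuming the `BeanOutputConverter` API from the Spring AI 1.0 milestones (earlier milestones called this an output parser, which the test class name above still reflects); the record, template, and method names are illustrative, not code from this commit.

```java
import java.util.List;
import java.util.Map;

import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.prompt.PromptTemplate;
import org.springframework.ai.converter.BeanOutputConverter;

class OutputConverterExample {

    record ArtistInfo(String name, List<String> albums) {}

    // Asks the model to answer in the JSON shape described by the converter,
    // then converts the reply into an ArtistInfo instance.
    ArtistInfo artistInfo(ChatModel chatModel, String artist) {
        var outputConverter = new BeanOutputConverter<>(ArtistInfo.class);
        var prompt = new PromptTemplate("""
                Tell me about {artist} and two of their albums.
                {format}
                """).create(Map.of("artist", artist, "format", outputConverter.getFormat()));
        var reply = chatModel.call(prompt).getResult().getOutput().getContent();
        return outputConverter.convert(reply);
    }
}
```
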

8 changes: 4 additions & 4 deletions 04-embedding-models/embedding-models-ollama/README.md
@@ -32,10 +32,10 @@ The application relies on Ollama for providing LLMs. You can either run Ollama l
### Ollama as a native application

First, make sure you have [Ollama](https://ollama.ai) installed on your laptop.
-Then, use Ollama to run the _llama3_ large language model.
+Then, use Ollama to run the _mistral_ large language model.

```shell
-ollama run llama3
+ollama run mistral
```

Finally, run the Spring Boot application.
@@ -46,15 +46,15 @@ Finally, run the Spring Boot application.

### Ollama as a dev service with Testcontainers

-The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama3_ model at startup time.
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _mistral_ model at startup time.

```shell
./gradlew bootTestRun
```

## Calling the application

-You can now call the application that will use Ollama and llama3 to generate a vector representation (embeddings) of a default text.
+You can now call the application that will use Ollama and mistral to generate a vector representation (embeddings) of a default text.
This example uses [httpie](https://httpie.io) to send HTTP requests.

```shell
@@ -27,7 +27,7 @@ String embed(@RequestParam(defaultValue = "And Gandalf yelled: 'You shall not pa
@GetMapping("/embed/ollama-options")
String embedWithOllamaOptions(@RequestParam(defaultValue = "And Gandalf yelled: 'You shall not pass!'") String message) {
var embeddings = embeddingModel.call(new EmbeddingRequest(List.of(message), OllamaOptions.create()
.withModel("llama3")))
.withModel("mistral")))
.getResult().getOutput();
return "Size of the embedding vector: " + embeddings.size();
}
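
As with the chat controller, this hunk only swaps the model name inside `OllamaOptions`. A full embedding controller in this style might look roughly like the sketch below; the class name and the plain `/embed` mapping are assumptions, and it assumes the milestone API where the embedding output is a `List<Double>` (hence the reported `size()`).

```java
import java.util.List;

import org.springframework.ai.embedding.EmbeddingModel;
import org.springframework.ai.embedding.EmbeddingRequest;
import org.springframework.ai.ollama.api.OllamaOptions;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class EmbeddingController { // hypothetical class name

    private final EmbeddingModel embeddingModel;

    EmbeddingController(EmbeddingModel embeddingModel) {
        this.embeddingModel = embeddingModel;
    }

    // Uses the default embedding model from application.yml (now "mistral").
    @GetMapping("/embed")
    String embed(@RequestParam(defaultValue = "And Gandalf yelled: 'You shall not pass!'") String message) {
        var embeddings = embeddingModel.embed(message);
        return "Size of the embedding vector: " + embeddings.size();
    }

    // Overrides the embedding model per request via OllamaOptions.
    @GetMapping("/embed/ollama-options")
    String embedWithOllamaOptions(@RequestParam(defaultValue = "And Gandalf yelled: 'You shall not pass!'") String message) {
        var embeddings = embeddingModel.call(new EmbeddingRequest(List.of(message), OllamaOptions.create()
                        .withModel("mistral")))
                .getResult().getOutput();
        return "Size of the embedding vector: " + embeddings.size();
    }
}
```
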
@@ -3,4 +3,4 @@ spring:
    ollama:
      embedding:
        options:
-          model: llama3
+          model: mistral
@@ -15,7 +15,7 @@ public class TestEmbeddingModelsOllamaApplication {
@RestartScope
@ServiceConnection
OllamaContainer ollama() {
-return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-llama3")
+return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-mistral")
.asCompatibleSubstituteFor("ollama/ollama"));
}

8 changes: 4 additions & 4 deletions 05-etl-pipeline/document-readers-json-ollama/README.md
@@ -9,10 +9,10 @@ The application relies on Ollama for providing LLMs. You can either run Ollama l
### Ollama as a native application

First, make sure you have [Ollama](https://ollama.ai) installed on your laptop.
-Then, use Ollama to run the _llama3_ large language model.
+Then, use Ollama to run the _mistral_ large language model.

```shell
-ollama run llama3
+ollama run mistral
```

Finally, run the Spring Boot application.
@@ -23,15 +23,15 @@ Finally, run the Spring Boot application.

### Ollama as a dev service with Testcontainers

-The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama3_ model at startup time.
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _mistral_ model at startup time.

```shell
./gradlew bootTestRun
```

## Calling the application

-You can now call the application that will use Ollama and _llama3_ to load JSON documents as embeddings and generate an answer to your questions based on those documents (RAG pattern).
+You can now call the application that will use Ollama and _mistral_ to load JSON documents as embeddings and generate an answer to your questions based on those documents (RAG pattern).
This example uses [httpie](https://httpie.io) to send HTTP requests.

```shell
@@ -3,7 +3,7 @@ spring:
    ollama:
      chat:
        options:
-          model: llama3
+          model: mistral
      embedding:
        options:
-          model: llama3
+          model: mistral
@@ -15,7 +15,7 @@ public class TestDocumentReadersJsonOllamaApplication {
@RestartScope
@ServiceConnection
OllamaContainer ollama() {
-return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-llama3")
+return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-mistral")
.asCompatibleSubstituteFor("ollama/ollama"));
}
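
The document-readers-json module is the first RAG-style example in this commit: JSON documents are read, embedded through the configured Ollama model (now _mistral_), and stored for similarity search. The sketch below is a rough illustration of that flow, assuming Spring AI's `JsonReader` and `SimpleVectorStore` APIs and a hypothetical `description` JSON key; it is not code from this repository.

```java
import java.util.List;

import org.springframework.ai.document.Document;
import org.springframework.ai.embedding.EmbeddingModel;
import org.springframework.ai.reader.JsonReader;
import org.springframework.ai.vectorstore.SimpleVectorStore;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.core.io.Resource;

class JsonIngestionExample {

    // ETL: read the JSON file, turn the selected fields into Documents, and store
    // them; embeddings are computed via the configured Ollama embedding model.
    VectorStore ingest(EmbeddingModel embeddingModel, Resource jsonResource) {
        var vectorStore = new SimpleVectorStore(embeddingModel);
        var jsonReader = new JsonReader(jsonResource, "description"); // "description" is a hypothetical key
        List<Document> documents = jsonReader.get();
        vectorStore.add(documents);
        return vectorStore;
    }

    // Retrieval: find the documents most similar to the user's question.
    List<Document> retrieve(VectorStore vectorStore, String question) {
        return vectorStore.similaritySearch(question);
    }
}
```
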

8 changes: 4 additions & 4 deletions 05-etl-pipeline/document-readers-pdf-ollama/README.md
@@ -9,10 +9,10 @@ The application relies on Ollama for providing LLMs. You can either run Ollama l
### Ollama as a native application

First, make sure you have [Ollama](https://ollama.ai) installed on your laptop.
-Then, use Ollama to run the _llama3_ large language model.
+Then, use Ollama to run the _mistral_ large language model.

```shell
-ollama run llama3
+ollama run mistral
```

Finally, run the Spring Boot application.
@@ -23,15 +23,15 @@ Finally, run the Spring Boot application.

### Ollama as a dev service with Testcontainers

-The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _llama3_ model at startup time.
+The application relies on the native Testcontainers support in Spring Boot to spin up an Ollama service with a _mistral_ model at startup time.

```shell
./gradlew bootTestRun
```

## Calling the application

-You can now call the application that will use Ollama and _llama3_ to load PDF documents as embeddings and generate an answer to your questions based on those documents (RAG pattern).
+You can now call the application that will use Ollama and _mistral_ to load PDF documents as embeddings and generate an answer to your questions based on those documents (RAG pattern).
This example uses [httpie](https://httpie.io) to send HTTP requests.

```shell
@@ -3,7 +3,7 @@ spring:
    ollama:
      chat:
        options:
-          model: llama3
+          model: mistral
      embedding:
        options:
-          model: llama3
+          model: mistral
@@ -15,7 +15,7 @@ public class TestDocumentReadersPdfOllamaApplication {
@RestartScope
@ServiceConnection
OllamaContainer ollama() {
-return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-llama3")
+return new OllamaContainer(DockerImageName.parse("ghcr.io/thomasvitale/ollama-mistral")
.asCompatibleSubstituteFor("ollama/ollama"));
}
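
The PDF variant follows the same ETL-then-RAG flow: read pages, split them into chunks, index the chunks, then stuff retrieved chunks into the prompt sent to _mistral_ via Ollama. A condensed sketch under those assumptions follows; the reader, splitter, and template text are illustrative rather than taken from this commit.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.prompt.PromptTemplate;
import org.springframework.ai.document.Document;
import org.springframework.ai.reader.pdf.PagePdfDocumentReader;
import org.springframework.ai.transformer.splitter.TokenTextSplitter;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.core.io.Resource;

class PdfRagExample {

    // ETL: read PDF pages, split them into token-sized chunks, index the chunks.
    void ingest(VectorStore vectorStore, Resource pdf) {
        List<Document> pages = new PagePdfDocumentReader(pdf).get();
        List<Document> chunks = new TokenTextSplitter().apply(pages);
        vectorStore.add(chunks);
    }

    // RAG: retrieve relevant chunks and pass them as context to the chat model.
    String answer(ChatModel chatModel, VectorStore vectorStore, String question) {
        String context = vectorStore.similaritySearch(question).stream()
                .map(Document::getContent)
                .collect(Collectors.joining(System.lineSeparator()));
        var prompt = new PromptTemplate("""
                Answer the question using only the following context:
                {context}

                Question: {question}
                """).create(Map.of("context", context, "question", question));
        return chatModel.call(prompt).getResult().getOutput().getContent();
    }
}
```
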

