Releases: pepperoni21/ollama-rs
v0.2.1
What's Changed
- Add streaming feature with chat history by @izyuumi in #55
- Store chat history with stream update by @ushinnary in #56
- Llama3.1 Function Calling + New Tools by @andthattoo in #59
- Create images_to_ollama.rs by @heydocode in #63
- Update embeddings generation to /api/embed endpoint and allow for batch embedding by @pepperoni21 in #61
- Update readme to provide compilable embeddings examples by @ephraimkunz in #69
- feat: Added suffix in completion request by @SpeedCrash100 in #68
- Fixed GenerateEmbeddingsResponse float type by @pepperoni21 in #71
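The move to the batched /api/embed endpoint (#61) means a single request can carry several inputs. A minimal sketch of what batch embedding might look like, assuming the `GenerateEmbeddingsRequest` and `EmbeddingsInput` type names, a `nomic-embed-text` model, and a local Ollama server; treat the exact module paths as assumptions, not the crate's confirmed API:

```rust
// Hypothetical sketch: batch embeddings against a local Ollama server.
// Type names and module paths are assumptions based on this release's notes.
use ollama_rs::{
    generation::embeddings::request::{EmbeddingsInput, GenerateEmbeddingsRequest},
    Ollama,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ollama = Ollama::default(); // defaults to http://localhost:11434

    // Several inputs in one request: the /api/embed endpoint accepts batches.
    let request = GenerateEmbeddingsRequest::new(
        "nomic-embed-text".to_string(),
        EmbeddingsInput::Multiple(vec![
            "Why is the sky blue?".to_string(),
            "Why is grass green?".to_string(),
        ]),
    );

    let response = ollama.generate_embeddings(request).await?;
    // One embedding vector (Vec<f32>, per the #71 fix) per input.
    println!("got {} embeddings", response.embeddings.len());
    Ok(())
}
```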
New Contributors
- @izyuumi made their first contribution in #55
- @heydocode made their first contribution in #63
- @ephraimkunz made their first contribution in #69
- @SpeedCrash100 made their first contribution in #68
Full Changelog: v0.2.0...v0.2.1
v0.2.0
What's Changed
- Replace host string and port with `Url` by @zeozeozeo in #45
- Flatten final response data by @erhant in #47
- Examples folder should contain working versions of the current release by @yasinldev in #48
- Revert "Examples folder should contain working versions of the current release" by @pepperoni21 in #49
- Ollama Function Calling by @andthattoo in #51
- OllamaError for function-calling by @andthattoo in #54
New Contributors
- @erhant made their first contribution in #47
- @yasinldev made their first contribution in #48
- @andthattoo made their first contribution in #51
Full Changelog: v0.1.9...v0.2.0
v0.1.9
What's Changed
- Add Serialize Support for Further Structs by @bencevans in #36
- fix: Adds back custom `Result` type for generate function by @jpmcb in #37
- update: update README streaming example by @milosgajdos in #39
- make `ModelInfo` fields empty by default by @zeozeozeo in #40
- make `GenerationContext` with pub field by @Vebnik in #43
New Contributors
- @bencevans made their first contribution in #36
- @jpmcb made their first contribution in #37
- @milosgajdos made their first contribution in #39
- @zeozeozeo made their first contribution in #40
- @Vebnik made their first contribution in #43
Full Changelog: v0.1.8...v0.1.9
v0.1.8
What's Changed
- feat: make GenerationOptions deserializable by @AlexisTM in #28
- feat: Add an example to show how to load GenerationOptions from a json string by @AlexisTM in #30
- Derive Eq, PartialEq on MessageRole by @Sir-Photch in #33
- Re-add Send for GenerationResponseStream by @functorism in #34
- feat: Store chat history by @ushinnary in #32
New Contributors
- @AlexisTM made their first contribution in #28
- @Sir-Photch made their first contribution in #33
- @ushinnary made their first contribution in #32
Full Changelog: v0.1.7...v0.1.8
v0.1.7
What's Changed
- Adds example on how to use completion options to README and /examples by @dezoito in #23
- Derives 'serialize' in GenerationResponse and GenerationFinalResponse by @dezoito in #25
- GenerationOptions 'stop' should be a string array, not string by @functorism in #26
New Contributors
- @dezoito made their first contribution in #23
- @functorism made their first contribution in #26
Full Changelog: v0.1.6...v0.1.7
v0.1.6
What's Changed
- Added the `Send` trait bound to all stream types by @varonroy in #14
- Fix return type for generate function by @SimonCW in #16
- Fix chatbot example link by @Rushmore75 in #18
- Fixed deserialization when streaming by @pepperoni21 in #13
New Contributors
- @varonroy made their first contribution in #14
- @SimonCW made their first contribution in #16
- @Rushmore75 made their first contribution in #18
Full Changelog: v0.1.5...v0.1.6
v0.1.5
Added image support:
- images are represented with the `Image` struct, which is a wrapper for the base64 encoding of the image
- using `images` and `add_image` methods of `GenerationRequest`
- using `with_images` and `add_image` methods of `ChatMessage`
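A minimal sketch of attaching an image to a completion request, assuming the module paths shown below, a multimodal model such as `llava`, and a running local Ollama server; the paths and model name are assumptions for illustration:

```rust
// Hypothetical sketch: attaching a base64-encoded image to a completion
// request. Module paths are assumptions; requires a running Ollama server.
use ollama_rs::{
    generation::{completion::request::GenerationRequest, images::Image},
    Ollama,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ollama = Ollama::default();

    // Image wraps the base64 encoding of the picture.
    let image = Image::from_base64("<base64-encoded image data>");

    let request =
        GenerationRequest::new("llava".to_string(), "What is in this picture?".to_string())
            .add_image(image);

    let response = ollama.generate(request).await?;
    println!("{}", response.response);
    Ok(())
}
```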
v0.1.4
- Added chat endpoint support with `send_chat_messages` and `send_chat_messages_stream` methods (see https://github.com/pepperoni21/ollama-rs/blob/master/examples/chat_api_chatbot.rs)
- Fixed final data not being parsed in the `GenerationResponse` due to a change from Ollama
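A minimal sketch of the new chat endpoint, assuming the request type and module paths shown below plus a running local Ollama server; the linked chat_api_chatbot.rs example is the crate's canonical usage:

```rust
// Hypothetical sketch of send_chat_messages; type paths are assumptions
// based on this release's notes. Requires a running Ollama server.
use ollama_rs::{
    generation::chat::{request::ChatMessageRequest, ChatMessage},
    Ollama,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ollama = Ollama::default();

    let request = ChatMessageRequest::new(
        "llama2".to_string(),
        vec![ChatMessage::user("Why is the sky blue?".to_string())],
    );

    let response = ollama.send_chat_messages(request).await?;
    println!("{:?}", response.message);
    Ok(())
}
```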
v0.1.3
- Added `rustls` support through the `rustls` feature
- `create_model` and `create_model_stream` methods now take a `CreateModelRequest` as the parameter, allowing a model to be created by passing the content of the Modelfile as a parameter. For example, this:
`ollama.create_model("model".into(), "/tmp/Modelfile.example".into())`
becomes:
`ollama.create_model(CreateModelRequest::path("model".into(), "/tmp/Modelfile.example".into()))`
v0.1.2
- Fixed `GenerateEmbeddingsResponse` struct by making the `embeddings` field accessible