This release includes many changes, including everything from the v0.3.0 release candidates.
## What's Changed
- fix openai content part media by @brainlid in #112
- ContentPart image media option updates by @brainlid in #113
- Updates for ContentPart images with messages to support ChatGPT's "detail" level option by @brainlid in #114
- add openai image endpoint support (aka DALL-E-2 & DALL-E-3) by @brainlid in #116
- allow PromptTemplates to convert to ContentParts by @brainlid in #117
- Fix elixir 1.17 warnings by @MrYawe in #123
- updates to README by @petrus-jvrensburg in #125
- Add ChatVertexAI by @raulchedrese in #124
- Major update. Preparing for v0.3.0-rc.0 - breaking changes by @brainlid in #131
- update calculator tool by @brainlid in #132
- support receiving rate limit info by @brainlid in #133
- upgrade abacus dep by @brainlid in #134
- add support for TokenUsage through callbacks by @brainlid in #137
- Big update - RC ready by @brainlid in #138
- Improvements to docs by @brainlid in #145
- ChatGoogleAI fixes and updates by @brainlid in #152
- fix: typespec error on Message.new_user/1 by @bwan-nan in #151
- Convert to use mimic for mocking calls by @brainlid in #155
- Remove ApiOverride reference in mix.exs project.docs by @stevehodgkiss in #157
- Fix OpenAI chat stream hanging by @stevehodgkiss in #156
- Fix streaming error when using Azure OpenAI Service by @stevehodgkiss in #158
- Update Azure OpenAI Service streaming fix by @stevehodgkiss in #161
- Fix ChatOllamaAI streaming response by @alappe in #162
- Fix PromptTemplate example by @joelpaulkoch in #167
- Adds OpenAI project authentication by @fbettag in #166
- Anthropic support for streamed tool calls with parameters by @brainlid in #169
- change return of LLMChain.run/2 - breaking change by @brainlid in #170
- 🐛 cast tool_calls arguments correctly inside message_deltas by @rparcus in #175
- Do not duplicate tool call parameters if they are identical by @michalwarda in #174
- Structured Outputs by supplying `strict: true` in #173
- feat: add OpenAI's new structured output API by @monotykamary in #180
- Support system instructions for Google AI by @elliotb in #182
- Handle empty text parts from GoogleAI responses by @elliotb in #181
- Handle missing token usage fields for Google AI by @elliotb in #184
- Handle functions with no parameters for Google AI by @elliotb in #183
- Add AWS Bedrock support to ChatAnthropic by @stevehodgkiss in #154
- Handle all possible finishReasons for ChatGoogleAI by @elliotb in #188
- Remove unused assignment from ChatGoogleAI by @elliotb in #187
- Add support for passing safety settings to Google AI by @elliotb in #186
- Add tool_choice for OpenAI and Anthropic by @avergin in #142
- add support for examples to title chain by @brainlid in #191
- add "processed_content" to ToolResult struct and support storing Elixir data from function results by @brainlid in #192
- Revamped error handling and handles Anthropic's "overload_error" by @brainlid in #194
- Documenting AWS Bedrock support with Anthropic Claude by @brainlid in #195
- Cancel a message delta when we receive "overloaded" error by @brainlid in #196
- implement initial support for fallbacks by @brainlid in #207
- Fix content-part encoding and decoding for Google API by @vkryukov in #212
- Fix specs and examples by @vkryukov in #211
- Ability to Summarize an LLM Conversation by @brainlid in #216
- Prepare for v0.3.0-rc.1 by @brainlid in #217
- add explicit message support in summarizer by @brainlid in #220
- Change abacus to optional dep by @nallwhy in #223
- Remove constraint of alternating user, assistant by @GenericJam in #222
- Breaking change: consolidate LLM callback functions by @brainlid in #228
- feat: Enable :inet6 for Req.new for Ollama by @mpope9 in #227
- fix: enable verbose_deltas in #197
- Prep for v0.3.0-rc.2 - update version and docs outline by @brainlid in #229
- Add Bumblebee Phi-4 by @marcnnn in #233
- feat: apply chat template from callback by @joelpaulkoch in #231
- support for o1 OpenAI model by @brainlid in #234
- feat: Support for Ollama keep_alive API parameter by @mpope9 in #237
- Add prompt caching support for Claude. by @montebrown in #226
- Add raw field to TokenUsage by @nallwhy in #236
- Add LLAMA 3.1 Json tool call with Bumblebee by @marcnnn in #198
- prep for v0.3.0 release by @brainlid in #238
## New Contributors
- @MrYawe made their first contribution in #123
- @petrus-jvrensburg made their first contribution in #125
- @raulchedrese made their first contribution in #124
- @bwan-nan made their first contribution in #151
- @stevehodgkiss made their first contribution in #157
- @alappe made their first contribution in #162
- @joelpaulkoch made their first contribution in #167
- @fbettag made their first contribution in #166
- @rparcus made their first contribution in #175
- @monotykamary made their first contribution in #180
- @elliotb made their first contribution in #182
- @avergin made their first contribution in #142
- @vkryukov made their first contribution in #212
- @nallwhy made their first contribution in #223
- @GenericJam made their first contribution in #222
- @mpope9 made their first contribution in #227
- @marcnnn made their first contribution in #233
- @montebrown made their first contribution in #226
**Full Changelog**: v0.2.0...v0.3.0
Thanks to all the contributors!