All notable changes to this project will be documented in this file.
This project adheres to Semantic Versioning.
> [!NOTE]
> We have updated our changelog format! The changes related to the Colang language and runtime have moved to the CHANGELOG-Colang file.
- Support Output Rails Streaming (#966, #1003)
- Add unified output mapping for actions (#965)
- Add output rails support to the ActiveFence integration (#940)
- Add Prompt Security integration (#920)
- Add PII masking capability to the PrivateAI integration (#901)
- Add `embedding_params` to `BasicEmbeddingsIndex` (#898)
- Add score threshold to `AnalyzerEngine` (#845)
- Fix dependency resolution issues in AlignScore Dockerfile (#1002, #982)
- Fix JailbreakDetect Docker files (#981, #1001)
- Fix TypeError from attempting to unpack an already-unpacked dictionary (#959)
- Fix token stats usage in LLM call info (#953)
- Handle unescaped quotes in `generate_value` using `safe_eval` (#946)
- Handle non-relative file paths (#897)
- Set workdir to models and specify entrypoint explicitly (#1001)
- Output streaming (#976)
- Fix typos with `oauthtoken` (#957)
- Fix broken link in the Prompt Security documentation (#978)
- Update advanced user guides per v0.11.1 doc release (#937)
- ContentSafety: Add ContentSafety NIM connector (#930) by @prasoonvarshney
- TopicControl: Add TopicControl NIM connector (#930) by @makeshn
- JailbreakDetect: Add jailbreak detection NIM connector (#930) by @erickgalinkin
- AutoAlign Integration: Add further enhancements and refactoring to AutoAlign integration (#867) by @KimiJL
- PrivateAI Integration: Fix incomplete URL substring sanitization error (#883) by @NJ-186
- NVIDIA Blueprint: Add Safeguarding AI Virtual Assistant NIM Blueprint NemoGuard NIMs (#932) by @abodhankar
- ActiveFence Integration: Fix flow definition in community docs (#890) by @noamlevy81
- Observability: Add observability support with different backend options (#844) by @Pouyanpi
- Private AI Integration: Add Private AI integration (#815) by @letmerecall
- Patronus Evaluate API Integration: Add Patronus Evaluate API integration (#834) by @varjoshi
- railsignore: Add support for a `.railsignore` file (#790) by @ajanitshimanga
- Sandboxed Environment in Jinja2: Add sandboxed environment in Jinja2 (#799) by @Pouyanpi
- LangChain 0.3 support: Upgrade LangChain to version 0.3 (#784) by @Pouyanpi
- Python 3.8: Drop support for Python 3.8 (#803) by @Pouyanpi
- vllm: Bump vllm from 0.2.7 to 0.5.5 for llama_guard and patronusai (#836)
- Guardrails Library documentation: Fix a typo in the guardrails library documentation (#793) by @vedantnaik19
- Contributing Guide: Fix incorrect folder name & pre-commit setup in CONTRIBUTING.md (#800)
- Contributing Guide: Add the correct Python command version in documentation (#801) by @ravinder-tw
- retrieve chunk action: Fix presence of a newline in the retrieve chunk action (#809) by @Pouyanpi
- Standard Library import: Fix guardrails standard library import path in Colang 2.0 (#835) by @Pouyanpi
- AlignScore Dockerfile: Add nltk's punkt_tab in align_score Dockerfile (#841) by @yonromai
- Eval dependencies: Make pandas version constraint explicit for eval optional dependency (#847) by @Pouyanpi
- tests: Mock PromptSession to prevent console error (#851) by @Pouyanpi
- Streaming: Handle multiple output parsers in generation (#854) by @Pouyanpi
- User Guide: Update role from bot to assistant (#852) by @Pouyanpi
- Installation Guide: Update optional dependencies install (#853) by @Pouyanpi
- Documentation Restructuring: Restructure the docs and several style enhancements (#855) by @Pouyanpi
- Got It AI deprecation: Add deprecation notice for Got It AI integration (#857) by @mlmonk
- Colang 2.0-beta.4 patch
- content safety: Implement content safety module (#674) by @Pouyanpi
- migration tool: Enhance migration tool capabilities (#624) by @Pouyanpi
- Cleanlab Integration: Add Cleanlab's Trustworthiness Score (#572) by @AshishSardana
- Colang 2: LLM chat interface development (#709) by @schuellc-nvidia
- embeddings: Add relevant chunk support to Colang 2 (#708) by @Pouyanpi
- library: Migrate Cleanlab to Colang 2 and add exception handling (#714) by @Pouyanpi
- Colang debug library: Develop debugging tools for Colang (#560) by @schuellc-nvidia
- debug CLI: Extend debugging command-line interface (#717) by @schuellc-nvidia
- embeddings: Add support for embeddings only with search threshold (#733) by @Pouyanpi
- embeddings: Add embedding-only support to Colang 2 (#737) by @Pouyanpi
- embeddings: Add relevant chunks prompts (#745) by @Pouyanpi
- gcp moderation: Implement GCP-based moderation tools (#727) by @kauabh
- migration tool: Sample conversation syntax conversion (#764) by @Pouyanpi
- llmrails: Add serialization support for LLMRails (#627) by @Pouyanpi
- exceptions: Initial support for exception handling (#384) by @drazvan
- evaluation tooling: Develop new evaluation tools (#677) by @drazvan
- Eval UI: Add support for tags in the Evaluation UI (#731) by @drazvan
- guardrails library: Launch Colang 2.0 Guardrails Library (#689) by @drazvan
- configuration: Revert abc bot to Colang v1 and separate v2 configuration (#698) by @drazvan
- api: Update Pydantic validators (#688) by @Pouyanpi
- standard library: Refactor and migrate standard library components (#625) by @Pouyanpi
- Upgrade langchain-core and jinja2 dependencies (#766) by @Pouyanpi
- documentation: Fix broken links (#670) by @buvnswrn
- hallucination-check: Correct hallucination-check functionality (#679) by @Pouyanpi
- streaming: Fix NVIDIA AI endpoints streaming issues (#654) by @Pouyanpi
- hallucination-check: Resolve non-OpenAI hallucination check issue (#681) by @Pouyanpi
- import error: Fix Streamlit import error (#686) by @Pouyanpi
- prompt override: Fix override prompt self-check facts (#621) by @Pouyanpi
- output parser: Resolve deprecation warning in output parser (#691) by @Pouyanpi
- patch: Fix langchain_nvidia_ai_endpoints patch (#697) by @Pouyanpi
- runtime issues: Address Colang 2 runtime issues (#699) by @schuellc-nvidia
- send event: Change 'send event' to 'send' (#701) by @Pouyanpi
- output parser: Fix output parser validation (#704) by @Pouyanpi
- passthrough_fn: Pass config and kwargs to passthrough_fn runnable (#695) by @vpr1995
- rails exception: Fix rails exception migration (#705) by @Pouyanpi
- migration: Replace hyphens and apostrophes in migration (#725) by @Pouyanpi
- flow generation: Fix LLM flow continuation generation (#724) by @schuellc-nvidia
- server command: Fix CLI server command (#723) by @Pouyanpi
- embeddings filesystem: Fix cache embeddings filesystem (#722) by @Pouyanpi
- outgoing events: Process all outgoing events (#732) by @sklinglernv
- generate_flow: Fix a small bug in the generate_flow action for Colang 2 (#710) by @drazvan
- triggering flow id: Fix the detection of the triggering flow id (#728) by @drazvan
- LLM output: Fix multiline LLM output syntax error for dynamic flow generation (#748) by @radinshayanfar
- scene form: Fix the scene form and choice flows in the Colang 2 standard library (#741) by @sklinglernv
- Cleanlab: Update community documentation for Cleanlab integration (#713) by @Pouyanpi
- rails exception handling: Add notes for Rails exception handling in Colang 2.x (#744) by @Pouyanpi
- LLM per task: Document LLM per task functionality (#676) by @Pouyanpi
- relevant_chunks: Add the `relevant_chunks` to the GPT-3.5 general prompt template (#678) by @drazvan
- flow names: Ensure flow names don't start with keywords (#637) by @schuellc-nvidia
- #650 Fix `gpt-3.5-turbo-instruct` prompts (#651).
- Colang version 2.0-beta.2
- #370 Add Got It AI's Truthchecking service for RAG applications by @mlmonk.
- #543 Integrating AutoAlign's guardrail library with NeMo Guardrails by @abhijitpal1247.
- #566 Autoalign factcheck examples by @abhijitpal1247.
- #518 Docs: add example config for using models with ollama by @vedantnaik19.
- #538 Support for `--default-config-id` in the server.
- #539 Support for `LLMCallException`.
- #548 Support for custom embedding models.
- #617 NVIDIA AI Endpoints embeddings.
- #462 Support for calling embedding models from langchain-nvidia-ai-endpoints.
- #622 Patronus Lynx Integration.
- #597 Make UUID generation predictable in debug-mode.
- #603 Improve chat CLI logging.
- #551 Upgrade to Langchain 0.2.x by @nicoloboschi.
- #611 Change default templates.
- #545 NVIDIA API Catalog and NIM documentation update.
- #463 Do not store pip cache during docker build by @don-attilio.
- #629 Move community docs to separate folder.
- #647 Documentation updates.
- #648 Prompt improvements for Llama-3 models.
- #482 Update README.md by @curefatih.
- #530 Improve the test serialization test to make it more robust.
- #570 Add support for FacialGestureBotAction by @elisam0.
- #550 Fix issue #335 - make import errors visible.
- #547 Fix LLMParams bug and add unit tests (fixes #158).
- #537 Fix directory traversal bug.
- #536 Fix issue #304 NeMo Guardrails packaging.
- #539 Fix bug related to the flow abort logic in Colang 1.0 runtime.
- #612 Follow-up fixes for the default prompt change.
- #585 Fix Colang 2.0 state serialization issue.
- #486 Fix select model type and custom prompts task.py by @cyun9601.
- #487 Fix custom prompts configuration manual.md.
- #479 Fix static method and classmethod action decorators by @piotrm0.
- #544 Fix issue #216 bot utterance.
- #616 Various fixes.
- #623 Fix path traversal check.
- #461 Feature/ccl cleanup.
- #483 Fix dictionary expression evaluation bug.
- #467 Feature/colang doc related cleanups.
- #484 Enable parsing of `..."<NLD>"` expressions.
- #478 Fix #420 - evaluate not working with chat models.
- #453 Update documentation for NVIDIA API Catalog example.
- #382 Fix issue with `lowest_temperature` in self-check and hallucination rails.
- #454 Redo fix for #385.
- #442 Fix README typo by @dileepbapat.
- #402 Integrate Vertex AI Models into Guardrails by @aishwaryap.
- #403 Add support for NVIDIA AI Endpoints by @patriciapampanelli.
- #396 Docs/examples nv ai foundation models.
- #438 Add research roadmap documentation.
- #389 Expose the `verbose` parameter through `RunnableRails` by @d-mariano.
- #415 Enable `print(...)` and `log(...)`.
- #414 Feature/colang march release.
- #416 Refactor and improve the verbose/debug mode.
- #418 Feature/colang flow context sharing.
- #425 Feature/colang meta decorator.
- #427 Feature/colang single flow activation.
- #426 Feature/colang 2.0 tutorial.
- #428 Feature/Standard library and examples.
- #431 Feature/colang various improvements.
- #433 Feature/Colang 2.0 improvements: generate_async support, stateful API.
- #412 Fix #411 - explain rails not working for chat models.
- #413 Typo fix: Comment in llm_flows.co by @habanoz.
- #420 Fix typo for hallucination message.
- #377 Add example for streaming from custom action.
- #380 Update installation guide for OpenAI usage.
- #401 Replace YAML import with new import statement in multi-modal example.
- #398 Colang parser fixes and improvements.
- #394 Fixes and improvements for Colang 2.0 runtime.
- #381 Fix typo by @serhatgktp.
- #379 Fix missing prompt in verbose mode for chat models.
- #400 Fix Authorization header showing up in logs for NeMo LLM.
- #292 Jailbreak heuristics by @erickgalinkin.
- #256 Support generation options.
- #307 Added support for multi-config api calls by @makeshn.
- #293 Adds configurable stop tokens by @zmackie.
- #334 Colang 2.0 - Preview by @schuellc.
- #208 Implement cache embeddings (resolves #200) by @Pouyanpi.
- #331 Huggingface pipeline streaming by @trebedea.
Documentation:
- #311 Update documentation to demonstrate the use of output rails when using a custom RAG by @niels-garve.
- #347 Add detailed logging docs by @erickgalinkin.
- #354 Input and output rails only guide by @trebedea.
- #359 Added user guide for jailbreak detection heuristics by @makeshn.
- #363 Add multi-config API call user guide.
- #297 Example configurations for using only the guardrails, without LLM generation.
- #309 Change the paper citation from arXiv to EMNLP 2023 by @manuelciosici.
- #319 Enable embeddings model caching.
- #267 Make embeddings computing async and add support for batching.
- #281 Follow symlinks when building knowledge base by @piotrm0.
- #280 Add more information to results of `retrieve_relevant_chunks` by @piotrm0.
- #332 Update docs for batch embedding computations.
- #244 Docs/edit getting started by @DougAtNvidia.
- #333 Follow-up to PR 244.
- #341 Updated 'fastembed' version to 0.2.2 by @NirantK.
- #286 Fixed #285 - using the same evaluation set given a random seed for topical rails by @trebedea.
- #336 Fix #320. Reuse the asyncio loop between sync calls.
- #337 Fix stats gathering in a parallel async setup.
- #342 Fixes OpenAI embeddings support.
- #346 Fix issues with KB embeddings cache, bot intent detection and config ids validator logic.
- #349 Fix multi-config bug, asyncio loop issue and cache folder for embeddings.
- #350 Fix the incorrect logging of an extra dialog rail.
- #358 Fix OpenAI embeddings async support.
- #362 Fix the issue with the server being pointed to a folder with a single config.
- #352 Fix a few issues related to jailbreak detection heuristics.
- #356 Redo followlinks PR in new code by @piotrm0.
- #288 Replace SentenceTransformers with FastEmbed.
- #254 Support for Llama Guard input and output content moderation.
- #253 Support for server-side threads.
- #235 Improved LangChain integration through `RunnableRails`.
- #190 Add example for using `generate_events_async` with streaming.
- Support for Python 3.11.
- #286 Fixed not having the same evaluation set given a random seed for topical rails.
- #239 Fixed logging issue where the `verbose=true` flag did not trigger expected log output.
- #228 Fix docstrings for various functions.
- #242 Fix Azure LLM support.
- #225 Fix `annoy` import, to allow using the library without it.
- #209 Fix user messages missing from prompt.
- #261 Fix small bug in `print_llm_calls_summary`.
- #252 Fixed duplicate loading for the default config.
- Fixed the dependency pinning, allowing a wider range of dependency versions.
- Fixed severe security issues related to uncontrolled data used in path expression and information exposure through an exception.
- Support for the `--version` flag in the CLI.
- Upgraded `langchain` to `0.0.352`.
- Upgraded `httpx` to `0.24.1`.
- Replaced the deprecated `text-davinci-003` model with `gpt-3.5-turbo-instruct`.
- #191: Fix chat generation chunk issue.
- Support for explicit definition of input/output/retrieval rails.
- Support for custom tasks and their prompts.
- Support for fact-checking using AlignScore.
- Support for NeMo LLM Service as an LLM provider.
- Support for making a single LLM call for both the guardrails process and generating the response, by setting `rails.dialog.single_call.enabled` to `True` (a minimal sketch follows below).
- Support for sensitive data detection guardrails using Presidio.
- Example using NeMo Guardrails with the LLaMa2-13B model.
- Dockerfile for building a Docker image.
- Support for prompting modes using `prompting_mode`.
- Support for streaming the LLM responses when no output rails are used.
- Integration of ActiveFence ActiveScore API as an input rail.
- Support for `--prefix` and `--auto-reload` in the guardrails server.
- Example authentication dialog flow.
- Example RAG using Pinecone.
- Support for loading a configuration from a dictionary, i.e. `RailsConfig.from_content(config=...)`.
- Guidance on LLM support.
- Support for `LLMRails.explain()` (see the Getting Started guide for sample usage; a short sketch also follows below).
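A minimal sketch of `LLMRails.explain()`, also using the dictionary-based `RailsConfig.from_content(config=...)` loading mentioned just above; the model settings are illustrative assumptions:

```python
from nemoguardrails import LLMRails, RailsConfig

# Minimal sketch: load a configuration from a dictionary, generate once, then
# inspect the LLM calls made during generation. Model settings are assumptions.
config = RailsConfig.from_content(
    config={
        "models": [
            {"type": "main", "engine": "openai", "model": "gpt-3.5-turbo-instruct"}
        ]
    }
)
rails = LLMRails(config)
rails.generate(messages=[{"role": "user", "content": "Hi!"}])

info = rails.explain()            # details about the last generation
info.print_llm_calls_summary()    # summary of the LLM calls and durations
```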
- Allow context data directly in the `/v1/chat/completion` using messages with the type `"role"`.
- Allow calling a subflow whose name is in a variable, e.g. `do $some_name`.
- Allow using actions which are not `async` functions.
- Disabled pretty exceptions in CLI.
- Upgraded dependencies.
- Updated the Getting Started Guide.
- Main README now provides more details.
- Merged original examples into a single ABC Bot and removed the original ones.
- Documentation improvements.
- Fix going over the maximum prompt length using the `max_length` attribute in Prompt Templates.
- Fixed problem with `nest_asyncio` initialization.
- #144 Fixed TypeError in logging call.
- #121 Detect chat model using the `openai` engine.
- #109 Fixed minor logging issue.
- Parallel flow support.
- Fix `HuggingFacePipeline` bug related to LangChain version upgrade.
- Support for custom configuration data.
- Example for using a custom LLM and multiple KBs.
- Support for `PROMPTS_DIR`.
- First set of end-to-end QA tests for the example configurations.
- Support for configurable embedding search providers.
- Moved to using `nest_asyncio` for implementing the blocking API. Fixes #3 and #32.
- Improved event property validation in `new_event_dict`.
- Refactored imports to allow installing from source without Annoy/SentenceTransformers (would need a custom embedding search provider to work).
- Fixed when the `init` function from `config.py` is called to allow custom LLM providers to be registered inside.
- #93: Removed redundant `hasattr` check in `nemoguardrails/llm/params.py`.
- #91: Fixed how default context variables are initialized.
- Event-based API for guardrails.
- Support for message with type "event" in `LLMRails.generate_async`.
- Support for using variables inside bot message definitions.
- Support for `vicuna-7b-v1.3` and `mpt-7b-instruct`.
- Topical evaluation results for `vicuna-7b-v1.3` and `mpt-7b-instruct`.
- Support to use different models for different LLM tasks.
- Support for red-teaming using challenges.
- Support to disable the Chat UI when running the server using `--disable-chat-ui`.
- Support for accessing the API request headers in server mode.
- Support to enable CORS settings for the guardrails server.
- Changed the naming of the internal events to align to the upcoming UMIM spec (Unified Multimodal Interaction Management).
- If there are no user message examples, the bot messages examples lookup is disabled as well.
- #58: Fix install on Mac OS 13.
- #55: Fix bug in example causing config.py to crash on computers with no CUDA-enabled GPUs.
- Fixed the model name initialization for LLMs that use the `model` kwarg.
- Fixed the Cohere prompt templates.
- #55: Fix bug related to LangChain callbacks initialization.
- Fixed generation of "..." on value generation.
- Fixed the parameters type conversion when invoking actions from Colang (previously everything was string).
- Fixed `model_kwargs` property for the `WrapperLLM`.
- Fixed bug when `stop` was used inside flows.
- Fixed Chat UI bug when an invalid guardrails configuration was used.
- Support for defining subflows.
- Improved support for customizing LLM prompts.
- Support for using filters to change how variables are included in a prompt template.
- Output parsers for prompt templates.
- The `verbose_v1` formatter and output parser to be used for smaller models that don't understand Colang very well in a few-shot manner.
- Support for including context variables in prompt templates.
- Support for chat models, i.e. prompting with a sequence of messages.
- Experimental support for allowing the LLM to generate multi-step flows.
- Example of using Llama Index from a guardrails configuration (#40).
- Example for using HuggingFace Endpoint LLMs with a guardrails configuration.
- Example for using HuggingFace Pipeline LLMs with a guardrails configuration.
- Support to alter LLM parameters passed as `model_kwargs` in LangChain.
- CLI tool for running evaluations on the different steps (e.g., canonical form generation, next steps, bot message) and on existing rails implementation (e.g., moderation, jailbreak, fact-checking, and hallucination).
- Initial evaluation results for `text-davinci-003` and `gpt-3.5-turbo`.
- The `lowest_temperature` can be set through the guardrails config (to be used for deterministic tasks).
- The core templates now use Jinja2 as the rendering engine.
- Improved the internal prompting architecture, now using an LLM Task Manager.
- Fixed bug related to invoking a chain with multiple output keys.
- Fixed bug related to tracking the output stats.
- #51: Bug fix - avoid str concat with None when logging user_intent.
- #54: Fix UTF-8 encoding issue and add embedding model configuration.
- Support to connect any LLM that implements the BaseLanguageModel interface from LangChain.
- Support for customizing the prompts for specific LLM models.
- Support for custom initialization when loading a configuration through `config.py`.
- Support to extract user-provided values from utterances.
- Improved the logging output for Chat CLI (clear events stream, prompts, completion, timing information).
- Updated system actions to use temperature 0 where it makes sense, e.g., canonical form generation, next step generation, fact checking, etc.
- Excluded the default system flows from the "next step generation" prompt.
- Updated langchain to 0.0.167.
- Fixed initialization of LangChain tools.
- Fixed the overriding of general instructions #7.
- Fixed action parameters inspection bug #2.
- Fixed bug related to multi-turn flows #13.
- Fixed Wolfram Alpha error reporting in the sample execution rail.
- First alpha release.