Merge pull request #1010 from guardrails-ai/ae/docs-in-code-installs
Docs: Installing validators concepts page with in-code examples
dtam authored Aug 15, 2024
2 parents 6f60cd4 + 93205ac commit a8a70cf
Showing 1 changed file: docs/concepts/validators.md (86 additions, 0 deletions)
```bash
git clone git@github.com:guardrails-ai/validator-template.git
```
Once the repository is cloned and the validator is created, you can register the validator via this [Google Form](https://forms.gle/N6UaE9611niuMxZj7).


## Installing Validators

### Guardrails Hub

Validators can be combined into Input and Output Guards that intercept the inputs and outputs of LLMs. A large collection of Validators is available on the [Guardrails Hub](https://hub.guardrailsai.com/).

<div align="center">
<img src="https://raw.githubusercontent.com/guardrails-ai/guardrails/main/docs/img/guardrails_hub.gif" alt="Guardrails Hub gif" width="600px" />
</div>

Once you have found a Validator on the Hub, you can open its `README` to find the install link.

### Using CLI

You can install a validator using the Guardrails CLI. For example, the [Toxic Language](https://hub.guardrailsai.com/validator/guardrails/toxic_language) validator can be installed with:

```bash
guardrails hub install hub://guardrails/toxic_language
```

> This will not download local models if you opted into remote inferencing during `guardrails configure`.
> To control whether the associated models are downloaded, pass the `--install-local-models` or `--no-install-local-models` flag to `guardrails hub install`.
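
For example, a minimal sketch of skipping the local model download by passing the flag mentioned above (its placement after the hub URI is an assumption):

```bash
# Install the validator without downloading its local models (assumed flag placement)
guardrails hub install hub://guardrails/toxic_language --no-install-local-models
```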

After installing the validator with the CLI, you can start using it in your guards:

```python
from guardrails.hub import ToxicLanguage
from guardrails import Guard

guard = Guard().use(
    ToxicLanguage, threshold=0.5, validation_method="sentence", on_fail="exception"
)

guard.validate("My landlord is an asshole!")
```
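
Because `on_fail="exception"` is set, a failing validation raises an error instead of passing silently. A minimal sketch of handling that failure (a broad `Exception` is caught here because the exact exception class may vary between Guardrails versions):

```python
try:
    guard.validate("My landlord is an asshole!")
except Exception as err:  # raised when toxic language is detected
    print(f"Validation failed: {err}")
```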

### In-Code Installs

You can also install validators using the Guardrails SDK, which simplifies development, particularly when working in Jupyter notebooks.

```python
from guardrails import install

install(
    "hub://guardrails/toxic_language",
    install_local_models=True,  # defaults to None, which will not download local models if you opted into remote inferencing
    quiet=False,  # defaults to True
)
```
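
The `install` call also returns the installed validator module, which Pattern B below takes advantage of.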

### In-Code Installs - Pattern A

After an `install` invocation, you can import the validator as you typically would:

```python
from guardrails import Guard, install

install("hub://guardrails/toxic_language")

from guardrails.hub import ToxicLanguage

guard = Guard().use(
    ToxicLanguage, threshold=0.5, validation_method="sentence", on_fail="exception"
)

guard.validate("My landlord is an asshole!")
```

### In-Code Installs - Pattern B

You can also extract the validator directly from the module returned by `install`:

```python
from guardrails import Guard, install

ToxicLanguage = install("hub://guardrails/toxic_language").ToxicLanguage

guard = Guard().use(
    ToxicLanguage, threshold=0.5, validation_method="sentence", on_fail="exception"
)

guard.validate("My landlord is an asshole!")
```


> Note: Invoking the SDK's `install` function always installs the validator module, even if it is already installed, so it's recommended to keep the `install` call in a separate code block when working in notebooks.
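
For instance, a notebook might be organized as two cells so the reinstall only happens when the first cell is run (a minimal sketch; the cell boundaries are indicated with comments):

```python
# Cell 1: run this once; re-running it reinstalls the validator module
from guardrails import install

install("hub://guardrails/toxic_language")
```

```python
# Cell 2: import and use the validator; safe to re-run while iterating
from guardrails import Guard
from guardrails.hub import ToxicLanguage

guard = Guard().use(
    ToxicLanguage, threshold=0.5, validation_method="sentence", on_fail="exception"
)
guard.validate("My landlord is an asshole!")
```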
