
Commit

Update README.md
bhutchins authored Aug 5, 2024
1 parent ad8afb6 commit f6622c7
Showing 1 changed file with 3 additions and 6 deletions.
@@ -1,20 +1,17 @@
 # Firewall for AI APIs: Sanitizing GenAI Prompts
 
 ## Overview
-Firewall for AI prevents sensitive data disclosure in GenAI applications, analytics, and data pipelines. It sanitizes PII, PCI, banking info, PHI, and intellectual property using state-of-the-art DLP models and fine-tuned LLMs, offering superior accuracy and speed over traditional regex and heuristic methods.
+Firewall for AI prevents sensitive data disclosure in GenAI applications, data pipelines, and automated workflows. It sanitizes PII, PCI, banking info, PHI, and intellectual property using state-of-the-art DLP models and fine-tuned LLMs, offering superior accuracy and speed over traditional regex and heuristic methods.
 
 This repository provides examples of using Firewall for AI APIs to sanitize GenAI prompts for popular LLM services and frameworks using Python.
 
+Implementing robust content filtering is crucial for data protection and regulatory compliance.
+
 ### Real-world Scenarios
 
 **Support Chatbots:**
 Users may overshare sensitive information like credit card numbers with LLM chatbots. Filtering this content before it reaches the LLM prevents potential data breaches.
 
 **Healthcare Apps:**
 Sensitive PHI shared in LLM-powered healthcare apps should be filtered to comply with data protection regulations and prevent unnecessary data transmission.
 
-Implementing robust content filtering is crucial for data protection and regulatory compliance.
-
 ## Examples
 
 The repository includes three examples:
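The support-chatbot scenario described in the diff can be sketched in Python. This is a minimal illustration only: the regex redactor below is a hypothetical stand-in for a call to the Firewall for AI sanitization API (the README itself notes that real DLP models outperform regex and heuristics), and the function and variable names are invented for this sketch.

```python
import re

# Placeholder pattern for card-number-like sequences (13-16 digits, optionally
# separated by spaces or hyphens). A real deployment would send the prompt to
# the sanitization API instead of matching locally.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")


def sanitize_prompt(prompt: str) -> str:
    """Redact card-number-like sequences before the prompt reaches an LLM."""
    return CARD_RE.sub("[REDACTED_CARD]", prompt)


user_prompt = "My card 4111 1111 1111 1111 was double-charged, please help."
safe_prompt = sanitize_prompt(user_prompt)
# safe_prompt can now be forwarded to the LLM provider.
print(safe_prompt)
```

The key design point is the ordering: sanitization happens before the prompt leaves the application, so sensitive values never reach the LLM service or its logs.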
