This script leverages the Eden AI moderation workflow to evaluate and categorize text content for appropriateness. Results include rejection chances, categories, and validation status for better content management.
- Python: Version 3.8 or later.
- Dependencies: Install requests, python-dotenv, tqdm, and pandas.
git clone <repository-url>
cd <repository-directory>
Create a .env file at the project root:
API_URL_POST=https://api.edenai.run/v2/workflow/WORKFLOW_ID/execution/
API_URL_GET=https://api.edenai.run/v2/workflow/WORKFLOW_ID/execution/{execution_id}/
API_KEY=YOUR_API_KEY_HERE
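As a rough sketch of how scripts.py might use these variables, the snippet below submits one text to the workflow and polls the execution until it completes. The payload field (text), the response fields (id, content.status, content.results), and the status values are assumptions about the workflow API, not a description of the actual implementation:

```python
import os
import time

import requests
from dotenv import load_dotenv

load_dotenv()  # reads API_URL_POST, API_URL_GET, and API_KEY from .env

API_URL_POST = os.getenv("API_URL_POST")
API_URL_GET = os.getenv("API_URL_GET")
HEADERS = {"Authorization": f"Bearer {os.getenv('API_KEY')}"}


def moderate_text(text: str, poll_interval: float = 2.0) -> dict:
    """Start a workflow execution for one text and poll until it finishes."""
    # Start the execution; the payload field name "text" is an assumption.
    response = requests.post(API_URL_POST, headers=HEADERS, json={"text": text})
    response.raise_for_status()
    execution_id = response.json()["id"]  # assumed response field

    # The GET URL template contains {execution_id}, so str.format fills it in.
    url = API_URL_GET.format(execution_id=execution_id)
    while True:
        result = requests.get(url, headers=HEADERS)
        result.raise_for_status()
        data = result.json()
        # Assumed result shape: content.status ends up "succeeded" or "failed".
        if data.get("content", {}).get("status") in ("succeeded", "failed"):
            return data
        time.sleep(poll_interval)
```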
Create and activate a virtual environment, then install requirements:
python -m venv env
env\Scripts\activate       # on Windows
source env/bin/activate    # on macOS/Linux
pip install -r requirements.txt
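The requirements.txt referenced above presumably lists the dependencies from the prerequisites; an unpinned version would look like this:

```text
requests
python-dotenv
tqdm
pandas
```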
- Run the script:
python scripts.py
- Choose an option:
- Option 1: Process an entire Excel file with batch texts.
- Option 2: Test a single text manually.
- For batch mode, provide the input Excel file path and a name for the output file.
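For batch mode, the processing loop might look roughly like the sketch below, which reuses the hypothetical moderate_text helper from the configuration section; the text column name and the result field names are assumptions about the input file and the workflow output, not the actual code in scripts.py:

```python
import pandas as pd
from tqdm import tqdm


def moderate_excel(input_path: str, output_path: str) -> None:
    """Moderate every text in an Excel file and save the results to a new file."""
    df = pd.read_excel(input_path)  # assumes the texts live in a "text" column

    chances, categories, statuses = [], [], []
    for text in tqdm(df["text"], desc="Moderating"):
        data = moderate_text(str(text))  # helper sketched earlier
        # The result field names below are assumptions about the workflow output.
        results = data.get("content", {}).get("results", {})
        chances.append(results.get("rejection_chance"))
        categories.append(results.get("category"))
        statuses.append(results.get("status"))

    df["Chance of Rejection (%)"] = chances
    df["Category"] = categories
    df["Status"] = statuses
    df.to_excel(output_path, index=False)
```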
Example output:
Chance of Rejection (%): 80.60
Category: Violence
Status: Rejected