# ShadesofBias

This repository provides the scripts and code used for the Shades of Bias in Text dataset. It includes code for processing the data and for evaluations that measure bias in language models across languages.

## Data Processing

`process_dataset/map_dataset.py` takes https://huggingface.co/datasets/LanguageShades/BiasShadesRaw and normalizes/formats it to produce the formatted dataset.
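
For orientation, here is a minimal sketch of what the normalization step might look like with the `datasets` library; the column names and the cleanup rules are assumptions, not the script's actual schema:

```python
# Minimal sketch of the normalization step; column names are assumptions,
# not the actual BiasShadesRaw schema (see process_dataset/map_dataset.py).
from datasets import load_dataset

raw = load_dataset("LanguageShades/BiasShadesRaw", split="train")

def normalize(example):
    # Hypothetical cleanup: trim whitespace and lower-case the language tag.
    example["statement"] = example["statement"].strip()
    example["language"] = example["language"].strip().lower()
    return example

formatted = raw.map(normalize)
# formatted.push_to_hub("LanguageShades/...")  # target repo id omitted here
```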

`process_dataset/extract_vocabulary.py` takes https://huggingface.co/datasets/LanguageShades/BiasShadesRaw and aligns each statement to its corresponding template slots, writing the results -- along with how well the alignment worked -- to https://huggingface.co/datasets/LanguageShades/LanguageCorrections.
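
A rough illustration of the slot-alignment idea, assuming slots are written as upper-case placeholders inside the templates (the real logic lives in `process_dataset/extract_vocabulary.py`):

```python
# Toy slot alignment: turn a template into a regex and capture the text
# that fills each slot in the concrete statement. Template syntax is assumed.
import re

def align(statement: str, template: str):
    slots = re.findall(r"[A-Z_]{3,}", template)  # assumed slot naming convention
    pattern = re.escape(template)
    for slot in slots:
        pattern = pattern.replace(re.escape(slot), f"(?P<{slot}>.+?)", 1)
    match = re.fullmatch(pattern, statement)
    return match.groupdict() if match else None  # None signals a failed alignment

print(align("People from Mordor are lazy.", "People from NATION are lazy."))
# {'NATION': 'Mordor'}
```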

## Evaluation

### HF Endpoints

To use an HF Endpoint, navigate to Shades if you have access. If you do not, copy the `.env` file into your root directory.
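
A minimal sketch of how a script could pick up the endpoint configuration from `.env`; the variable names and the use of `python-dotenv` here are assumptions, not the repository's exact setup:

```python
# Sketch: read endpoint settings from a local .env file.
# HF_ENDPOINT_URL and HF_TOKEN are assumed variable names.
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory
endpoint_url = os.environ.get("HF_ENDPOINT_URL")
hf_token = os.environ.get("HF_TOKEN")
print("Endpoint configured:", endpoint_url is not None)
```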

### Example Scripts

Run `example_logprob_evaluate.py` to iterate through the dataset for a given model and compute the log probability of the biased sentences. If you have the `.env` file, `load_endpoint_url(model_name)` will load the endpoint for that model if one has been created.
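
The core scoring step can be illustrated locally with `transformers`; the repository's script queries an HF Endpoint via `load_endpoint_url` instead, and the model name below is only an example:

```python
# Sketch: per-sentence log probability under a causal LM (local stand-in
# for the endpoint-based scoring; "gpt2" is an example model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Score each token given the preceding ones (shift by one position).
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = inputs["input_ids"][:, 1:]
    token_scores = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_scores.sum().item()

print(sentence_logprob("Example statement from the dataset."))
```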

Run `generation_evaluate.py` to iterate through the dataset, with each instance formatted using a specified prompt from `prompts/`. You can specify a prompt language that differs from the statement's original language; the prompt language defaults to English unless otherwise specified. If you have the `.env` file, `load_endpoint_url(model_name)` will load the endpoint for that model if one has been created.
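
A hedged sketch of the generation path, assuming a prompt file that marks the input slot with `{input}` and an endpoint queried through `huggingface_hub`; the file name, model id, and token below are placeholders, not values from this repository:

```python
# Sketch: fill a prompt template from prompts/ and query a model.
from huggingface_hub import InferenceClient

with open("prompts/example_prompt_en.txt", encoding="utf-8") as f:  # hypothetical file
    template = f.read()

statement = "Example statement from the dataset."
prompt = template.format(input=statement)  # prompt files mark the slot with {input}

client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta", token="hf_...")  # placeholders
print(client.text_generation(prompt, max_new_tokens=64))
```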

### Add more prompts

Follow the examples in `prompts/` to create a `.txt` file for a new prompt. The input field should be indicated with `{input}` in the text file.
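
As an illustration only (the wording and file name are invented, not one of the shipped prompts), a new prompt file could be created and filled in like this:

```python
# Sketch: write a new prompt template and check that {input} is filled correctly.
template = "Is the following statement a stereotype? Answer yes or no.\n\n{input}\n"

with open("prompts/stereotype_yes_no_en.txt", "w", encoding="utf-8") as f:  # hypothetical name
    f.write(template)

print(template.format(input="Example statement."))
```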

## Base Models

Current proposed model list:

- 'Aligned' models
- TODO