From 79a246a58ef67db693dee584deebcc0aea8f47e8 Mon Sep 17 00:00:00 2001
From: Jeff Wu
Date: Wed, 6 Mar 2019 12:15:51 -0800
Subject: [PATCH] add contributors md and move dev docs out

---
 CONTRIBUTORS.md | 17 ++++++++++
 DEVELOPERS.md   | 85 ++++++++++++++++++++++++++++++++++++++++++++++++
 README.md       | 86 +++---------------------------------------------
 3 files changed, 106 insertions(+), 82 deletions(-)
 create mode 100644 CONTRIBUTORS.md
 create mode 100644 DEVELOPERS.md

diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
new file mode 100644
index 000000000..ffac2e7fc
--- /dev/null
+++ b/CONTRIBUTORS.md
@@ -0,0 +1,17 @@
+# Contributors (alphabetically)
+
+* **[madisonmay](https://github.com/madisonmay)**
+
+  Added Dockerfiles
+
+* **[Margaret Mitchell et al.](https://arxiv.org/abs/1810.03993)**
+
+  Our [usage](./readme#usage) writeup was loosely inspired by the paper
+  [Model Cards for Model Reporting](https://arxiv.org/abs/1810.03993)
+  and related conversations with some of the authors.
+
+* **[webproduktion01](https://github.com/webproduktion01)**
+
+  Ported the download script to Python.
+
+**[Full code contributors list](https://github.com/openai/gpt-2/contributors).**
diff --git a/DEVELOPERS.md b/DEVELOPERS.md
new file mode 100644
index 000000000..078999b0e
--- /dev/null
+++ b/DEVELOPERS.md
@@ -0,0 +1,85 @@
+# Installation
+
+Clone this repository, then `cd` into the directory for the remaining commands:
+```
+git clone https://github.com/openai/gpt-2.git && cd gpt-2
+```
+
+Then, follow the instructions for either the native or the Docker installation.
+
+## Native Installation
+
+All steps can optionally be done in a virtual environment using tools such as `virtualenv` or `conda`.
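As a quick check that the optional virtual environment above is actually active before installing anything, here is a minimal standard-library sketch (not part of this patch; `sys.base_prefix` is standard CPython, and `sys.real_prefix` is set by older `virtualenv` releases):

```python
import sys

def in_virtualenv():
    # Inside a venv/virtualenv, sys.prefix points into the environment,
    # while the base interpreter's install stays in sys.base_prefix
    # (older virtualenv releases expose it as sys.real_prefix instead).
    base = getattr(sys, "real_prefix", None) or sys.base_prefix
    return sys.prefix != base

print(in_virtualenv())  # True inside an activated environment
```

Running this before `pip3 install` confirms whether packages will land in the environment or in the system site-packages.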
+
+Install TensorFlow 1.12 (with GPU support, if you have a GPU and want everything to run faster):
+```
+pip3 install tensorflow==1.12.0
+```
+or
+```
+pip3 install tensorflow-gpu==1.12.0
+```
+
+Install the other Python packages:
+```
+pip3 install -r requirements.txt
+```
+
+Download the model data:
+```
+python3 download_model.py 117M
+```
+
+## Docker Installation
+
+Build the Dockerfile and tag the created image as `gpt-2`:
+```
+docker build --tag gpt-2 -f Dockerfile.gpu . # or Dockerfile.cpu
+```
+
+Start an interactive bash session from the `gpt-2` docker image.
+
+You can opt to use the `--runtime=nvidia` flag if you have access to an NVIDIA GPU
+and a valid install of [nvidia-docker 2.0](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)).
+```
+docker run --runtime=nvidia -it gpt-2 bash
+```
+
+# Running
+
+| WARNING: Samples are unfiltered and may contain offensive content. |
+| --- |
+
+Some of the examples below may include Unicode text characters. Set the environment variable:
+```
+export PYTHONIOENCODING=UTF-8
+```
+to force the standard streams to use UTF-8 encoding.
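The effect of `PYTHONIOENCODING` above can be verified in isolation: it overrides the encoding a Python interpreter chooses for its standard streams. A small standard-library sketch, not part of this patch:

```python
import os
import subprocess
import sys

# Spawn a child interpreter with PYTHONIOENCODING set and ask it which
# encoding its stdout was given, independent of the parent's locale.
env = dict(os.environ, PYTHONIOENCODING="UTF-8")
child = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdout.encoding)"],
    env=env, capture_output=True, text=True,
)
print(child.stdout.strip())  # a UTF-8 codec name, regardless of locale
```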
+
+## Unconditional sample generation
+
+To generate unconditional samples from the small model:
+```
+python3 src/generate_unconditional_samples.py | tee /tmp/samples
+```
+There are various flags for controlling the samples:
+```
+python3 src/generate_unconditional_samples.py --top_k 40 --temperature 0.7 | tee /tmp/samples
+```
+
+To check flag descriptions, use:
+```
+python3 src/generate_unconditional_samples.py -- --help
+```
+
+## Conditional sample generation
+
+To give the model custom prompts, you can use:
+```
+python3 src/interactive_conditional_samples.py --top_k 40
+```
+
+To check flag descriptions, use:
+```
+python3 src/interactive_conditional_samples.py -- --help
+```
diff --git a/README.md b/README.md
index 8425e9a80..c1be039bc 100644
--- a/README.md
+++ b/README.md
@@ -22,91 +22,13 @@ Please [let us know](mailto:languagequestions@openai.com) if you’re doing inte
 - Potential malicious use cases and defenses against them (e.g. the detectability of synthetic text)
 - The extent of problematic content (e.g. bias) being baked into the models and effective mitigations
 
-## Installation
+## Development
 
-Git clone this repository, and `cd` into directory for remaining commands
-```
-git clone https://github.com/openai/gpt-2.git && cd gpt-2
-```
-
-Then, follow instructions for either native or Docker installation.
-
-### Native Installation
-
-All steps can optionally be done in a virtual environment using tools such as `virtualenv` or `conda`.
-
-Install tensorflow 1.12 (with GPU support, if you have a GPU and want everything to run faster)
-```
-pip3 install tensorflow==1.12.0
-```
-or
-```
-pip3 install tensorflow-gpu==1.12.0
-```
+See [DEVELOPERS.md](./DEVELOPERS.md)
 
-Install other python packages:
-```
-pip3 install -r requirements.txt
-```
-
-Download the model data
-```
-python3 download_model.py 117M
-```
-
-### Docker Installation
-
-Build the Dockerfile and tag the created image as `gpt-2`:
-```
-docker build --tag gpt-2 -f Dockerfile.gpu . # or Dockerfile.cpu
-```
-
-Start an interactive bash session from the `gpt-2` docker image.
-
-You can opt to use the `--runtime=nvidia` flag if you have access to a NVIDIA GPU
-and a valid install of [nvidia-docker 2.0](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)).
-```
-docker run --runtime=nvidia -it gpt-2 bash
-```
+## Contributors
 
-## Sampling scripts
-
-| WARNING: Samples are unfiltered and may contain offensive content. |
-| --- |
-
-Some of the examples below may include Unicode text characters. Set the environment variable:
-```
-export PYTHONIOENCODING=UTF-8
-```
-to override the standard stream settings in UTF-8 mode.
-
-### Unconditional sample generation
-
-To generate unconditional samples from the small model:
-```
-python3 src/generate_unconditional_samples.py | tee /tmp/samples
-```
-There are various flags for controlling the samples:
-```
-python3 src/generate_unconditional_samples.py --top_k 40 --temperature 0.7 | tee /tmp/samples
-```
-
-To check flag descriptions, use:
-```
-python3 src/generate_unconditional_samples.py -- --help
-```
-
-### Conditional sample generation
-
-To give the model custom prompts, you can use:
-```
-python3 src/interactive_conditional_samples.py --top_k 40
-```
-
-To check flag descriptions, use:
-```
-python3 src/interactive_conditional_samples.py -- --help
-```
+See [CONTRIBUTORS.md](./CONTRIBUTORS.md)
 
 ## GPT-2 samples
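The `--top_k` and `--temperature` flags that appear throughout the sampling commands above can be illustrated with a toy sketch. This is not the repository's TensorFlow implementation, only the idea behind the flags: keep the `top_k` highest logits, divide them by `temperature`, softmax, and sample:

```python
import math
import random

def sample_top_k(logits, top_k=40, temperature=0.7, rng=None):
    """Toy illustration of top-k sampling with a temperature."""
    rng = rng or random.Random(0)
    # Keep only the indices of the top_k largest logits.
    keep = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = [logits[i] / temperature for i in keep]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(weights)
    for i, w in zip(keep, weights):
        r -= w
        if r <= 0.0:
            return i
    return keep[-1]

# With top_k=1 the most likely token always wins, whatever the temperature.
print(sample_top_k([0.3, 2.5, -1.0], top_k=1))  # 1
```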