Forked from openai/gpt-2

Commit 79a246a (1 parent: 953530f): add contributors md and move dev docs out

Showing 3 changed files with 106 additions and 82 deletions.
**CONTRIBUTORS.md** (new file)

@@ -0,0 +1,17 @@
# Contributors (alphabetically)

* **[madisonmay](https://github.com/madisonmay)**

  Added Dockerfiles

* **[Margaret Mitchell et al.](https://arxiv.org/abs/1810.03993)**

  Our [usage](./readme#usage) writeup was loosely inspired by the paper
  [Model Cards for Model Reporting](https://arxiv.org/abs/1810.03993)
  and related conversations with some of the authors.

* **[webproduktion01](https://github.com/webproduktion01)**

  Ported the download script to Python.

**[Full code contributors list](https://github.com/openai/gpt-2/contributors).**
**DEVELOPERS.md** (new file)

@@ -0,0 +1,85 @@
# Installation

Git clone this repository, and `cd` into the directory for the remaining commands:
```
git clone https://github.com/openai/gpt-2.git && cd gpt-2
```

Then follow the instructions for either the native or the Docker installation.

## Native Installation

All steps can optionally be done in a virtual environment using tools such as `virtualenv` or `conda`.
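As a concrete sketch of the virtual-environment option, the standard-library `venv` module can create one programmatically; the usual shell equivalent is `python3 -m venv gpt2-env` followed by `source gpt2-env/bin/activate`. The directory name and temp-dir location below are only illustrative choices, not part of these instructions:

```python
# Sketch: create an isolated environment for the gpt-2 dependencies.
# The location under the system temp dir is only for illustration;
# in practice you would create it inside the project directory.
import pathlib
import tempfile
import venv

target = pathlib.Path(tempfile.mkdtemp()) / "gpt2-env"
venv.create(target, with_pip=False)  # with_pip=True also bootstraps pip

# The interpreter lands in bin/ (POSIX) or Scripts/ (Windows).
created = (target / "bin").exists() or (target / "Scripts").exists()
print(created)
```

Once the environment is activated, the `pip3 install` steps below affect only that environment.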
Install TensorFlow 1.12 (with GPU support, if you have a GPU and want everything to run faster):
```
pip3 install tensorflow==1.12.0
```
or
```
pip3 install tensorflow-gpu==1.12.0
```

Install the other Python packages:
```
pip3 install -r requirements.txt
```
Download the model data:
```
python3 download_model.py 117M
```

## Docker Installation

Build the Dockerfile and tag the created image as `gpt-2`:
```
docker build --tag gpt-2 -f Dockerfile.gpu . # or Dockerfile.cpu
```

Start an interactive bash session from the `gpt-2` docker image.

You can opt to use the `--runtime=nvidia` flag if you have access to an NVIDIA GPU
and a valid install of [nvidia-docker 2.0](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)).
```
docker run --runtime=nvidia -it gpt-2 bash
```

# Running

| WARNING: Samples are unfiltered and may contain offensive content. |
| --- |

Some of the examples below may include Unicode text characters. Set the environment variable:
```
export PYTHONIOENCODING=UTF-8
```
to override the standard stream settings in UTF-8 mode.
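As a quick sanity check (not part of the original instructions), you can confirm the override takes effect by inspecting the stream encoding a child interpreter reports:

```python
# Sanity check: a child interpreter started with PYTHONIOENCODING=UTF-8
# should report a UTF-8 encoding on its stdout.
import os
import subprocess
import sys

env = dict(os.environ, PYTHONIOENCODING="UTF-8")
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdout.encoding)"],
    env=env, capture_output=True, text=True,
)
# Normalize "UTF-8" / "utf-8" spelling differences across Python versions.
encoding = result.stdout.strip().lower().replace("-", "")
print(encoding)  # utf8
```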
## Unconditional sample generation

To generate unconditional samples from the small model:
```
python3 src/generate_unconditional_samples.py | tee /tmp/samples
```
There are various flags for controlling the samples:
```
python3 src/generate_unconditional_samples.py --top_k 40 --temperature 0.7 | tee /tmp/samples
```
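A rough illustration of what the two flags above control (a simplified sketch, not the repository's actual sampling code): `--temperature` rescales the logits before the softmax, and `--top_k` restricts sampling to the k most likely tokens:

```python
# Simplified sketch of top-k sampling with temperature.
# Real logits come from the model; these toy values are illustrative only.
import math
import random

def sample(logits, top_k=40, temperature=0.7):
    scaled = [l / temperature for l in logits]           # temperature rescaling
    order = sorted(range(len(scaled)), key=scaled.__getitem__, reverse=True)
    kept = order[:top_k]                                 # keep the k most likely tokens
    m = max(scaled[i] for i in kept)
    weights = [math.exp(scaled[i] - m) for i in kept]    # numerically stable softmax weights
    return random.choices(kept, weights=weights, k=1)[0]

toy_logits = [2.0, 1.5, 0.2, -1.0, -3.0]
token = sample(toy_logits, top_k=2)
# token is always index 0 or 1: only the two most likely tokens survive top-k
```

Lower temperatures concentrate probability on the highest-scoring tokens; smaller `top_k` cuts the tail off entirely.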
To check flag descriptions, use:
```
python3 src/generate_unconditional_samples.py -- --help
```

## Conditional sample generation

To give the model custom prompts, you can use:
```
python3 src/interactive_conditional_samples.py --top_k 40
```
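Schematically, an interactive conditional-sampling session is a read-generate-print loop. This is a hypothetical sketch, not the script's implementation: the names are illustrative, and the real script feeds each prompt to the model instead of the placeholder `generate`:

```python
# Hypothetical sketch of an interactive prompt loop; the real script
# feeds each prompt to the model and prints sampled continuations.
def prompt_loop(read_line, generate, write):
    while True:
        prompt = read_line()
        if not prompt:          # stop on EOF / empty input
            break
        write(generate(prompt.strip()))

outputs = []
prompts = iter(["Once upon a time", ""])
prompt_loop(lambda: next(prompts),
            lambda p: f"<sample continuing: {p}>",
            outputs.append)
# outputs == ["<sample continuing: Once upon a time>"]
```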
To check flag descriptions, use:
```
python3 src/interactive_conditional_samples.py -- --help
```
**README.md** (4 additions, 82 deletions)

````diff
@@ -22,91 +22,13 @@ Please [let us know](mailto:[email protected]) if you’re doing inte
 - Potential malicious use cases and defenses against them (e.g. the detectability of synthetic text)
 - The extent of problematic content (e.g. bias) being baked into the models and effective mitigations
 
-## Installation
+## Development
 
-Git clone this repository, and `cd` into directory for remaining commands
-```
-git clone https://github.com/openai/gpt-2.git && cd gpt-2
-```
-
-Then, follow instructions for either native or Docker installation.
-
-### Native Installation
-
-All steps can optionally be done in a virtual environment using tools such as `virtualenv` or `conda`.
-
-Install tensorflow 1.12 (with GPU support, if you have a GPU and want everything to run faster)
-```
-pip3 install tensorflow==1.12.0
-```
-or
-```
-pip3 install tensorflow-gpu==1.12.0
-```
+See [DEVELOPERS.md](./DEVELOPERS.md)
 
-Install other python packages:
-```
-pip3 install -r requirements.txt
-```
-
-Download the model data
-```
-python3 download_model.py 117M
-```
-
-### Docker Installation
-
-Build the Dockerfile and tag the created image as `gpt-2`:
-```
-docker build --tag gpt-2 -f Dockerfile.gpu . # or Dockerfile.cpu
-```
-
-Start an interactive bash session from the `gpt-2` docker image.
-
-You can opt to use the `--runtime=nvidia` flag if you have access to a NVIDIA GPU
-and a valid install of [nvidia-docker 2.0](https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)).
-```
-docker run --runtime=nvidia -it gpt-2 bash
-```
+## Contributors
 
-## Sampling scripts
-
-| WARNING: Samples are unfiltered and may contain offensive content. |
-| --- |
-
-Some of the examples below may include Unicode text characters. Set the environment variable:
-```
-export PYTHONIOENCODING=UTF-8
-```
-to override the standard stream settings in UTF-8 mode.
-
-### Unconditional sample generation
-
-To generate unconditional samples from the small model:
-```
-python3 src/generate_unconditional_samples.py | tee /tmp/samples
-```
-There are various flags for controlling the samples:
-```
-python3 src/generate_unconditional_samples.py --top_k 40 --temperature 0.7 | tee /tmp/samples
-```
-
-To check flag descriptions, use:
-```
-python3 src/generate_unconditional_samples.py -- --help
-```
-
-### Conditional sample generation
-
-To give the model custom prompts, you can use:
-```
-python3 src/interactive_conditional_samples.py --top_k 40
-```
-
-To check flag descriptions, use:
-```
-python3 src/interactive_conditional_samples.py -- --help
-```
+See [CONTRIBUTORS.md](./CONTRIBUTORS.md)
 
 ## GPT-2 samples
````