gpt #301

Open
ghost opened this issue Aug 15, 2022 · 1 comment

Comments


ghost commented Aug 15, 2022

Thank you


enochlev commented Dec 30, 2022

Here's a start:

!pip install transformers
import transformers

# Load the TensorFlow checkpoint written by gpt-2-simple and convert it.
config = transformers.GPT2Config.from_pretrained('/content/checkpoint/run3/hparams.json')
tokenizer = transformers.GPT2Tokenizer("checkpoint/run3/encoder.json", "checkpoint/run3/vocab.bpe")
# Use GPT2LMHeadModel rather than GPT2Model so the language-modeling head is
# converted too; the text-generation pipeline needs it.
model = transformers.GPT2LMHeadModel.from_pretrained('/content/checkpoint/run3/model-1500.index', from_tf=True, config=config)

# Save in Hugging Face format so the pipeline can load it from disk.
model.save_pretrained('gpt2_')
tokenizer.save_pretrained('gpt2_')

from transformers import pipeline
fill_masker = pipeline(task='text-generation', model="/content/gpt2_")

Here's how you run greedy decoding (the default when no sampling or beam arguments are given):

fill_masker("hello my name is inigo montoya you", max_length=20)

Maybe you can ask the Hugging Face community about alternative search methods.
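One common alternative is sampling: instead of always taking the arg-max token, you draw the next token from the (truncated) probability distribution. With the pipeline above I believe you would pass generate-style kwargs such as do_sample=True and top_k to the call. Below is a minimal stdlib-only sketch of top-k sampling over a single toy logit vector (the vocabulary size and scores are invented for illustration, not taken from the model):

```python
import math
import random

def top_k_sample(logits, k, rng):
    """Sample one token id from the k highest-scoring logits."""
    # Keep the indices of the k largest logits, discard the rest.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Softmax over only the kept logits (subtract the max for stability).
    m = max(logits[i] for i in top)
    weights = [math.exp(logits[i] - m) for i in top]
    return rng.choices(top, weights=weights, k=1)[0]

# Toy example: a 5-token vocabulary with made-up scores.
logits = [2.0, 1.0, 0.5, -1.0, -3.0]
rng = random.Random(0)
token = top_k_sample(logits, k=3, rng=rng)
# token is always one of the three best-scoring ids: 0, 1 or 2
```

The real implementation lives in model.generate; this sketch only shows why top-k sampling can return different, lower-ranked tokens on different calls while still excluding the tail of the distribution.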


Also, here's how you do beam search with Hugging Face:

fill_masker("hello my name is inigo montoya you", num_return_sequences=20, num_beams=60, batch_size=20)

Refer to this link to explore the different search strategies:
https://huggingface.co/docs/transformers/main_classes/text_generation
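To see why beam search can beat greedy decoding, here is a stdlib-only sketch over a two-step toy model (all probabilities are invented for illustration): beam search keeps the num_beams best partial sequences at every step instead of committing to a single token.

```python
import math

# Toy conditional log-probs; every number here is made up for illustration.
start_scores = {"a": math.log(0.4), "b": math.log(0.6)}
next_scores = {
    "a": {"c": math.log(0.95), "d": math.log(0.05)},
    "b": {"c": math.log(0.5), "d": math.log(0.5)},
}

def greedy(start, nxt):
    """Commit to the single best token at each step."""
    t1 = max(start, key=start.get)
    t2 = max(nxt[t1], key=nxt[t1].get)
    return (t1, t2), start[t1] + nxt[t1][t2]

def beam_search(start, nxt, num_beams=2):
    """Keep the num_beams best partial sequences, expand each, pick the best."""
    beams = sorted(((lp, (t,)) for t, lp in start.items()), reverse=True)[:num_beams]
    candidates = [
        (lp + nxt[seq[-1]][t], seq + (t,))
        for lp, seq in beams
        for t in nxt[seq[-1]]
    ]
    return max(candidates)

g_seq, g_lp = greedy(start_scores, next_scores)
b_lp, b_seq = beam_search(start_scores, next_scores)
# Greedy commits to "b" (prob 0.6) and ends at 0.6 * 0.5 = 0.30,
# while beam search keeps "a" alive and finds ("a", "c") with 0.4 * 0.95 = 0.38.
```

The transformers library does the same pruning over real token log-probs inside model.generate when num_beams > 1; this sketch is only the search logic, not the library's implementation.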

ghost changed the title from "change the decode mode from Beam search to something else" to "gpt" on Jul 28, 2023