forked from huggingface/transformers
merge with master #5 (Open)

xiaoda99 wants to merge 9,112 commits into caiyunapp:master from huggingface:master

No description provided.
Conversation
julien-c force-pushed the master branch 2 times, most recently from 9539397 to 274d850 on May 8, 2020 at 16:40
* Fix reshape
* Apply suggestion from code review

Co-authored-by: Niels Rogge <[email protected]>
* Update delete-dev-doc job to match build-dev-doc
* More debug info
* More debug info
* Stash if needed
* Remove the comment update
* Fix paths
* Wtf is going on..
* Fix git status test
* Try another way
* I don't understand what's happening
* Bash shell
* What's happening now...
* What's happening now...
* Try like this
* Back to trying to use bash
* And like that?
* Refine tests
* Stash after adding new files
* Stash after adding new files
* Proper commit sha and PR number
* Address review comments
* send PyTorch inputs to the correct device
* Fix: TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Co-authored-by: ydshieh <[email protected]>
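A minimal sketch of the device handling this commit describes; the tensor name and shape are illustrative, not taken from the PR:

```python
import torch

# Illustrative tensor; in the PR this would be a model output living on the GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
logits = torch.randn(2, 5, device=device)

# Calling .numpy() directly on a CUDA tensor raises:
#   TypeError: can't convert cuda:0 device type tensor to numpy.
# Copying to host memory first, as the fix does, avoids that:
logits_np = logits.detach().cpu().numpy()
print(logits_np.shape)  # (2, 5)
```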
* Draft
* Add test
* Update src/transformers/models/realm/modeling_realm.py
* Apply suggestion
* Add block_mask
* Update
* Update
* Add block_embedding_to
* Remove no_grad
* Use AutoTokenizer
* Remove model.to overriding
* Freeze FlaxWav2Vec2 Feature Encoder
* add to all module apply
* add backprop test
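A hedged sketch of what the new behaviour might look like from user code; the checkpoint name and the `freeze_feature_encoder` keyword are inferred from the commit titles above and may not match the final API exactly:

```python
import numpy as np
from transformers import FlaxWav2Vec2ForCTC

# Checkpoint name is an assumption; from_pt=True converts PyTorch weights
# on the fly and therefore also requires torch to be installed.
model = FlaxWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h", from_pt=True)

# One second of dummy audio at 16 kHz stands in for a real waveform.
dummy_audio = np.zeros((1, 16000), dtype=np.float32)

# Passing the flag named in the commit should stop gradients from flowing
# into the convolutional feature encoder during fine-tuning.
outputs = model(dummy_audio, freeze_feature_encoder=True)
print(outputs.logits.shape)
```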
in the scoring (which is more correct)
* finish speech doc tests
* finish
* boom
* Update src/transformers/models/speech_to_text/modeling_speech_to_text.py

Co-authored-by: Sylvain Gugger <[email protected]>
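For context, the doc tests touched here exercise roughly this kind of usage; the checkpoint and the silent dummy waveform below are stand-ins, not taken from the PR:

```python
import numpy as np
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor

ckpt = "facebook/s2t-small-librispeech-asr"  # assumed checkpoint
processor = Speech2TextProcessor.from_pretrained(ckpt)
model = Speech2TextForConditionalGeneration.from_pretrained(ckpt)

# One second of silence at 16 kHz stands in for real speech.
waveform = np.zeros(16000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

generated_ids = model.generate(inputs.input_features, attention_mask=inputs.attention_mask)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```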
* Enabling MaskFormer in pipelines. No AutoModel though :(
* Oops, local file.
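A sketch of how the pipeline might be used after this change; because there is no AutoModel mapping yet (as the commit notes), the model and feature extractor are passed explicitly. The checkpoint and image URL are assumptions:

```python
from transformers import (
    MaskFormerFeatureExtractor,
    MaskFormerForInstanceSegmentation,
    pipeline,
)

ckpt = "facebook/maskformer-swin-base-ade"  # assumed checkpoint
feature_extractor = MaskFormerFeatureExtractor.from_pretrained(ckpt)
model = MaskFormerForInstanceSegmentation.from_pretrained(ckpt)

# No AutoModel entry yet, so both pieces go to the pipeline explicitly.
segmenter = pipeline("image-segmentation", model=model, feature_extractor=feature_extractor)

results = segmenter("http://images.cocodataset.org/val2017/000000039769.jpg")
print(results[0].keys())  # e.g. label, score, mask
```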
* Add vision models to doc tests
* Apply suggestions from code review
* Add more models

Co-authored-by: Niels Rogge <[email protected]>
* Fix to support fast tokenizer with `CLIPProcessor`
* Update CLIPProcessor test for fast tokenizer
* Fix docstring style
* Rename to a meaningful variable name in test code
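The end-user call pattern is unchanged by this fix; a processor backed by the fast tokenizer should now behave the same as the slow one. A standard CLIP usage sketch, with checkpoint and image URL as assumptions:

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

ckpt = "openai/clip-vit-base-patch32"  # assumed checkpoint
processor = CLIPProcessor.from_pretrained(ckpt)  # now also works when backed by the fast tokenizer
model = CLIPModel.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)
print(outputs.logits_per_image.softmax(dim=-1))
```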
Linked to #15826
Small adjustments. Adding in a type hint. Last fix? Only include the default dict, not the pipelines.
* Adding Flax XLM-RoBERTa
* Add Flax to __init__
* Adding doc and dummy objects
* Add tests
* Add Flax XLM-R models autodoc
* Fix tests
* Add Flax XLM-RoBERTa to TEST_FILES_WITH_NO_COMMON_TESTS
* Update src/transformers/models/xlm_roberta/modeling_flax_xlm_roberta.py
* Update tests/xlm_roberta/test_modeling_flax_xlm_roberta.py
* Update tests/xlm_roberta/test_modeling_flax_xlm_roberta.py
* Remove test on large Flax XLM-RoBERTa
* Add tokenizer to the test

Co-authored-by: Suraj Patil <[email protected]>
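A minimal sketch of loading the newly added Flax classes; the checkpoint name and the `from_pt=True` conversion are assumptions (the base checkpoint may only ship PyTorch weights):

```python
from transformers import AutoTokenizer, FlaxXLMRobertaModel

ckpt = "xlm-roberta-base"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(ckpt)
# from_pt=True converts PyTorch weights on the fly and requires torch installed.
model = FlaxXLMRobertaModel.from_pretrained(ckpt, from_pt=True)

inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```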
…15918)

* Do not change the output from tuple to list - to match PT's version
* Fix the same issues for 5 other models and the template

Co-authored-by: ydshieh <[email protected]>
* proper tests for post_process*** methods in feature extractor
* mask th == 0
* Update tests/maskformer/test_feature_extraction_maskformer.py
* make style

Co-authored-by: Sylvain Gugger <[email protected]>
* Add typing hints for base model class
* Add typing hints for causal LM model class
* Add typing hints for double heads model class
* Add typing hints for sequence classification model class
* Add typing hints for Main Layer
* Run fixup
* Added type hints for PyTorch T5 model
* removed a type hint
* ran make style
* Remove unused attributes
* Add link to blog and add clarification about input size
* Improve readability of the code

Co-authored-by: Niels Rogge <[email protected]>
* 📝 first draft
* 🖍 apply feedback
Co-authored-by: ydshieh <[email protected]>
* Indent Seq2Seq Train Args docs
* Add Args keyword to Seq2Seq Train Args docs
* update results
* per-language metrics
* Format the per-language metrics
* Add Flaubert to ONNX to make it available for conversion.
* Fixed features for FlauBERT. fixup command remove flaubert to docs list.

Co-authored-by: ChainYo <[email protected]>
* Add type annotations for TF Longformer
* Update docstring data types to include numpy array
* Implement unpack_inputs decorator
* fixup after decorator updates
* Numpy array -> np.ndarray in docstring

Co-authored-by: Johnny Greco <[email protected]>
* First draft
* Fix logits calculation
* Improve tests
* Add copied from statements
* Fix base_model_prefix
* Improve implementation, upload new models
* Update design
* Fix integration test
* Add model to README and toctree
* Add document image
* Apply suggestions from code review
* Apply suggestions from code review
* Add decoder_hidden_size attribute
* Update design of decoder
* Add DepthEstimatorOutput class
* Rename in_index to head_in_index and add feature extractor tests
* Apply suggestions from code review
* Apply suggestions from code review
* Update pretrained model name and add to doc tests
* Remove test.py script
* Update copied from statements and clean up

Co-authored-by: Niels Rogge <[email protected]>
Co-authored-by: Sylvain Gugger <[email protected]>
* add xglm conversion script
* style
* update script
* Fix bugs for argument typo and positional embedding weight loading
* Reflect code review suggestion to cover different missing keys cases
* add pt funnel type hints
* add tf funnel type hints
* Add link to notebook
* Add link
* Fix bug

Co-authored-by: Niels Rogge <[email protected]>
* Added type hinting for forward functions in PyTorch Marian
* typo correction
* Removed type hints on functions from BART per Suraj Patil's request
* fix import problem
* fix typo
* corrected tuple call
* ran black
* after fix-copies: some Optional tags on primitives were removed; past_key_values in MarianForCausalLM changed from Tuple of Tuple to List
* Fixing copies to roformer and pegasus

Co-authored-by: Clementine Fourrier <[email protected]>
Co-authored-by: matt <[email protected]>
* undo black autoformat
* minor fix to rembert forward with default
* make fix-copies, make quality
* Adding types to template model
* Removing List from the template types
* Remove `Optional` from a couple of types that don't accept `None`

Co-authored-by: matt <[email protected]>
* ✨ refactor code samples with framework-specific blocks
* ✨ update training.mdx
* 🖍 apply feedback