- 22 Sep, 2021 2 commits
- Lysandre Debut authored
  * Patch training arguments issue
  * Update src/transformers/training_args.py
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
- 10 Sep, 2021 6 commits
- Patrick von Platen authored
  * fix
  * 2nd fix
- Patrick von Platen authored
- patrickvonplaten authored
- Nicolas Patry authored
  * Fixing #13381
  * Enabling automatic LED models.
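  A minimal sketch of what "automatic" LED support enables, assuming the "allenai/led-base-16384" checkpoint: the generic Auto classes now resolve to the LED implementations.

  ```python
  from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
  # Resolves to LEDForConditionalGeneration via the auto mappings
  model = AutoModelForSeq2SeqLM.from_pretrained("allenai/led-base-16384")
  ```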
- Nicolas Patry authored
- Patrick von Platen authored
  * finalize
  * Apply suggestions from code review
  * finish cleaner implementation
  * more tests
  * small fix
  * finish
  * up
- 31 Aug, 2021 17 commits
- Patrick von Platen authored
  * correct
  * also comment out multi-gpu test push
- Sylvain Gugger authored
  * Add generate kwargs to Seq2SeqTrainingArguments
  * typo
  * Address review comments + doc
  * Style
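  A minimal sketch of the new arguments, assuming the names `generation_max_length` and `generation_num_beams` from this change; they are forwarded to `generate()` during evaluation when `predict_with_generate=True`.

  ```python
  from transformers import Seq2SeqTrainingArguments

  args = Seq2SeqTrainingArguments(
      output_dir="out",            # placeholder path
      predict_with_generate=True,  # use generate() for eval predictions
      generation_max_length=128,   # overrides model.config.max_length at eval time
      generation_num_beams=4,      # overrides model.config.num_beams at eval time
  )
  ```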
- Matt authored
- Lysandre authored
- Matt authored
  * Adding a TF variant of the DataCollatorForTokenClassification to get feedback
  * Added a Numpy variant and a post_init check to fail early if a missing import is found
  * Fixed call to Numpy variant
  * Added a couple more of the collators
  * Update src/transformers/data/data_collator.py
  * Fixes, style pass, finished DataCollatorForSeqToSeq
  * Added all the LanguageModeling DataCollators, except SOP and PermutationLanguageModeling
  * Adding DataCollatorForPermutationLanguageModeling
  * Style pass
  * Add missing `__call__` for PLM
  * Remove `post_init` checks for frameworks because the imports inside them were making us fail code quality checks
  * Remove unused imports
  * First attempt at some TF tests
  * A second attempt to make any of those tests actually work
  * TF tests, rounds three through five
  * TF tests, all enabled!
  * Style pass
  * Merging tests into `test_data_collator.py`
  * Fixing up test imports
  * Trying shuffling the conditionals around
  * Commenting out non-functional old tests
  * Completed all tests for all three frameworks
  * Style pass
  * Fixed test typo
  * Style pass
  * Move standard `__call__` method to mixin
  * Rearranged imports for `test_data_collator`
  * Fix data collator typo "torch" -> "pt"
  * Fixed the most embarrassingly obvious bug
  * Renaming mixin
  * Updating docs
  Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
  Co-authored-by: Dalton Walker <dalton_walker@icloud.com>
  Co-authored-by: Andrew Romans <andrew.romans@hotmail.com>
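  A minimal sketch of the resulting framework-agnostic collator API: the same class pads a batch and returns PyTorch ("pt"), TensorFlow ("tf"), or NumPy ("np") tensors via `return_tensors` (the feature values below are illustrative).

  ```python
  from transformers import AutoTokenizer, DataCollatorForTokenClassification

  tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
  collator = DataCollatorForTokenClassification(tokenizer, return_tensors="np")

  features = [
      {"input_ids": [101, 7592, 102], "labels": [0, 1, 0]},
      {"input_ids": [101, 7592, 2088, 999, 102], "labels": [0, 1, 1, 0, 0]},
  ]
  batch = collator(features)  # inputs padded by the tokenizer, labels padded with -100
  ```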
- Sylvain Gugger authored
- Jongheon Kim authored
  Set missing seq_length variable when using inputs_embeds with ALBERT & Remove code duplication (#13152)
  * Set seq_length variable when using inputs_embeds
  * remove code duplication
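  A minimal sketch of the call path this fixes: feeding ALBERT `inputs_embeds` instead of `input_ids` previously failed because `seq_length` was never set.

  ```python
  from transformers import AlbertModel, AlbertTokenizerFast

  tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
  model = AlbertModel.from_pretrained("albert-base-v2")

  input_ids = tokenizer("hello world", return_tensors="pt").input_ids
  inputs_embeds = model.get_input_embeddings()(input_ids)
  outputs = model(inputs_embeds=inputs_embeds)  # works after the fix
  ```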
- Jake Tae authored
  `at` should be `a1`
- Stas Bekman authored
  fix a few implementation links
- Sylvain Gugger authored
- Kamal Raj authored
  * Deberta_v2 tf
  * added new line at the end of file, make style
  * +V2, typo
  * remove never executed branch of code
  * removed comment and fixed typo in url filter
  * cleanup according to review comments
  * added #Copied from
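  A minimal sketch of the TensorFlow DeBERTa-v2 port this adds, assuming the "microsoft/deberta-v2-xlarge" checkpoint.

  ```python
  from transformers import AutoTokenizer, TFDebertaV2Model

  tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
  model = TFDebertaV2Model.from_pretrained("microsoft/deberta-v2-xlarge")

  inputs = tokenizer("DeBERTa-v2, now in TensorFlow", return_tensors="tf")
  outputs = model(**inputs)
  print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
  ```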
- Apoorv Garg authored
- tucan9389 authored
  * Add GPT2ForTokenClassification
  * Fix dropout exception for GPT2 NER
  * Remove sequence label in test
  * Change TokenClassifierOutput to TokenClassifierOutputWithPast
  * Fix for black formatter
  * Remove dummy
  * Update docs for GPT2ForTokenClassification
  * Fix check_inits ci fail
  * Update dummy_pt_objects after make fix-copies
  * Remove TokenClassifierOutputWithPast
  * Fix tuple input issue
  Co-authored-by: danielsejong55@gmail.com <danielsejong55@gmail.com>
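  A minimal sketch of the new token-classification head on GPT-2 (the checkpoint and `num_labels` are illustrative).

  ```python
  import torch
  from transformers import GPT2ForTokenClassification, GPT2TokenizerFast

  tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
  model = GPT2ForTokenClassification.from_pretrained("gpt2", num_labels=5)

  inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
  with torch.no_grad():
      logits = model(**inputs).logits  # (batch, sequence_length, num_labels)
  predictions = logits.argmax(-1)
  ```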
- Serhiy-Shekhovtsov authored
- Patrick von Platen authored
  * up
  * finish
  * Apply suggestions from code review
  * apply Lysandre's suggestions
  * adapt circle ci as well
  * finish
  * Update setup.py
- Sylvain Gugger authored
  * Incorporate tests dependencies in tests_fetcher
  * Harder modif
  * Debug
  * Loop through all files
  * Last modules
  * Remove debug statement
- 30 Aug, 2021 15 commits
- Olatunji Ruwase authored
  * Use DS callable API to allow hf_scheduler + ds_optimizer
  * Preserve backward-compatibility
  * Restore backward compatibility
  * Tweak arg positioning
  * bump the required version
  * Undo indent
  * Update src/transformers/trainer.py
  * style
  Co-authored-by: Stas Bekman <stas@stason.org>
  Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
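  A hedged sketch of the combination this enables: a DeepSpeed config that defines an optimizer but no scheduler, so the Trainer pairs its own (HF) learning-rate scheduler with the DeepSpeed optimizer (the config values are illustrative).

  ```python
  ds_config = {
      "train_micro_batch_size_per_gpu": "auto",
      "optimizer": {"type": "AdamW", "params": {"lr": "auto"}},
      # no "scheduler" block: the Trainer creates the HF scheduler instead
  }
  # then: TrainingArguments(deepspeed=ds_config, ...)
  ```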
- Laura Hanu authored
  * added missing __spec__ to _LazyModule
  * test __spec__ is not None after module import
  * changed module_spec arg to be optional in _LazyModule
  * fix style issue
  * added module spec test to test_file_utils
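  A minimal sketch of what the fix guarantees: the lazily-initialized top-level module now exposes a module spec, which importlib-based tooling expects.

  ```python
  import transformers

  assert transformers.__spec__ is not None
  ```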
- Sylvain Gugger authored
  * Fix release utils
  * Update docs/source/conf.py
  Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
- Sylvain Gugger authored
  * Fix AutoTokenizer when a tokenizer has no fast version
  * Add test
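  A minimal sketch of the behaviour fixed here, assuming Transformer-XL as a model that ships only a slow tokenizer: AutoTokenizer now falls back to the slow class instead of erroring out.

  ```python
  from transformers import AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("transfo-xl-wt103", use_fast=True)
  print(type(tokenizer).__name__)  # TransfoXLTokenizer, since no fast version exists
  ```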
- Li-Huai (Allan) Lin authored
  * Correct outdated function signatures on website.
  * Upgrade sphinx to 3.5.4 (latest 3.x)
  * Test
  * Revert unnecessary changes.
  * Change sphinx version to 3.5.4
  * Test python 3.7.11
- Kamal Raj authored
  * albert flax
  * year -> 2021
  * docstring updated for flax
  * removed head_mask
  * removed from_pt
  * removed passing attention_mask to embedding layer
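  A minimal sketch of the new Flax ALBERT port; Flax models consume NumPy arrays rather than framework tensors.

  ```python
  from transformers import AlbertTokenizerFast, FlaxAlbertModel

  tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
  model = FlaxAlbertModel.from_pretrained("albert-base-v2")

  outputs = model(**tokenizer("ALBERT in Flax", return_tensors="np"))
  print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
  ```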
- Ben Nimmo authored
- Maxwell Forbes authored
- Nathan Raw authored
  * fix small model card bugs
  * style
- Sylvain Gugger authored
- fcakyon authored
  When the `NEPTUNE_RUN_ID` environment variable is set, Neptune will log into the previous run with id `NEPTUNE_RUN_ID`.
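  A hedged sketch of the resume behaviour described above ("PROJ-123" is a hypothetical run id).

  ```python
  import os

  # Attach to an existing Neptune run instead of creating a new one
  os.environ["NEPTUNE_RUN_ID"] = "PROJ-123"
  # ... then launch training with report_to=["neptune"] in TrainingArguments
  ```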
- Sylvain Gugger authored
- Li-Huai (Allan) Lin authored
  * Check None before going through iteration
  * Format
- Kamal Raj authored
  * distilbert-flax
  * added missing self
  * docs fix
  * removed tied kernel extra init
  * updated docs
  * x -> hidden states
  * removed head_mask
  * removed from_pt, +FLAX
  * updated year
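  A minimal sketch of the new Flax DistilBERT port, mirroring the Flax ALBERT addition above.

  ```python
  from transformers import DistilBertTokenizerFast, FlaxDistilBertModel

  tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
  model = FlaxDistilBertModel.from_pretrained("distilbert-base-uncased")

  outputs = model(**tokenizer("DistilBERT in Flax", return_tensors="np"))
  print(outputs.last_hidden_state.shape)
  ```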
- arfy slowy authored
  * fix: typo spelling grammar
  * fix: make fixup