- 17 Nov, 2023 (5 commits)
  - Nicolas Patry
  - Joao Gante
  - Joao Gante
  - Yih-Dar: fix (Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>)
  - Yih-Dar: fix; fix (Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>)
- 16 Nov, 2023 (13 commits)
  - jiaqiw09: translate; update
  - Nathaniel Egwu: Update albert.md doc for the ALBERT model: fix the Resources heading, fix the resource example for fine-tuning ALBERT on sentence-pair classification, and remove duplicated resources (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
  - Joao Gante
  - Arthur: Switch styling to ruff: use `ruff format` and `ruff check` instead of the previous checker; use isinstance instead of type comparison; apply `# fmt: skip` where possible (tapas and others, using the recommended form); update the CI job and the pinned ruff version; remove `indent-width = 4`; remove file I/O and async usage; revert docbuilder and notebook changes and keep the print for detected changes; assorted style fixes and nits (Co-authored-by: charliermarsh <charlie.r.marsh@gmail.com>)
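The ruff migration above leans heavily on the `# fmt: skip` marker. A minimal sketch of what it does (the dict below is an invented example, not code from the repo):

```python
# `# fmt: skip` tells ruff format (and black) to leave this one line's
# layout exactly as written, so manual column alignment survives a
# repository-wide reformat.
THRESHOLDS = {"low": 1,   "mid": 10,  "high": 100}  # fmt: skip

print(THRESHOLDS["mid"])
```

Without the marker, the formatter would collapse the extra spacing between entries.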
  - Yih-Dar: fix (Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>)
  - Marc Sun: add error msg
  - Lucain: Set usedforsecurity=False in hashlib methods (FIPS compliance); bump the tokenizers and huggingface_hub version pins; trigger CI
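Context for the FIPS change above: on FIPS-enforcing OpenSSL builds, constructing an MD5 digest fails unless it is flagged as a non-security use. A minimal sketch of the flag, assuming a made-up `b"cache-key"` input rather than the library's actual call sites:

```python
import hashlib
import sys

# usedforsecurity=False (added in Python 3.9) marks the digest as
# non-cryptographic, which keeps it usable on FIPS-restricted builds
# that would otherwise block MD5. Guarded here for older interpreters.
if sys.version_info >= (3, 9):
    digest = hashlib.md5(b"cache-key", usedforsecurity=False).hexdigest()
else:
    digest = hashlib.md5(b"cache-key").hexdigest()

print(len(digest))  # a hex MD5 digest is 32 characters
```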
  - Patrick von Platen: Revert "add attention_mask and position_ids in assisted model (#26892)" (reverts commit 184f60dc); more debug
  - Matt: Move the TF pin for 2.15; make fixup
  - Phuc Van Phan
  - Arthur: add flash attn markers
  - Dean Wyatte: support ONNX export for causal LM sequence classification
  - Hz, Ji: translate model.md to Chinese; apply review suggestion (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
- 15 Nov, 2023 (15 commits)
  - Marc Sun: fix; style; add test
  - JiangZhongqing: Fix bug in handling varying encoder and decoder layers: the script failed to convert T5x models to PyTorch when the number of decoder layers differed from the number of encoder layers; fixed by passing an additional `num_decoder_layers` parameter to the relevant function
  - Matt: Remove the torch main_process_first context manager from TF examples; correctly set num_beams=1 in the examples and add a guard in GenerationConfig.validate() (Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>)
  - Adam Louly: fix max pos issue (Co-authored-by: Adam Louly <adamlouly@microsoft.com@orttrainingdev9.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>)
  - Yuki-Imajuku: Update _toctree.yml and add albert-autoformer Japanese docs; fix a typo in docs/source/ja/model_doc/audio-spectrogram-transformer.md; delete a duplicated sentence in docs/source/ja/model_doc/autoformer.md; delete untranslated models from the toctree; add abstract translation (Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>)
  - Zach Mueller: Fix test
  - Arthur: Catch the Hugging Face Hub import error at the correct place in src/transformers/utils/hub.py, improve the message, and add a test (Co-authored-by: Lucain <lucainp@gmail.com>)
  - Xin Qiu: Fix beam score calculation issue for decoder-only models: update the beam search test, fix beam_sample, group_beam_search and constrained_beam_search, split the test for PyTorch and TF, and add documentation (Co-authored-by: Xin Qiu <xin.qiu@sentient.ai>)
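The beam-score fix above concerns how decoder-only models length-normalize beam scores. A generic, hypothetical sketch of the idea (function name and numbers are illustrative, not the library's code): only tokens generated after the prompt should count toward the normalization length.

```python
def beam_score(token_logprobs, prompt_len, length_penalty=1.0):
    """Length-normalized log-probability of one beam.

    For decoder-only models the prompt tokens are part of the sequence,
    but only the tokens generated after the prompt should contribute to
    the normalization length.
    """
    generated = token_logprobs[prompt_len:]
    return sum(generated) / (len(generated) ** length_penalty)

# Two generated tokens after a 2-token prompt: (-0.3 - 0.4) / 2 = -0.35
score = beam_score([-0.1, -0.2, -0.3, -0.4], prompt_len=2)
```

Normalizing over the full sequence instead would dilute the penalty with prompt tokens and skew beam ranking.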
  - Arthur: update the `tokenizers` version pin: force tokenizers>=0.15, then use 0.14 (Co-authored-by: Lysandre <lysandre@huggingface.co>)
  - Yih-Dar: fix (Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>)
  - Arthur: skip 4 tests; skip new failing tests, including for NLLB MoE; skip `test_assisted_decoding_sample` for everyone
  - Phyzer: Fix misspelling: correct "thouroughly" to "thoroughly"
  - NielsRogge: Improve conversion scripts; fix paths; fix style
  - NielsRogge: Add tests and an integration test; skip gradient checkpointing tests; remove Fuyu from the auto mapping; add Fuyu to the slow documentation tests; update and remove scripts; address review comments and clarify a comment
  - Arthur: skip 4 tests; skip new failing tests, including for NLLB MoE
- 14 Nov, 2023 (7 commits)
  - Zach Mueller: Add a count of tokens seen; add it to TrainingArgs via self.args, update the log, and fix the docstring (Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>)
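The tokens-seen commit above tracks how many input tokens training has consumed. A hypothetical sketch of the bookkeeping, assuming an invented helper name and pad id rather than the Trainer's actual attributes:

```python
def count_batch_tokens(input_ids, pad_token_id=0):
    """Count non-padding token ids in a batch of token-id sequences."""
    return sum(1 for seq in input_ids for tok in seq if tok != pad_token_id)

# Accumulate across training batches, the way a per-step counter would.
total_tokens_seen = 0
for batch in ([[5, 6, 0], [7, 0, 0]], [[1, 2, 3]]):
    total_tokens_seen += count_batch_tokens(batch)
# total_tokens_seen == 6  (three real tokens in each batch)
```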
  - amyeroberts
  - Zach Mueller: Have the seq2seq trainer always use gather; simplify the logic, update tests/trainer/test_trainer_seq2seq.py, and mark the test slow (Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>, amyeroberts <22614925+amyeroberts@users.noreply.github.com>)
  - Costa Huang: Minor type annotation fix; trigger build
  - Joao Gante
  - Matt: Update and reorder docs for chat templates; fix the Mistral docstring and remove an unneeded line in its example; add a section link; add a comment on saving memory; fix the generation prompts link and code block languages
  - Joao Gante: fix exponential doctest