- 31 Aug, 2023 9 commits
-
-
Yih-Dar authored
update
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
David Reguera authored
* Add type hints to `TFBlipTextModel`
* Add missing type hints to DPR family models
* Add type hints to `TFLEDModel`
* Add type hints to `TFLxmertForPreTraining`
* Add missing type hints to `TFMarianMTModel` and `TFMarianModel`
* Add missing type hints to `TFRagModel` & `TFRagTokenForGeneration`
* Make type hints annotations consistent
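For reference, a minimal sketch of the annotation style these commits add to the TF models; the class and argument names below are illustrative placeholders, not the actual model code:

```python
from typing import Optional, Tuple, Union

import tensorflow as tf


class TFToyModel(tf.keras.layers.Layer):
    # Illustrative only: mirrors the style of type hints added to the TF models above.
    def call(
        self,
        input_ids: Optional[tf.Tensor] = None,
        attention_mask: Optional[tf.Tensor] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
        training: bool = False,
    ) -> Union[Tuple[tf.Tensor, ...], dict]:
        ...
```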
-
Younes Belkada authored
fix instructblip test
-
raghavanone authored
* Save image_processor while saving pipeline (ImageSegmentationPipeline)
* Fix black issues
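A minimal usage sketch of what this fixes (the checkpoint name and local paths are assumptions): with the change, `save_pretrained` on an image-segmentation pipeline also writes the image processor, so the pipeline can be reloaded from the saved directory without passing the processor explicitly.

```python
from transformers import pipeline

# Assumed checkpoint; any image-segmentation model works the same way.
pipe = pipeline("image-segmentation", model="facebook/detr-resnet-50-panoptic")

# Previously the image processor config was not written here; now it is saved too.
pipe.save_pretrained("./my-segmentation-pipeline")

# Reloading from the saved directory therefore picks up the image processor as well.
reloaded = pipeline("image-segmentation", model="./my-segmentation-pipeline")
```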
-
Arthur authored
* Fix CodeLlama
* style
-
Arthur authored
[`TokenizerFast`] `can_save_slow_tokenizer` as a property for when `vocab_file`'s folder was removed (#25626)
* pad token should be None by default
* fix tests
* nits
* check if isfile vocabfile
* add warning if sp model folder was deleted
* save SPM when missing folder for slow
* update the `can_save_slow_tokenizer` to be a property
* first batch
* second batch
* missing one
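A short sketch of the behaviour this targets (checkpoint and paths are placeholders): because `can_save_slow_tokenizer` is now a property, it is re-evaluated at access time and reflects whether the original sentencepiece `vocab_file` still exists on disk.

```python
from transformers import AutoTokenizer

# Assumed checkpoint with a sentencepiece-based slow tokenizer.
tok = AutoTokenizer.from_pretrained("t5-small", use_fast=True)

# Now a property: True only if the original vocab_file is still present on disk.
print(tok.can_save_slow_tokenizer)

# Saving works either way; per the commit above, if the sentencepiece folder was
# removed, only the fast tokenizer files are written and a warning is emitted.
tok.save_pretrained("./t5-small-tokenizer")
```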
-
Vibhor Kumar authored
* Modify single-GPU efficient training doc with now-available adamw_bnb_8bit optimizer
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
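The optimizer the doc update describes is enabled through `TrainingArguments`; a minimal sketch (output directory and batch size are placeholders, and `bitsandbytes` must be installed):

```python
from transformers import TrainingArguments

# Selects the 8-bit AdamW optimizer from bitsandbytes, which stores optimizer
# state in 8 bits instead of 32 and substantially reduces optimizer memory.
args = TrainingArguments(
    output_dir="./out",
    per_device_train_batch_size=4,
    optim="adamw_bnb_8bit",
)
```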
-
Sourab Mangrulkar authored
* fix ds z3 checkpointing when `stage3_gather_16bit_weights_on_model_save=False`
* refactoring
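Context for the fix, as a hedged sketch: with ZeRO-3 and this flag disabled, no consolidated 16-bit state dict is gathered at save time, so checkpointing has to rely on DeepSpeed's sharded checkpoint files instead. A minimal config fragment showing the relevant flag (field names follow DeepSpeed's documented schema; the rest of the config is omitted):

```python
ds_config = {
    "zero_optimization": {
        "stage": 3,
        # When False, a full 16-bit state dict is not gathered on save;
        # the Trainer must fall back to DeepSpeed's sharded checkpoint.
        "stage3_gather_16bit_weights_on_model_save": False,
    },
    "bf16": {"enabled": True},
}
```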
-
qihqi authored
* For xla tensors, use an alternative way to get a unique id, because xla tensors don't have storage
* add is_torch_tpu_available check
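A hedged sketch of the idea (the helper name is hypothetical, not the actual Trainer code): XLA tensors expose no storage, so storage-address-based identification fails on TPU and a fallback such as the Python `id()` is needed.

```python
import torch
from transformers import is_torch_tpu_available


def tensor_identity(tensor: torch.Tensor) -> int:
    # Hypothetical helper illustrating the commit's idea, not the real implementation.
    if is_torch_tpu_available() and tensor.device.type == "xla":
        # XLA tensors have no storage, so fall back to the Python object id.
        return id(tensor)
    # Regular tensors can be identified by the address of their data storage.
    return tensor.data_ptr()
```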
-
- 30 Aug, 2023 11 commits
-
-
NielsRogge authored
Fix docstrings
-
Yih-Dar authored
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Yih-Dar authored
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Joao Gante authored
-
Marc Sun authored
-
Yih-Dar authored
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Lysandre Debut authored
-
Juan Pizarro authored
* Add Blip2 model in VQA pipeline
* use require_torch_gpu for test_large_model_pt_blip2
* use can_generate in vqa pipeline
* test Blip2ForConditionalGeneration using float16
* remove custom can_generate from Blip2ForConditionalGeneration
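A minimal usage sketch of the new support (the checkpoint name and image URL are assumptions; the BLIP-2 checkpoint is large and benefits from a GPU and float16, as the tests above do):

```python
import torch
from transformers import pipeline

vqa = pipeline(
    "visual-question-answering",
    model="Salesforce/blip2-opt-2.7b",
    torch_dtype=torch.float16,
    device=0,
)

result = vqa(
    image="https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
    question="How many birds are in the picture?",
)
print(result)
```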
-
Yih-Dar authored
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Aman Gupta Karmani authored
fix minor documentation typo
-
Nino Risteski authored
Deleted an unnecessary comma in the "Adding a new model" section.
-
- 29 Aug, 2023 20 commits
-
-
Joao Gante authored
-
Nino Risteski authored
Fixed a broken link in the _toctree.yml file.
-
Haylee Schäfer authored
* support loading base64 images
* add test
* mention in docs
* remove the logging
* sort imports
* update error message
* Update tests/utils/test_image_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* restructure to catch base64 exception
* doesn't like the newline
* download files
* format
* optimize imports
* guess it needs a space?
* support loading base64 images
* add test
* remove the logging
* sort imports
* restructure to catch base64 exception
* doesn't like the newline
* download files
* optimize imports
* guess it needs a space?
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
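A short sketch of the new capability (the file path is a placeholder): `load_image` from `transformers.image_utils` now also accepts a base64-encoded string, in addition to URLs, local paths and `PIL.Image` objects.

```python
import base64

from transformers.image_utils import load_image

# Placeholder path; any local image file works.
with open("cat.png", "rb") as f:
    b64_string = base64.b64encode(f.read()).decode("utf-8")

# After this change, a raw base64 payload is decoded into a PIL image as well.
image = load_image(b64_string)
print(image.size)
```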
-
amyeroberts authored
Allocate result array ahead of time
-
Sanchit Gandhi authored
-
Susnato Dhar authored
update checkpoints
-
Arthur authored
🤦 Update the warning to say "If you want to use the new behaviour, set `legacy=False`." instead of `legacy=True`.
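The flag the warning points to is passed at tokenizer load time; a minimal sketch (the checkpoint name is an assumption, and `legacy` applies to the sentencepiece-based tokenizers such as Llama and T5):

```python
from transformers import LlamaTokenizer

# Opt in to the corrected sentencepiece handling and silence the legacy warning.
tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b", legacy=False)
```
-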
Sohyun Sim authored
* docs: ko: community.md
* feat: deepl draft
* fix: manual edits
* fix: resolve suggestions
Co-authored-by: Hyeonseo Yun <0525yhs@gmail.com>
Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>
---------
Co-authored-by: Hyeonseo Yun <0525yhs@gmail.com>
Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>
-
heuristicwave authored
* docs: ko: add_new_pipeline.mdx
* feat: chatgpt draft
* fix: manual edits
* docs: ko: add_new_pipeline Update _toctree
* Update docs/source/ko/add_new_pipeline.md (applied multiple review suggestions)
Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>
---------
Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>
-
Joao Gante authored
Tests: detect lines removed from "utils/not_doctested.txt" and doctest ALL generation files (#25763)
-
Chau Nguyen authored
* Update trainer.py (error when checking steps in args.eval_accumulation_steps to gather tensors)
  While the deprecated code has the correct check (line 3772):
  `if args.eval_accumulation_steps is not None and (step + 1) % args.eval_accumulation_steps == 0:`
  the current code does not (line 3196):
  `if args.eval_accumulation_steps is not None and self.accelerator.sync_gradients:`
  We need to check `(step + 1) % args.eval_accumulation_steps == 0`, so line 3196 should be modified to:
  `if args.eval_accumulation_steps is not None and (step + 1) % args.eval_accumulation_steps == 0 and self.accelerator.sync_gradients:`
* Fix error with checking args.eval_accumulation_steps to gather tensors
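For context, the option whose check is being corrected is set on `TrainingArguments`; a minimal sketch (values are placeholders):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./out",
    per_device_eval_batch_size=8,
    # Move accumulated predictions from the accelerator to CPU every 4 evaluation
    # steps; this is the step count the corrected condition above re-introduces.
    eval_accumulation_steps=4,
)
```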
-
MinJae Kang authored
* docs: ko-model_memory_anatomy.md
* feat: chatgpt draft
* feat: manual edits
* feat: change document title
* feat: manual edits
* fix: resolve suggestions (applied multiple review suggestions)
Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>
Co-authored-by: heuristicwave <31366038+heuristicwave@users.noreply.github.com>
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
---------
Co-authored-by: SeongWooChoi <46990061+nuatmochoi@users.noreply.github.com>
Co-authored-by: heuristicwave <31366038+heuristicwave@users.noreply.github.com>
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
-
SeongWooChoi authored
* docs: ko: peft.mdx
* feat: chatgpt draft
* fix: manual edits
* fix: resolve suggestions
Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: heuristicwave <31366038+heuristicwave@users.noreply.github.com>
* fix: resolve suggestions
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
---------
Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: heuristicwave <31366038+heuristicwave@users.noreply.github.com>
Co-authored-by: Sohyun Sim <96299403+sim-so@users.noreply.github.com>
-
Dongkeun Yoon authored
* fix warning triggering for xglm.embed_positions
* Make TF variable a tf.constant to match (and fix some spelling)
---------
Co-authored-by: Matt <rocketknight1@gmail.com>
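A hedged sketch of the design choice (not the actual XGLM code; the function name and dimensions are illustrative): fixed sinusoidal position embeddings can be stored as a `tf.constant` rather than a `tf.Variable`, so they are not tracked as loadable weights and do not trigger the warning.

```python
import numpy as np
import tensorflow as tf


def make_sinusoidal_positions(num_positions: int, dim: int) -> tf.Tensor:
    # dim is assumed to be even; standard sinusoidal position table.
    position = np.arange(num_positions)[:, None]
    div_term = np.exp(np.arange(0, dim, 2) * -(np.log(10000.0) / dim))
    table = np.zeros((num_positions, dim), dtype=np.float32)
    table[:, 0::2] = np.sin(position * div_term)
    table[:, 1::2] = np.cos(position * div_term)
    # A constant, not a tf.Variable: the values are deterministic and never trained.
    return tf.constant(table)
```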
-
Arthur authored
* return when length is zero
* Add tests
Co-authored-by: Avnish Narayan <38871737+avnishn@users.noreply.github.com>
* Co-authored-by: avnishn <38871737+avnishn@users.noreply.github.com>
* codeLlama doc should not be on Main
* update test
---------
Co-authored-by: Avnish Narayan <38871737+avnishn@users.noreply.github.com>
-
Omar Sanseviero authored
* Update code_llama.md
* Update code_llama.md
-
zspo authored
-
Younes Belkada authored
-
Sourab Mangrulkar authored
fix bug
-
NielsRogge authored
* First draft
* More improvements
* Fix all tests
* More improvements
* Add backbone test
* Improve docstring
* Address comments
* Rename attribute
* Remove expected output
* Update src/transformers/models/dinov2/modeling_dinov2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Fix style
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
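A minimal usage sketch of the backbone API this DINOv2 work touches (the checkpoint name, stage choice and input size are assumptions):

```python
import torch
from transformers import AutoBackbone

# Assumed checkpoint; out_features selects which stages return feature maps.
backbone = AutoBackbone.from_pretrained("facebook/dinov2-base", out_features=["stage12"])

pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)
# One feature map per requested stage.
print([fm.shape for fm in outputs.feature_maps])
```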
-