- 23 Nov, 2023 1 commit
-
-
Jialong Wu authored
* update d_kv's annotation in mt5's configuration * update d_kv's annotation in mt5's configuration * update d_kv's annotation in mt5's configuration
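For context, a minimal sketch of the clarified annotation: in `MT5Config`, `d_kv` is the per-head width of the key/query/value projections and does not have to equal `d_model // num_heads` (the values below are the mt5-small defaults; the snippet is illustrative only).

```python
# Illustrative only: mt5-small ships with d_model=512, num_heads=6, d_kv=64,
# so d_kv is NOT required to equal d_model // num_heads.
from transformers import MT5Config

config = MT5Config(d_model=512, num_heads=6, d_kv=64)
inner_dim = config.num_heads * config.d_kv  # width of the attention projections
print(inner_dim)  # 384, independent of d_model
```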
-
- 22 Nov, 2023 6 commits
-
-
Strive-for-excellence authored
Co-authored-by:
张兴言 <SENSETIME\zhangxingyan1@cn0214006377l.domain.sensetime.com>
-
dg845 authored
* initial commit * Add initial testing files and modify __init__ files to add UnivNet imports. * Fix some bugs * Add checkpoint conversion script and add references to transformers pre-trained model. * Add UnivNet entries for auto. * Add initial docs for UnivNet. * Handle input and output shapes in UnivNetGan.forward and add initial docstrings. * Write tests and make them pass. * Write docs. * Add UnivNet doc to _toctree.yml and improve docs. * fix typo * make fixup * make fix-copies * Add upsample_rates parameter to config and improve config documentation. * make fixup * make fix-copies * Remove unused upsample_rates config parameter. * apply suggestions from review * make style * Verify and add reason for skipped tests inherited from ModelTesterMixin. * Add initial UnivNetGan integration tests * make style * Remove noise_length input to UnivNetGan and improve integration tests. * Fix bug and make style * Make UnivNet integration tests pass * Add initial code for UnivNetFeatureExtractor. * make style * Add initial tests for UnivNetFeatureExtractor. * make style * Properly initialize weights for UnivNetGan * Get feature extractor fast tests passing * make style * Get feature extractor integration tests passing * Get UnivNet integration tests passing * make style * Add UnivNetGan usage example * make style and use feature extractor from hub in integration tests * Update tips in docs * apply suggestions from review * make style * Calculate padding directly instead of using get_padding methods. * Update UnivNetFeatureExtractor.to_dict to be UnivNet-specific. * Update feature extractor to support using model(**inputs) and add the ability to generate noise and pad the end of the spectrogram in __call__. * Perform padding before generating noise to ensure the shapes are correct. * Rename UnivNetGan.forward's noise_waveform argument to noise_sequence. * make style * Add tests to test generating noise and padding the end for UnivNetFeatureExtractor.__call__. * Add tests for checking batched vs unbatched inputs for UnivNet feature extractor and model. * Add expected mean and stddev checks to the integration tests and make them pass. * make style * Make it possible to use model(**inputs), where inputs is the output of the feature extractor. * fix typo in UnivNetGanConfig example * Calculate spectrogram_zero from other config values. * apply suggestions from review * make style * Refactor UnivNet conversion script to use load_state_dict (following persimmon). * Rename UnivNetFeatureExtractor to UnivNetGanFeatureExtractor. * make style * Switch to using torch.tensor and torch.testing.assert_close for testing expected values/slices. * make style * Use config in UnivNetGan modeling blocks. * make style * Rename the spectrogram argument of UnivNetGan.forward to input_features, following Whisper. * make style * Improve padding documentation. * Add UnivNet usage example to the docs. * apply suggestions from review * Move dynamic_range_compression computation into the mel_spectrogram method of the feature extractor. * Improve UnivNetGan.forward return docstring. * Update table in docs/source/en/index.md. * make fix-copies * Rename UnivNet components to have pattern UnivNet*. * make style * make fix-copies * Update docs * make style * Increase tolerance on flaky unbatched integration test. * Remove torch.no_grad decorators from UnivNet integration tests to try to avoid Flax/TensorFlow test errors.
* Add padding_mask argument to UnivNetModel.forward and add batch_decode feature extractor method to remove padding. * Update documentation and clean up padding code. * make style * make style * Remove torch dependency from UnivNetFeatureExtractor. * make style * Fix UnivNetModel usage example * Clean up feature extractor code/docstrings. * apply suggestions from review * make style * Add comments for tests skipped via ModelTesterMixin flags. * Add comment for model parallel tests skipped via the test_model_parallel ModelTesterMixin flag. * Add # Copied from statements to copied UnivNetFeatureExtractionTest tests. * Simplify UnivNetFeatureExtractorTest.test_batch_decode. * Add support for unbatched padding_masks in UnivNetModel.forward. * Refactor unbatched padding_mask support. * make style
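A hedged end-to-end sketch of the vocoder API these commits describe, assuming the `dg845/univnet-dev` checkpoint referenced in the docs; the `pad_end` and `batch_decode` names are taken from the commit messages above, so check them against the released docstrings.

```python
# Sketch under assumptions: checkpoint name and pad_end/batch_decode usage
# follow the commit messages above, not a verified release example.
import numpy as np
import torch
from transformers import UnivNetFeatureExtractor, UnivNetModel

feature_extractor = UnivNetFeatureExtractor.from_pretrained("dg845/univnet-dev")
model = UnivNetModel.from_pretrained("dg845/univnet-dev")

raw_speech = np.random.randn(24000).astype(np.float32)  # 1 s of dummy 24 kHz audio
# pad_end pads the end of the spectrogram; padding happens before noise
# generation so the noise_sequence shape matches (see commits above)
inputs = feature_extractor(raw_speech, sampling_rate=24000, pad_end=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # model(**inputs) accepts the extractor output directly

# batch_decode strips the padded samples from the generated waveforms
waveforms = feature_extractor.batch_decode(**outputs)
```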
-
Patrick von Platen authored
* [Whisper] Add seq gen * [Whisper] Add seq gen * more debug * Fix whisper logit processor * Improve whisper code further * Fix more * more debug * more debug * Improve further * Add tests * Prep for batch size > 1 * Get batch_size>1 working * Correct more * Add extensive tests * more debug * more debug * more debug * add more tests * more debug * Apply suggestions from code review * more debug * add comments to explain the code better * add comments to explain the code better * add comments to explain the code better * Add more examples * add comments to explain the code better * fix more * add comments to explain the code better * add comments to explain the code better * correct * correct * finalize * Apply suggestions from code review * Apply suggestions from code review
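A hedged sketch of the sequential long-form generation this PR adds: audio longer than 30 s is processed without truncation and transcribed window by window inside `generate`. The processor flags follow the Whisper docs; exact names may differ by release.

```python
# Sketch under assumptions: long-form decoding is triggered by passing the
# full, un-truncated features; flag names follow the Whisper documentation.
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en")

long_audio = np.random.randn(16000 * 60).astype(np.float32)  # 60 s dummy audio
inputs = processor(
    long_audio,
    sampling_rate=16000,
    truncation=False,           # keep all frames instead of the usual 30 s window
    padding="longest",
    return_attention_mask=True,
    return_tensors="pt",
)
# generate() now loops over 30 s windows sequentially, carrying timestamps over
predicted_ids = model.generate(**inputs, return_timestamps=True)
text = processor.batch_decode(predicted_ids, skip_special_tokens=True)
```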
-
Quentin Gallouédec authored
* fix max_steps doc * Update src/transformers/training_args.py [ci skip] Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * propagate suggested change --------- Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com>
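For context, the clarified `max_steps` semantics in a short example: a positive `max_steps` overrides `num_train_epochs`, and a finite dataset is re-iterated until the step budget is spent.

```python
from transformers import TrainingArguments

# max_steps wins over num_train_epochs; with a small dataset the Trainer
# keeps cycling through it until 1000 optimizer steps have run.
args = TrainingArguments(output_dir="out", max_steps=1000, num_train_epochs=3)
```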
-
Wangyi Jiang authored
-
Arthur authored
* update pillow pins * Apply suggestions from code review * more freedom in pins
-
- 21 Nov, 2023 13 commits
-
-
Ziyu Chen authored
* Fix `resize_token_embeddings` handling of `requires_grad` The method `resize_token_embeddings` should keep `requires_grad` unchanged for all parameters in the embeddings. Previously, `resize_token_embeddings` always set `requires_grad` to `True`. After the fix, `resize_token_embeddings` copies the `requires_grad` attribute from the old embeddings.
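A minimal sketch of the fixed behavior, using gpt2 purely as an example checkpoint:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
model.get_input_embeddings().weight.requires_grad = False  # freeze embeddings

model.resize_token_embeddings(model.config.vocab_size + 8)
# Before the fix this printed True (the freeze was silently undone);
# after the fix the attribute is copied over from the old embeddings.
print(model.get_input_embeddings().weight.requires_grad)  # False
```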
-
Lucain authored
* Harmonize HF environment variables + other cleaning * backward compat * switch from HUGGINGFACE_HUB_CACHE to HF_HUB_CACHE * revert
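An illustration of the harmonization, assuming the variable names from the commit message: `HF_HUB_CACHE` is the canonical spelling going forward, with the legacy `HUGGINGFACE_HUB_CACHE` kept for backward compatibility.

```python
import os

# Set before importing transformers/huggingface_hub so the cache path is picked up.
os.environ["HF_HUB_CACHE"] = "/tmp/hf-hub-cache"     # new canonical variable
# os.environ["HUGGINGFACE_HUB_CACHE"] = "..."        # legacy name, still honored

from transformers import AutoConfig

config = AutoConfig.from_pretrained("gpt2")  # downloads land in /tmp/hf-hub-cache
```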
-
fxmarty authored
explicit use_cache=True
-
jiqing-feng authored
* tvp model for video grounding: add tokenizer auto, fix param in TVPProcessor, add docs, clear comments, and enable different torch dtype; add image processor test and model test and fix code style * fix conflict * fix model doc * fix image processing tests * fix tvp tests * remove torch in processor * fix grammar error * add more details on tvp.md * fix model arch for loss, grammar, and processor * add docstring and do not regard TvpTransformer, TvpVisionModel as individual models * use pad_image * update copyright * control first downsample stride * reduce first only works for ResNetBottleNeckLayer * fix param name * fix style * add testing * fix style * rm init_weight * fix style * add post init * fix comments * do not test TvpTransformer * fix warning * fix style * fix example * fix config map * add link in config * fix comments * fix style * rm useless param * change attention * change test * add notes * fix comments * fix tvp * import checkpointing * fix gradient checkpointing * Use a more accurate example in readme * update * fix copy * fix style * update readme * delete print * remove tvp test_forward_signature * remove TvpTransformer * fix test init model * merge main and make style * fix tests and others * fix image processor * fix style and model_input_names * fix tests
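A hedged usage sketch for the new TVP video-grounding model, assuming the `Intel/tvp-base` checkpoint and faking the video with random frames; real usage decodes frames from a video file, and the processor keywords should be checked against the model doc.

```python
# Sketch under assumptions: checkpoint name and processor call shape follow
# the TVP model doc; the "video" here is random frames, not a real clip.
import numpy as np
import torch
from transformers import AutoProcessor, TvpForVideoGrounding

processor = AutoProcessor.from_pretrained("Intel/tvp-base")
model = TvpForVideoGrounding.from_pretrained("Intel/tvp-base")

frames = list(np.random.randint(0, 256, (8, 448, 448, 3), dtype=np.uint8))  # 8 fake frames
inputs = processor(text=["a person opens the door"], videos=frames, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
# the logits score the (start, end) time span grounding the sentence in the video
print(outputs.logits)
```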
-
Hz, Ji authored
* remove deprecated method `init_git_repo` * make style
-
amyeroberts authored
* Enable tracing with DINOv2 model * ABC * Add note to model doc
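What the change enables, sketched with the `transformers` fx helper (the checkpoint name is just an example):

```python
from transformers import AutoModel
from transformers.utils.fx import symbolic_trace

model = AutoModel.from_pretrained("facebook/dinov2-base")
# DINOv2 can now be traced like the other fx-supported architectures
traced = symbolic_trace(model, input_names=["pixel_values"])
```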
-
fxmarty authored
* fix various bugs with flash attention * bump * fix test * fix mistral * use skipTest instead of return, which may be misleading * fix on review
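For context, how Flash Attention 2 was opted into at the time of these fixes (flag name as of this release cycle; requires `flash-attn` installed and a fp16/bf16 CUDA setup):

```python
import torch
from transformers import AutoModelForCausalLM

# use_flash_attention_2 was the opt-in flag in this release cycle
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,
    use_flash_attention_2=True,
)
```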
-
fxmarty authored
* add scheduled ci on amdgpu * fix likely typo * more tests, avoid parallelism * precise comment * fix report channel * trigger docker build on this branch * fix * fix * run rocm scheduled ci * fix * fix * fix * fix * fix * fix * fix * fix * fix * fix * fix * fix * fix * fix * fix --------- Co-authored-by:
ydshieh <ydshieh@users.noreply.github.com>
-
Leo Tronchon authored
* fix image_attention gate in idefics modeling * update comment * cleaner gating * fix gate condition * create attention gate once * update comment * update doc of cross-attention forward * improve comment * bring back no_images * pass cross_attention_gate similarly to no_images gate * add information on gate shape * fix no_images placement * make tests for gate * take off no_images logic * update test based on comments * raise value error if cross_attention_gate is None * send cross_attention_gate to device * Revert "send cross_attention_gate to device" This reverts commit 054f84228405bfa2e75fecc502f6a96dc83cdc0b. * send cross_attention_gate to device * fix device in test + nit * fill hidden_states with zeros instead of multiplying with the gate * style * Update src/transformers/models/idefics/modeling_idefics.py Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com> * Update src/transformers/models/idefics/modeling_idefics.py Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com> --------- Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com>
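A toy tensor sketch (not the actual idefics code) of the final design point above: positions whose cross-attention gate is 0 have their hidden states filled with zeros rather than multiplied by the gate.

```python
import torch

hidden_states = torch.randn(2, 4, 8)                     # (batch, seq, hidden)
cross_attention_gate = torch.tensor([[1., 0., 1., 1.],
                                     [0., 1., 1., 0.]])  # 0 = no image to attend to

# fill with zeros where the gate is closed, instead of hidden_states * gate
mask = (cross_attention_gate == 0).unsqueeze(-1)
hidden_states = hidden_states.masked_fill(mask, 0.0)
```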
-
Joao Gante authored
-
NielsRogge authored
* Improve convnext backbone * Fix convnext2
-
Younes Belkada authored
* add support for old GC method * add also disable * up * oops
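The calls in question, for context (standard `PreTrainedModel` API; gpt2 is just an example checkpoint):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
model.gradient_checkpointing_enable()   # works again for models on the old hook
model.gradient_checkpointing_disable()  # disabling is now supported as well
```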
-
Dave Berenbaum authored
* dvclive callback: warn instead of fail when logging non-scalars * tests: log lr as scalar
-
- 20 Nov, 2023 9 commits
-
-
amyeroberts authored
* Fix torch.fx import issue for torch 1.12 * Fix up * Python version-dependent import * Woops - fix * Fix
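The general shape of the fix, sketched with a stand-in symbol (the real patch guards a different torch.fx import): attempt the newer import and degrade gracefully on torch 1.12.

```python
# Hedged sketch: ParameterProxy stands in for whichever symbol the patch
# actually guards; the point is the version-tolerant import pattern.
try:
    from torch.fx.proxy import ParameterProxy  # present only on newer torch (assumption)
except ImportError:
    ParameterProxy = None  # torch 1.12 fallback: the feature is simply disabled
```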
-
Yeonwoo Sung authored
Update Korean tutorial for using LLMs, and refactor the nested conditional statements in hr_argparser.py (#27489)
docs: Update Korean LLM tutorial to use Mistral-7B, not Llama-v1
-
Dmitrii Mukhutdinov authored
* Enable large-v3 downloading and update language list * Fix type annotation * make fixup * Export Whisper feature extractor * Fix error after extractor loading * Do not use pre-computed mel filters * Save the full preprocessor properly * Update docs * Remove comment Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com> * Add alignment heads consistent with each Whisper version * Remove alignment heads calculation * Save fast tokenizer format as well * Fix slow to fast conversion * Fix bos/eos/pad token IDs in the model config * Add decoder_start_token_id to config --------- Co-authored-by:
Arthur <48595927+ArthurZucker@users.noreply.github.com>
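After the conversion, the checkpoint loads like any other Whisper model; a sketch assuming the `openai/whisper-large-v3` repo name:

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")

# large-v3 computes its 128 mel bins on the fly (no pre-computed filters)
print(processor.feature_extractor.feature_size)  # 128
```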
-
Said Taghadouini authored
* timm to pytorch conversion for vit model fix * remove unnecessary print statements * Detect non-supported ViTs in transformers & better handle id2label mapping * detect non-supported hybrid resnet-vit models in conversion script * remove check for overlap between cls token and pos embed
-
Younes Belkada authored
* add fa2 support for from_config * Update test_modeling_common.py
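What the addition allows, sketched (flag name as of this release; the model is built untrained from the config rather than loaded from weights):

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
# from_config now accepts the FA2 opt-in, previously a from_pretrained-only flag
model = AutoModelForCausalLM.from_config(
    config, use_flash_attention_2=True, torch_dtype=torch.bfloat16
)
```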
-
Mathias Nielsen authored
* Renamed variable extension to builder_name * If builder name is jsonl change to json to align with load_dataset * Apply suggestions from code review Co-authored-by:
Quentin Lhoest <42851186+lhoestq@users.noreply.github.com> --------- Co-authored-by:
Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
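The mapping the rename encodes, for context: `datasets` has no `jsonl` builder, so `.jsonl` files go through the `json` builder.

```python
from datasets import load_dataset

# "json" is the builder name even for .jsonl files
dataset = load_dataset("json", data_files={"train": "train.jsonl"})
```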
-
Peter Pan authored
Signed-off-by:
Peter Pan <Peter.Pan@daocloud.io>
-
Xabier de Zuazo authored
Add `convert_hf_to_openai.py` script to Whisper documentation resources.
-
Joel Tang authored
* Load idx2sym from pretrained vocab file in Transformer XL When loading the vocab file from a pretrained tokenizer for Transformer XL, although the pickled vocabulary file contains an idx2sym key, it isn't loaded, because it is discarded as the empty list already exists as an attribute. The solution is to explicitly take it into account, just like for sym2idx. * ran make style
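A toy sketch (not the actual Transformer XL code) of the bug and fix described above: the pickled dict's `idx2sym` was skipped because the attribute already existed as an empty list.

```python
# Toy illustration of the described bug, not the library code.
vocab_dict = {"sym2idx": {"<eos>": 0}, "idx2sym": ["<eos>"]}

class Vocab:
    def __init__(self):
        self.idx2sym = []  # pre-existing empty attribute caused the skip
        self.sym2idx = {}

vocab = Vocab()
for key, value in vocab_dict.items():
    if key not in vocab.__dict__:  # old check: idx2sym is already present, so skipped
        setattr(vocab, key, value)

# the fix copies it explicitly, mirroring the sym2idx handling:
vocab.sym2idx = vocab_dict["sym2idx"]
vocab.idx2sym = vocab_dict["idx2sym"]
```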
-
- 19 Nov, 2023 1 commit
-
-
Rafael Padilla authored
Co-authored-by:
Rafael Padilla <rafael.padilla@huggingface.co>
-
- 18 Nov, 2023 1 commit
-
-
Omar Sanseviero authored
-
- 17 Nov, 2023 7 commits
-
-
jiaqiw09 authored
* translate deepspeed.md * update
-
V.Prasanna kumar authored
fixed the broken links belonging to the datasets library in the transformers docs
-
V.Prasanna kumar authored
-
Joao Gante authored
-
Joao Gante authored
-
Yih-Dar authored
fix Co-authored-by:
ydshieh <ydshieh@users.noreply.github.com>
-
Yih-Dar authored
* fix * fix --------- Co-authored-by:
ydshieh <ydshieh@users.noreply.github.com>
-
- 16 Nov, 2023 2 commits
-
-
jiaqiw09 authored
* translate * update * update
-
Nathaniel Egwu authored
* Updated albert.md doc for ALBERT model * Update docs/source/en/model_doc/albert.md Fixed Resources heading Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update the ALBERT model doc resources Fixed the resource example for fine-tuning ALBERT on sentence-pair classification. Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/en/model_doc/albert.md Removed resource duplicate Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Updated albert.md doc with reviewed changes * Updated albert.md doc for ALBERT * Update docs/source/en/model_doc/albert.md Removed duplicates from updated docs/source/en/model_doc/albert.md Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com> * Update docs/source/en/model_doc/albert.md --------- Co-authored-by:
Steven Liu <59462357+stevhliu@users.noreply.github.com>
-