- 16 Jan, 2024 10 commits
-
-
Joao Gante authored
-
inisis authored
* modify check_if_model_is_supported to return bool
* add is_model_supported and have check_if_model_is_supported use that
* Update src/transformers/utils/fx.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
-
fxmarty authored
* clearer error for sdpa
* better message
-
Arthur authored
[`SpeechT5Tokenization`] Add copied from and fix the `convert_tokens_to_string` to match the fast decoding scheme (#28522)
* Add copied from and fix the `convert_tokens_to_string` to match the fast decoding scheme
* fixup
* add a small test
* style test file
* nits
-
Arthur authored
* cleanup
* add a test
* update the test
* style
* revert part that allows to pickle the tokenizer
-
Arthur authored
* fix adding special tokens when the token is already there
* add a test
* add a test
* nit
* fix the test: make sure the order is preserved
* Update tests/test_tokenization_common.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
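A minimal sketch of the behaviour this fixes (checkpoint and token choice are illustrative): adding a token that already exists should reuse the existing id rather than create a new one.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
# "<|endoftext|>" is already in the vocab: it should be marked as a
# special token without being re-added, so no new ids are created
num_added = tok.add_special_tokens({"additional_special_tokens": ["<|endoftext|>"]})
print(num_added)  # expected: 0
```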
-
Nima Yaqmuri authored
* Fix bug in SpeechT5 speech decoder prenet's forward method
  - Removed a redundant `repeat` operation on speaker_embeddings in the forward method. This line was erroneously duplicating the embeddings, leading to incorrect input sizes for concatenation and performance issues.
  - Maintained the original functionality of the method, ensuring the integrity of the speech decoder prenet's forward pass remains intact.
  - This change resolves a critical bug affecting the model's handling of speaker embeddings.
* Refactor SpeechT5 text-to-speech integration tests
  - Updated SpeechT5ForTextToSpeechIntegrationTests to accommodate variability in sequence lengths due to dropout in the speech decoder prenet, so the tests are robust against random variation in generated speech.
  - Removed hardcoded dimensions in test assertions, replacing them with dynamic checks based on model configuration and seed settings so the tests remain valid across different runs and configurations.
  - Added new test cases to validate the shapes of generated spectrograms and waveforms, using seed settings to ensure consistent and predictable behaviour in speech generation and vocoder processing.
  - Fixed existing test cases where incorrect assumptions about output shapes led to potential errors.
* Enhance handling of speaker embeddings in SpeechT5
  - Refined the generate and generate_speech functions to robustly handle two scenarios for speaker embeddings: matching the batch size (one embedding per sample) and one-to-many (a single embedding shared by all samples in the batch).
  - A single embedding is now repeated when provided for multiple samples, and a ValueError is raised for any mismatched dimensions.
  - Added corresponding test cases to validate both scenarios.
* Improve test robustness with randomized speaker embeddings
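A standalone sketch of the batching rule described above (the helper name is hypothetical; the real logic lives in SpeechT5's generation code):

```python
import torch

def match_speaker_embeddings(speaker_embeddings: torch.Tensor, batch_size: int) -> torch.Tensor:
    # One-to-many: a single embedding is repeated for every sample in the batch
    if speaker_embeddings.size(0) == 1:
        return speaker_embeddings.repeat(batch_size, 1)
    # One per sample: the leading dimension must match the batch size exactly
    if speaker_embeddings.size(0) != batch_size:
        raise ValueError(
            f"Expected 1 or {batch_size} speaker embeddings, got {speaker_embeddings.size(0)}."
        )
    return speaker_embeddings
```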
-
fxmarty authored
* fix mismatching behavior in from_pretrained with/without accelerate
* meaningful refactor
* remove added space
* add test
* fix model on the hub
* comment
* use tiny model
* style
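The two code paths in question, as a sketch (`gpt2` is a stand-in checkpoint): `low_cpu_mem_usage=True` routes loading through accelerate, and both paths should now produce the same result.

```python
from transformers import AutoModelForCausalLM

# Plain loading path (no accelerate)
model_eager = AutoModelForCausalLM.from_pretrained("gpt2", low_cpu_mem_usage=False)

# Accelerate-backed path; previously this could diverge from the plain path
model_lazy = AutoModelForCausalLM.from_pretrained("gpt2", low_cpu_mem_usage=True)
```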
-
Hamza FILALI authored
* Improve the Training Performance and Scaling documentation by adding PEFT techniques to the suggestions for reducing memory requirements during training
* Update docs/source/en/perf_train_gpu_one.md

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
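A short sketch of the kind of PEFT recipe the updated docs point to (LoRA on `gpt2`, chosen here purely as an illustration):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
# Train low-rank adapters instead of the full model to cut optimizer/gradient memory
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of weights are trainable
```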
-
regisss authored
* Remove `task` arg in `load_dataset` in the image-classification example
* Manage the case where "train" is not in the dataset
* Add new args to manage image and label column names
* Similar to the audio-classification example
* Fix README
* Update tests
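Roughly what the example now does instead of the removed `task` argument (dataset and column names here are illustrative, matching `cifar10`):

```python
from datasets import load_dataset

# No task= argument; the example handles column names explicitly via new CLI args
ds = load_dataset("cifar10")
image_column_name, label_column_name = "img", "label"  # cifar10's column names
```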
-
- 15 Jan, 2024 13 commits
-
-
amyeroberts authored
Add back in wrapper for safe importing
-
Timothy Cronin authored
* improve dev setup comments and hints
* fix tests for new dev setup hints
-
Boris Dayma authored
-
Joao Gante authored
-
Matt authored
* Add a use_safetensors arg to TFPreTrainedModel.from_pretrained()
* One more catch!
* One more one more catch
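Usage sketch for the new flag (checkpoint name is illustrative):

```python
from transformers import TFAutoModel

# Explicitly opt in to safetensors weights when loading a TF model;
# pass use_safetensors=False to force the legacy .h5 checkpoint instead
model = TFAutoModel.from_pretrained("bert-base-uncased", use_safetensors=True)
```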
-
Rishit Ratna authored
-
Marc Sun authored
* fix test
* reduce length
* smaller model
-
thedamnedrhino authored
* added args to the pipeline
* added test
* more sensical tests
* fixup
* docs
* typo
* docs
* made changes to support named args
* fixed test
* docs update
* styles
* docs
* docs
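Assuming this is the fill-mask pipeline's tokenizer-kwargs support, a usage sketch might look like this (model name illustrative):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="distilroberta-base")
# Named tokenizer arguments are forwarded to the tokenization step
out = fill("Paris is the <mask> of France.", tokenizer_kwargs={"truncation": True})
```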
-
yuanwu2017 authored
* Add the XPU check for pipeline mode. When setting an XPU device for a pipeline, use is_torch_xpu_available to load IPEX and determine whether the device is available.
* Don't move the model to a device when hf_device_map isn't None. Also, the device string may include the device index, so use 'in' instead of an equality check.
* Raise an error when XPU is not available.
* Update src/transformers/pipelines/base.py
* Modify the error message.
* Change the message format.

Signed-off-by: yuanwu <yuan.wu@intel.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
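A sketch of the guarded device selection this enables:

```python
from transformers import pipeline
from transformers.utils import is_torch_xpu_available

# Only request an XPU device when IPEX/XPU support is actually present;
# otherwise pipeline construction raises a clear error
device = "xpu:0" if is_torch_xpu_available() else "cpu"
pipe = pipeline("text-generation", model="gpt2", device=device)
```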
-
Younes Belkada authored
* v1 tags
* remove unneeded conversion
* v2
* rm unneeded warning
* add more utility methods
* Update src/transformers/utils/hub.py
* more enhancements
* oops
* merge tags
* clean up
* revert unneeded change
* add extensive docs
* more docs
* more kwargs
* add test
* oops
* fix test
* Update src/transformers/modeling_utils.py
* Update src/transformers/trainer.py
* add more conditions
* more logic

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: Omar Sanseviero <osanseviero@gmail.com>
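A sketch of the tagging utilities described above (repo name is illustrative):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
# Tags are stored on the model and written into the auto-generated model card
model.add_model_tags(["text-generation-demo"])
# model.push_to_hub("my-username/my-tagged-model")  # tags travel with the upload
```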
-
Yih-Dar authored
* fix
* fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Tom Aarsen authored
Update warning, a word was missing
-
Francisco Kurucz authored
Fix URL to AI Sweden Models reference and model loading
-
- 13 Jan, 2024 2 commits
-
-
Joao Gante authored
* fix candidate device
* this line shouldn't have been in
-
Apoorv Saxena authored
* MVP
* fix ci
* more ci
* remove redundant kwarg
* added and wired up PromptLookupCandidateGenerator
* rebased with main, working
* removed print
* style fixes
* fix test
* fixed tests
* added test for prompt lookup decoding
* fixed circleci
* fixed test issue
* Update src/transformers/generation/candidate_generator.py

Co-authored-by: Joao Gante <joao@huggingface.co>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
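Prompt lookup decoding reuses n-grams from the prompt as draft candidates instead of a separate assistant model; a usage sketch (checkpoint illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("The quick brown fox jumps over the lazy dog. The quick brown", return_tensors="pt")
# Candidate continuations are looked up in the prompt itself
out = model.generate(**inputs, prompt_lookup_num_tokens=10, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```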
-
- 12 Jan, 2024 12 commits
-
-
Siddartha Naidu authored
-
Matt authored
* Fix TF Regnet docstring
* Make a change to the PyTorch Regnet too to make sure the CI is checking it
* Add skips for TFRegnet
* Update error message for docstring checker
-
Joao Gante authored
-
Joao Gante authored
-
Joao Gante authored
-
Joao Gante authored
-
sungho-ham authored
Fix xlnet torch.ones usage

Co-authored-by: sungho-ham <sungho.ham@linecorp.com>
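The general pattern behind this kind of fix, sketched generically (not the exact XLNet code):

```python
import torch

hidden = torch.randn(2, 5, 8, dtype=torch.float16)
# Build the mask with the target dtype/device up front, instead of creating a
# default float32 tensor and converting it afterwards
mask = torch.ones(hidden.shape[:2], dtype=hidden.dtype, device=hidden.device)
```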
-
dependabot[bot] authored
Bump jinja2 in /examples/research_projects/decision_transformer

Bumps [jinja2](https://github.com/pallets/jinja) from 2.11.3 to 3.1.3.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/2.11.3...3.1.3)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
-
Younes Belkada authored
* add mixtral fused modules
* add changes from modeling utils
* add test
* fix test + rope theta issue
* Update src/transformers/modeling_utils.py
* add tests

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
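A sketch of enabling fused modules on an AWQ-quantized Mixtral checkpoint (the checkpoint name is an example, not taken from this PR):

```python
from transformers import AutoModelForCausalLM, AwqConfig

# do_fuse swaps attention/MLP blocks for fused kernels;
# fuse_max_seq_len bounds the fused cache length
quant_config = AwqConfig(do_fuse=True, fuse_max_seq_len=512)
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mixtral-8x7B-Instruct-v0.1-AWQ",
    quantization_config=quant_config,
)
```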
-
amyeroberts authored
* Update metadata loading for OneFormer
* Enable loading from a model repo
* Update docstrings
* Fix tests
* Update tests
* Clarify repo_path behaviour
-
amyeroberts authored
* Mark two logger tests as flaky
* Add description to is_flaky
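The decorator with its new description argument, as a sketch (the test body is hypothetical):

```python
from transformers.testing_utils import is_flaky

@is_flaky(max_attempts=5, description="log capture can race across handlers")
def test_logging_order():
    ...  # the test is retried up to max_attempts times before failing
```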
-
Younes Belkada authored
* add llava + fused modules
* Update src/transformers/models/llava/modeling_llava.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
-
- 11 Jan, 2024 3 commits
-
-
Hankyeol Kyung authored
* [docs] Fix broken link
* [docs] Use shorter domain

Signed-off-by: Hankyeol Kyung <kghnkl0103@gmail.com>
-
Matt authored
-
jiqing-feng authored
* update version for cpu training
* update docs for cpu training
* fix readme
* fix readme
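A minimal CPU-training configuration sketch; the flags are assumptions based on the current TrainingArguments API, not this commit's diff:

```python
from transformers import TrainingArguments

# use_cpu forces CPU even when an accelerator is visible;
# use_ipex opts in to Intel Extension for PyTorch optimizations
args = TrainingArguments(output_dir="out", use_cpu=True, use_ipex=True)
```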
-