- 16 Nov, 2023 3 commits
-
-
Arthur authored
add flash attn markers
-
Dean Wyatte authored
support onnx for causal lm sequence classification
-
Hz, Ji authored
* translate model.md to Chinese
* apply review suggestion
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
- 15 Nov, 2023 15 commits
-
-
Marc Sun authored
* fix
* style
* add test
-
JiangZhongqing authored
* Fix bug in handling varying encoder and decoder layers

  This commit resolves an issue where the script failed to convert T5x models to PyTorch when the number of decoder layers differed from the number of encoder layers, by passing an additional 'num_decoder_layers' parameter to the relevant function.
* Fix bug in handling varying encoder and decoder layers
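A hypothetical sketch of the shape of this fix (function and key names are illustrative, not the upstream conversion script): the decoder depth is threaded through explicitly instead of being assumed equal to the encoder depth.

```python
from typing import Optional

# Hypothetical sketch (names illustrative, not the upstream script): pass the
# decoder depth explicitly instead of reusing the encoder depth for both stacks.
def t5x_layer_keys(num_layers: int, num_decoder_layers: Optional[int] = None) -> list:
    if num_decoder_layers is None:
        num_decoder_layers = num_layers  # old behaviour: assume symmetric stacks
    keys = [f"encoder/layers_{i}" for i in range(num_layers)]
    keys += [f"decoder/layers_{i}" for i in range(num_decoder_layers)]
    return keys

# A 12-encoder / 6-decoder checkpoint no longer trips the converter:
assert len(t5x_layer_keys(12, 6)) == 18
```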
-
Matt authored
* Remove the torch main_process_first context manager from TF examples
* Correctly set num_beams=1 in our examples, and add a guard in GenerationConfig.validate()
* Update src/transformers/generation/configuration_utils.py
  Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
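A minimal sketch of the kind of guard described (illustrative, not the exact check merged into GenerationConfig.validate()): options that only matter for beam search get flagged when `num_beams` is left at 1.

```python
import warnings

# Minimal sketch of such a guard (not the exact upstream check).
class SketchGenerationConfig:
    def __init__(self, num_beams: int = 1, early_stopping: bool = False):
        self.num_beams = num_beams
        self.early_stopping = early_stopping

    def validate(self) -> None:
        # Beam-search-only options are meaningless with a single beam.
        if self.num_beams == 1 and self.early_stopping:
            warnings.warn(
                "`early_stopping` has no effect when `num_beams=1`; "
                "set `num_beams>1` to enable beam search."
            )

SketchGenerationConfig(num_beams=1, early_stopping=True).validate()  # emits the warning
```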
-
Adam Louly authored
fix max pos issue

Co-authored-by: Adam Louly <adamlouly@microsoft.com@orttrainingdev9.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
-
Yuki-Imajuku authored
* update _toctree.yml & add albert-autoformer
* Fixed typo in docs/source/ja/model_doc/audio-spectrogram-transformer.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Delete duplicated sentence in docs/source/ja/model_doc/autoformer.md
  Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Reflect reviews
* delete untranslated models from toctree
* delete all comments
* add abstract translation

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
Zach Mueller authored
Fix test
-
Arthur authored
* import hf error
* nits
* fixup
* catch the error at the correct place
* style
* improve message a tiny bit
* Update src/transformers/utils/hub.py
  Co-authored-by: Lucain <lucainp@gmail.com>
* add a test

---------

Co-authored-by: Lucain <lucainp@gmail.com>
-
Xin Qiu authored
* Fix beam score calculation issue for decoder-only models
* Update beam search test and fix code quality issue
* Fix beam_sample, group_beam_search and constrained_beam_search
* Split test for pytorch and TF, add documentation

---------

Co-authored-by: Xin Qiu <xin.qiu@sentient.ai>
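Per the commit title, decoder-only models include the prompt in the sequence, so length-normalizing a beam's score by the full sequence length skews the ranking. A hedged sketch of the corrected normalization (an assumption based on the message, not the exact upstream code):

```python
# Hedged sketch (assumption from the commit message, not the exact code):
# normalize by the number of *generated* tokens only, since for decoder-only
# models the prompt tokens are part of the sequence.
def beam_score(sum_logprobs: float, seq_len: int, prompt_len: int, length_penalty: float = 1.0) -> float:
    return sum_logprobs / ((seq_len - prompt_len) ** length_penalty)

# A 20-token sequence with a 15-token prompt is normalized over 5 tokens:
print(beam_score(-6.0, seq_len=20, prompt_len=15))  # -1.2
```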
-
Arthur authored
* update `tokenizers` version pin
* force tokenizers>=0.15
* use 0.14
  Co-authored-by: Lysandre <lysandre@huggingface.co>

---------

Co-authored-by: Lysandre <lysandre@huggingface.co>
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Arthur authored
* skip 4 tests
* nits
* style
* wow it's not my day
* skip new failing tests
* style
* skip for NLLB MoE as well
* skip `test_assisted_decoding_sample` for everyone
-
Phyzer authored
Fix misspelling: "thouroughly" corrected to "thoroughly"
-
NielsRogge authored
* Improve conversion scripts
* Fix paths
* Fix style
-
NielsRogge authored
* Add tests
* Add integration test
* More improvements
* Fix tests
* Fix style
* Skip gradient checkpointing tests
* Update script
* Remove scripts
* Remove Fuyu from auto mapping
* Fix integration test
* More improvements
* Remove file
* Add Fuyu to slow documentation tests
* Address comments
* Clarify comment
-
Arthur authored
* skip 4 tests
* nits
* style
* wow it's not my day
* skip new failing tests
* style
* skip for NLLB MoE as well
-
- 14 Nov, 2023 16 commits
-
-
Zach Mueller authored
* Add tokens seen
* Address comments, add to TrainingArgs
* Update log
* Apply suggestions from code review
  Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Use self.args
* Fix docstring
  Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
-
amyeroberts authored
-
Zach Mueller authored
* Have seq2seq just use gather
* Change
* Reset after
* Make slow
* Apply suggestions from code review
  Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Clean
* Simplify and just use gather
* Update tests/trainer/test_trainer_seq2seq.py
  Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* gather always for seq2seq

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
-
Costa Huang authored
* Minor type annotation fix
* Trigger Build
-
Joao Gante authored
-
Matt authored
* Update and reorder docs for chat templates
* Fix Mistral docstring
* Add section link and small fixes
* Remove unneeded line in Mistral example
* Add comment on saving memory
* Fix generation prompts link
* Fix code block languages
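For context, a short usage sketch of the feature these docs cover (the model name is only an example):

```python
from transformers import AutoTokenizer

# Usage sketch of the documented API; the checkpoint name is illustrative.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "Doing great. How can I help?"},
    {"role": "user", "content": "Explain chat templates in one line."},
]
# add_generation_prompt appends the tokens that cue the model to answer next.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```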
-
Joao Gante authored
fix exponential doctest
-
jiaqiw09 authored
* translate
* translate
* update
-
amyeroberts authored
The model was merged before final review and approval. This reverts commit 2ac5b932.
-
Sanchit Gandhi authored
-
Max Bain authored
Remove wasteful np.stack: calling np.stack on a large 1-D tensor caused ~0.5s of processing time on short audio (<10s), versus 0.02s for medium-length audio.
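A generic sketch of the cost pattern (not the patched code): np.stack always allocates and copies, whereas np.asarray returns an existing array untouched.

```python
import numpy as np

# Generic sketch (not the patched code): np.stack always copies,
# while np.asarray is a no-op on an existing ndarray.
waveform = np.zeros(480_000, dtype=np.float32)  # ~30 s of 16 kHz audio

copied = np.stack([waveform])[0]  # full allocation + copy, then unwrap
same = np.asarray(waveform)       # no copy at all
assert same is waveform and copied is not waveform
```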
-
Sihan Chen authored
* fix speecht5 wrong attention mask when padding
* enable batch generation and add parameter attention_mask
* fix doc
* fix format
* batch postnet inputs, return batched lengths, and stay consistent with the old api
* fix format
* fix format
* fix the format
* fix doc-builder error
* add test, cross attention and docstring
* optimize code based on reviews
* docbuild
* refine
* not skip slow test
* add consistent dropout for batching
* loosen atol
* add another test regarding the consistency of the vocoder
* fix format
* refactor
* add return_concrete_lengths as parameter for consistency w/wo batching
* fix review issues
* fix cross_attention issue
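A small sketch of the batched-input side (standard processor usage with the stock TTS checkpoint; the generation call itself is elided since this PR is what changed its signature):

```python
from transformers import SpeechT5Processor

# Padding a batch of prompts produces the attention mask that, per this PR,
# generation can now consume so padded positions are not attended to.
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
inputs = processor(
    text=["Hello world.", "A much longer sentence that needs more tokens."],
    padding=True,
    return_tensors="pt",
)
print(inputs["input_ids"].shape, inputs["attention_mask"].shape)  # same padded length
```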
-
Yoach Lacombe authored
fix seamless m4t weight tying
-
Arthur authored
* skip 4 tests
* nits
* style
* wow it's not my day
-
Younes Belkada authored
* `modules_to_save` support for peft integration
* Update docs/source/en/peft.md
  Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* slightly elaborate test

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
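A minimal usage sketch from the PEFT side (module names are illustrative and model-dependent):

```python
from peft import LoraConfig

# Minimal sketch: besides the LoRA adapter weights, `modules_to_save` marks
# extra modules (here a hypothetical classification head named "score") to be
# fully trained and serialized alongside the adapter.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # illustrative attention projections
    modules_to_save=["score"],            # hypothetical head name; model-dependent
)
```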
-
Marc Sun authored
* put back import
* switch to logger.warning instead
-
- 13 Nov, 2023 6 commits
-
-
Gift Sinthong authored
* Initial commit of PatchTST model classes
  Co-authored-by: Phanwadee Sinthong <phsinthong@gmail.com>
  Co-authored-by: Nam Nguyen <namctin@gmail.com>
  Co-authored-by: Vijay Ekambaram <vijaykr.e@gmail.com>
  Co-authored-by: Ngoc Diep Do <55230119+diepi@users.noreply.github.com>
  Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
* Add PatchTSTForPretraining
* update to include classification
  Co-authored-by: Phanwadee Sinthong <phsinthong@gmail.com>
  Co-authored-by: Nam Nguyen <namctin@gmail.com>
  Co-authored-by: Vijay Ekambaram <vijaykr.e@gmail.com>
  Co-authored-by: Ngoc Diep Do <55230119+diepi@users.noreply.github.com>
  Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
* clean up auto files
* Add PatchTSTForPrediction
* Fix relative import
* Replace original PatchTSTEncoder with ChannelAttentionPatchTSTEncoder
* temporary adding absolute path + add PatchTSTForForecasting class
* Update base PatchTSTModel + Unittest
* Update ForecastHead to use the config class
* edit cv_random_masking, add mask to model output
* Update configuration_patchtst.py
* add masked_loss to the pretraining
* add PatchEmbeddings
* Update configuration_patchtst.py
* edit loss which considers mask in the pretraining
* remove patch_last option
* Add commits from internal repo
* Update ForecastHead
* Add model weight initialization + unittest
* Update PatchTST unittest to use local import
* PatchTST integration tests for pretraining and prediction
* Added PatchTSTForRegression + update unittest to include label generation
* Revert unrelated model test file
* Combine similar output classes
* update PredictionHead
* Update configuration_patchtst.py
* Add Revin
* small edit to PatchTSTModelOutputWithNoAttention
* Update modeling_patchtst.py
* Updating integration test for forecasting
* Fix unittest after class structure changed
* docstring updates
* change input_size to num_input_channels
* more formatting
* Remove some unused params
* Add a comment for pretrained models
* add channel_attention option and remove unused positional encoders
* Update PatchTST models to use HF's MultiHeadAttention module
* Update paper + github urls
* Fix hidden_state return value
* Update integration test to use PatchTSTForForecasting
* Adding dataclass decorator for model output classes
* Run fixup script
* Rename model repos for integration test
* edit argument explanation
* change individual option to shared_projection
* style
* Rename integration test + import cleanup
* Fix output_hidden_states return value
* removed unused mode
* added std, mean and nops scaler
* add initial distributional loss for prediction
* fix typo in docs
* add generate function
* formatting
* add num_parallel_samples
* Fix a typo
* copy weighted_average function, edit PredictionHead
* edit PredictionHead
* add distribution head to forecasting
* formatting
* Add generate function for forecasting
* Add generate function to prediction task
* formatting
* use argsort
* add past_observed_mask ordering
* fix arguments
* docs
* add back test_model_outputs_equivalence test
* formatting
* cleanup
* formatting
* use ACT2CLS
* formatting
* fix add_start_docstrings decorator
* add distribution head and generate function to regression task; also add PatchTSTForForecastingOutput and PatchTSTForRegressionOutput
* fix typos
* add forecast_masking
* fixed tests
* use set_seed
* fix doc test
* formatting
* Update docs/source/en/model_doc/patchtst.md
  Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* better var names
* rename PatchTSTTranspose
* fix argument names and docstring
* remove compute_num_patches and unused class
* remove assert
* renamed to PatchTSTMasking
* use num_labels for classification
* use num_labels
* use default num_labels from super class
* move model_type after docstring
* renamed PatchTSTForMaskPretraining
* bs -> batch_size
* more review fixes
* use hidden_state
* rename encoder layer and block class
* remove commented seed_number
* edit docstring
* Add docstring
* formatting
* use past_observed_mask
* doc suggestion
* make fix-copies
* use Args:
* add docstring
* add docstring
* change some variable names and add PatchTST before some class names
* formatting
* fix argument types
* fix tests
* change x variable to patch_input
* format
* formatting
* fix-copies
* Update tests/models/patchtst/test_modeling_patchtst.py
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* move loss to forward
* Update src/transformers/models/patchtst/modeling_patchtst.py
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/transformers/models/patchtst/modeling_patchtst.py
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* formatting
* fix a bug when pre_norm is set to True
* output_hidden_states is set to False as default
* set pre_norm=True as default
* format docstring
* format
* output_hidden_states is None by default
* add missing docs
* better var names
* docstring: remove default to False in output_hidden_states
* change labels name to target_values in regression task
* format
* fix tests
* change to forecast_mask_ratios and random_mask_ratio
* change mask names
* change future_values to target_values param in the prediction class
* remove nn.Sequential and make PatchTSTBatchNorm class
* black
* fix argument name for prediction
* add output_attentions option
* add output_attentions to PatchTSTEncoder
* formatting
* Add attention output option to all classes
* Remove PatchTSTEncoderBlock
* create PatchTSTEmbedding class
* use config in PatchTSTPatchify
* Use config in PatchTSTMasking class
* add channel_attn_weights
* Add PatchTSTScaler class
* add output_attentions arg to test function
* format
* Update doc with image patchtst.md
* fix-copies
* rename Forecast <-> Prediction
* change name of a few parameters to match with PatchTSMixer
* Remove *ForForecasting class to match with other time series models
* make style
* Remove PatchTSTForForecasting in the test
* remove PatchTSTForForecastingOutput class
* change test_forecast_head to test_prediction_head
* style
* fix docs
* fix tests
* change num_labels to num_targets
* Remove PatchTSTTranspose
* remove arguments in PatchTSTMeanScaler
* remove arguments in PatchTSTStdScaler
* add config as an argument to all the scaler classes
* reformat
* Add norm_eps for batchnorm and layernorm
* reformat
* reformat
* edit docstring
* update docstring
* change variable name pooling to pooling_type
* fix output_hidden_states as tuple
* fix bug when calling PatchTSTBatchNorm
* change stride to patch_stride
* create PatchTSTPositionalEncoding class and restructure the PatchTSTEncoder
* formatting
* initialize scalers with configs
* edit output_hidden_states
* style
* fix forecast_mask_patches docstring

---------

Co-authored-by: Gift Sinthong <gift.sinthong@ibm.com>
Co-authored-by: Nam Nguyen <namctin@gmail.com>
Co-authored-by: Vijay Ekambaram <vijaykr.e@gmail.com>
Co-authored-by: Ngoc Diep Do <55230119+diepi@users.noreply.github.com>
Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
Co-authored-by: Wesley M. Gifford <wmgifford@us.ibm.com>
Co-authored-by: nnguyen <nnguyen@us.ibm.com>
Co-authored-by: Ngoc Diep Do <diiepy@gmail.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
adismort14 authored
Update pipelines.md
-
jiaqiw09 authored
* translate perf_torch_compile.md
* translate tf_xla.md
* update
-
Younes Belkada authored
addresses todo for awq tests
-
Matt authored
* Improve pipeline tokenizer loading and hope nothing breaks
* Let's try a hacky solution
* Revert the changes to init
* Add a falcon hack to the automapping
* Add a falcon hack to the automapping
-
Matt authored
* Add version check for Jinja
* Update src/transformers/tokenization_utils_base.py
  Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* make fixup

---------

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
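A hedged sketch of such a version guard (the "3.0.0" floor is an assumption, not necessarily the merged value): fail fast with a clear message instead of erroring deep inside template rendering.

```python
from importlib import metadata

from packaging import version

# Hedged sketch of a Jinja version guard; the minimum shown is an assumption.
def require_jinja(minimum: str = "3.0.0") -> None:
    try:
        installed = metadata.version("jinja2")
    except metadata.PackageNotFoundError:
        raise ImportError("jinja2 is required for chat templates.")
    if version.parse(installed) < version.parse(minimum):
        raise ImportError(f"jinja2>={minimum} is required for chat templates, found {installed}.")
```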
-