1. 07 Dec, 2023 15 commits
    • fix · a168494e
      ydshieh authored
    • Joao Gante
    • Fix TF loading PT safetensors when weights are tied (#27490) · 47500b1d
      Matt authored
      
      * Un-skip tests
      
      * Add aliasing support to tf_to_pt_weight_rename
      
      * Refactor tf-to-pt weight rename for simplicity
      
      * Patch mobilebert
      
      * Let us pray that the transfo-xl one works
      
      * Add XGLM rename
      
* Expand the test to see if we can get more models to break
      
      * Fix MPNet (it was actually an unrelated bug)
      
      * Add speech2text fix
      
      * Update src/transformers/modeling_tf_pytorch_utils.py
      
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update src/transformers/models/mobilebert/modeling_tf_mobilebert.py
      
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Update to always return a tuple from tf_to_pt_weight_rename
      
      * reformat
      
      * Add a couple of missing tuples
      
      * Remove the extra test for tie_word_embeddings since it didn't cause any unexpected failures anyway
      
      * Revert changes to modeling_tf_mpnet.py
      
      * Skip MPNet test and add explanation
      
      * Add weight link for BART
      
      * Add TODO to clean this up a bit
      
      ---------
      
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
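The aliasing idea behind this fix, returning a tuple from `tf_to_pt_weight_rename` so a single TF weight can match several tied PyTorch weights, can be sketched in plain Python. The dictionary contents and the standalone function here are hypothetical (the real hook lives on TF model classes and is consumed in `modeling_tf_pytorch_utils.py`):

```python
def tf_to_pt_weight_rename(tf_name):
    # Hypothetical sketch: a tied TF weight maps to a tuple of candidate
    # PyTorch names (e.g. an embedding matrix shared with the LM head).
    aliases = {
        "model/shared/weight": ("model.shared.weight", "lm_head.weight"),
    }
    # Always return a tuple, even in the common single-name case.
    return aliases.get(tf_name, (tf_name.replace("/", "."),))
```

Returning a tuple uniformly (rather than sometimes a string, sometimes a list) is what lets the loading code treat the tied and untied cases identically.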
    • Yih-Dar · 9f1f11a2
    • Fix device of masks in tests (#27887) · c99f2547
      fxmarty authored
      fix device of mask in tests
    • Hz, Ji
    • update `create_model_card` to properly save peft details when using Trainer with PEFT (#27754) · 5324bf9c
      Sourab Mangrulkar authored
      
      * update `create_model_card` to properly save peft details when using Trainer with PEFT
      
      * nit
      
      * Apply suggestions from code review
      
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
      
      ---------
      
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
    • Allow `# Ignore copy` (#27328) · 52746922
      Yih-Dar authored
      
      * fix
      
      ---------
      
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
    • [`Llava`] Add Llava to transformers (#27662) · 44b5506d
      Younes Belkada authored
      * add model like
      
      * logits match
      
      * minor fixes
      
      * fixes
      
      * up
      
      * up
      
      * add todo
      
      * llava processor
      
      * keep the processor simple
      
      * add conversion script
      
      * fixup
      
      * fix copies
      
      * up
      
      * add to index
      
      * fix config + logits
      
      * fix
      
      * refactor
      
      * more refactor
      
      * more refactor
      
      * fix copies
      
      * add authors
      
      * v1 tests
      
      * add `LlavaProcessor` in init
      
      * remove unneeded import
      
      * up
      
      * up
      
      * docs
      
      * up
      
      * fix CI
      
      * fix CI
      
      * add attention  mask in test
      
      * make fixup
      
      * remove the vision model
      
* that's the dirty way to do it
      
      * nits
      
      * nits
      
      * updates
      
      * add more tests
      
      * add input tests
      
      * fixup
      
      * more styling
      
      * nits
      
* updates and cleanup
      
      * fixup the generation expected results
      
      * fix the testing script
      
      * some cleanup and simplification which does not work yet but almost there!
      
      * make correct dispatch operations
      
      * vectorize works for batch of images and text
      
      * last todos
      
      * nits
      
      * update test and modeling code
      
      * remove useless function for now
      
      * fix few issues
      
      * fix generation
      
      * some nits
      
      * add bakllava
      
      * nits
      
      * remove duplicated code
      
* finish merge
      
      * cleanup
      
      * missed this line
      
      * fill the todos
      
      * add left padding offset
      
* add left and right padding logic
      
      * bool to properly index
      
      * make sure
      
      * more cleanups
      
      * batch is fixed 😉
      
      
      
      * add correct device for tensor creation
      
* fix some dtype mismatch
      
      * ruff
      
      * update conversion script
      
      * Update src/transformers/__init__.py
      
      * fa 2 support + fix conversion script
      
      * more
      
      * correct reshaping
      
      * fix test dict
      
      * fix copies by ignoring
      
      * fix nit
      
      * skip clip vision model
      
      * fixup
      
      * fixup
      
      * LlavaForVisionText2Text -> LlavaForCausalLM
      
      * update
      
      * fix
      
      * raise correct errors
      
      * fix
      
      * docs
      
      * nuke for now
      
      * nits here and there
      
      * fixup
      
      * fix remaining tests
      
      * update LlavaForConditionalGeneration instead of CausalLM
      
      * fixups
      
      * pipeline support
      
* slow and pipeline tests
      
      * supports batch
      
      * nits
      
      * cleanup
      
      * fix first integration tests
      
      * add pad token where needed
      
* correct tests
      
      * fixups
      
* update pipeline tests
      
      * fix quality
      
      * nits
      
      * revert unneeded change
      
      * nit
      
      * use BatchFeature
      
      * from ...feature_extraction_utils import BatchFeature
      
      * nits
      
      * nits
      
      * properly update
      
      * more f*** nits
      
      * fix copies
      
      * comment
      
      * keep slow test slow
      
      * Update src/transformers/models/llava/processing_llava.py
      
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
* add pipeline example
      
* add pixel values in docstring
      
      * update pr doctest
      
      * fix
      
      * fix slow tests
      
      * remove hack
      
      * fixup
      
      * small note
      
      * forward contrib credits from PR25789
      
      * forward contrib credits from original implementation and work
      
      * add arthur
      
      * Update src/transformers/models/llava/processing_llava.py
      
Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * update docstring
      
      * nit
      
      * move to not doctested because of timeout issues
      
      * fixup
      
      * add description
      
      * more
      
      * fix-copies
      
      * fix docs
      
      * add beam search
      
      * add more comments
      
      * add typehints on processor
      
      * add speedup plot
      
      * update slow tests and docs
      
      * push test
      
      * push batched test
      
      * fix batched generation with different number of images
      
      * remove benchmark due to a bug
      
      * fix test
      
      * fix copies
      
      * add gcolab demo
      
      ---------
      
Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: shauray8 <shauray8@users.noreply.github.com>
      Co-authored-by: haotian-liu <haotian-liu@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <hi@lysand.re>
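Much of the padding and batching work in this PR revolves around splicing image patch embeddings into the text sequence wherever the prompt contains the special image token. A toy, single-sequence sketch of that merge (function and argument names are illustrative, not the actual modeling code):

```python
def merge_image_features(input_ids, text_embeds, image_embeds, image_token_id):
    # Wherever the prompt holds the placeholder image token, splice in the
    # full run of patch embeddings for the next image; otherwise keep the
    # text embedding as-is.
    merged, images = [], iter(image_embeds)
    for token, embed in zip(input_ids, text_embeds):
        if token == image_token_id:
            merged.extend(next(images))
        else:
            merged.append(embed)
    return merged
```

The batched version the commits wrestle with additionally has to account for left/right padding offsets and varying numbers of images per sample.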
    • Phuc Van Phan
    • [`FA-2`] Add Flash Attention to `Phi` (#27661) · f84d85ba
      Susnato Dhar authored
      * add FA and modify doc file
      
      * test_flash_attn_2_generate_padding_right test overwritten
      
      * comment
      
      * modify persimmon modeling file
      
      * added speedup graph
      
      * more changes
    • [i18n-fr] Translate autoclass tutorial to French (#27659) · 06f56168
      Nolwenn Bernard authored
      * Translation of autoclass tutorial
      
      * Update totree to keep only tutorial section
      
      * Translate title toctree
      
      * Fix typos
      
      * Update review comments
    • Fix bug of _prepare_4d_attention_mask (#27847) · 4d806dba
      jiqing-feng authored
      * use _prepare_4d_attention_mask
      
      * fix comment
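For context, `_prepare_4d_attention_mask` expands a 2-D padding mask into the 4-D additive form that is added to attention scores. A minimal pure-Python sketch of that expansion (the real helper operates on tensors and uses the dtype's minimum value rather than a fixed constant):

```python
def prepare_4d_attention_mask(mask_2d, min_value=-1e9):
    # Expand a [batch, seq_len] padding mask (1 = attend, 0 = pad) into a
    # [batch, 1, 1, seq_len] additive mask: 0 where attending, a large
    # negative value where padded, ready to be added to attention scores
    # before the softmax.
    return [[[[0.0 if keep else min_value for keep in row]]] for row in mask_2d]
```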
    • Add Llama Flax Implementation (#24587) · 75336c17
      Alex McKinney authored
      * Copies `modeling_flax_gpt_neo.py` to start
      
      * MLP Block. WIP Attention and Block
      
      * Adds Flax implementation of `LlamaMLP`
      Validated with in-file test.
      Some slight numeric differences, but assuming it isn't an issue
      
      * Adds `FlaxLlamaRMSNorm` layer
      `flax.linen` includes `RMSNorm` layer but not necessarily in all
      versions. Hence, we add in-file.
      
      * Adds FlaxLlamaAttention
      Copied from GPT-J as it has efficient caching implementation as well as
      rotary embeddings.
      Notice numerically different, but not by a huge amount. Needs
      investigating
      
      * Adds `FlaxLlamaDecoderLayer`
      numerically inaccurate, debugging..
      
      * debugging rotary mismatch
      gptj uses interleaved whilst llama uses contiguous
      i think they match now but still final result is wrong.
      maybe drop back to just debugging attention layer?
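The interleaved-versus-contiguous distinction above is easy to illustrate: GPT-J rotates adjacent interleaved pairs, while Llama rotates two contiguous halves of the head dimension. A minimal list-based sketch of the two conventions:

```python
def rotate_half_contiguous(x):
    # Llama-style: split into two contiguous halves [a, b] -> [-b, a].
    half = len(x) // 2
    return [-v for v in x[half:]] + list(x[:half])

def rotate_half_interleaved(x):
    # GPT-J-style: rotate adjacent pairs (x0, x1) -> (-x1, x0).
    out = []
    for i in range(0, len(x), 2):
        out += [-x[i + 1], x[i]]
    return out
```

Mixing the two conventions produces exactly the kind of "close but wrong" numerics described in these commits.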
      
      * fixes bug with decoder layer
      still somewhat numerically inaccurate, but close enough for now
      
      * adds markers for what to implement next
      the structure here diverges a lot from the PT version.
      not a big fan of it, but just get something working for now
      
* implements `FlaxLlamaBlockCollection`
      tolerance must be higher than expected, kinda disconcerting
      
      * Adds `FlaxLlamaModule`
      equivalent PyTorch model is `LlamaModel`
      yay! a language model🤗
      
      * adds `FlaxLlamaForCausalLMModule`
      equivalent to `LlamaForCausalLM`
      still missing returning dict or tuple, will add later
      
      * start porting pretrained wrappers
      realised it probably needs return dict as a prereq
      
      * cleanup, quality, style
      
      * readds `return_dict` and model output named tuples
      
      * (tentatively) pretrained wrappers work 🔥
      
      * fixes numerical mismatch in `FlaxLlamaRMSNorm`
      seems `jax.lax.rsqrt` does not match `torch.sqrt`.
      manually computing `1 / jax.numpy.sqrt` results in matching values.
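The fix amounts to computing the inverse root-mean-square as an explicit `1 / sqrt(...)` rather than a fast reciprocal-square-root primitive. A pure-Python sketch of the RMSNorm computation, assuming the usual Llama formulation:

```python
import math

def rms_norm(values, weight, eps=1e-6):
    # Llama-style RMSNorm: scale by 1 / sqrt(mean(x^2) + eps), then by a
    # learned per-channel weight. The division is done as an explicit
    # 1/sqrt, which is what restored parity with the PyTorch reference
    # (fast rsqrt primitives trade a few ulps of accuracy for speed).
    mean_sq = sum(v * v for v in values) / len(values)
    inv_rms = 1.0 / math.sqrt(mean_sq + eps)
    return [w * v * inv_rms for w, v in zip(weight, values)]
```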
      
      * [WIP] debugging numerics
      
      * numerical match
      I think issue was accidental change of backend. forcing CPU fixes test.
      We expect some mismatch on GPU.
      
      * adds in model and integration tests for Flax Llama
      summary of failing:
      - mul invalid combination of dimensions
      - one numerical mismatch
      - bf16 conversion (maybe my local backend issue)
      - params are not FrozenDict
      
      * adds missing TYPE_CHECKING import and `make fixup`
      
      * adds back missing docstrings
      needs review on quality of docstrings, not sure what is required.
      Furthermore, need to check if `CHECKPOINT_FOR_DOC` is valid. See TODO
      
      * commenting out equivalence test as can just use common
      
      * debugging
      
      * Fixes bug where mask and pos_ids were swapped in pretrained models
      This results in all tests passing now 🔥
      
      
      
      * cleanup of modeling file
      
      * cleanup of test file
      
      * Resolving simpler review comments
      
      * addresses more minor review comments
      
      * fixing introduced pytest errors from review
      
      * wip additional slow tests
      
      * wip tests
      need to grab a GPU machine to get real logits for comparison
      otherwise, slow tests should be okay
      
      * `make quality`, `make style`
      
      * adds slow integration tests
      - checking logits
      - checking hidden states
      - checking generation outputs
      
      * `make fix-copies`
      
      * fix mangled function following `make fix-copies`
      
      * adds missing type checking imports
      
      * fixes missing parameter checkpoint warning
      
      * more finegrained 'Copied from' tags
      avoids issue of overwriting `LLAMA_INPUTS_DOCSTRING`
      
      * swaps import guards
      ??? how did these get swapped initially?
      
      * removing `inv_freq` again as pytorch version has now removed
      
      * attempting to get CI to pass
      
      * adds doc entries for llama flax models
      
      * fixes typo in __init__.py imports
      
      * adds back special equivalence tests
      these come from the gpt neo flax tests. there is special behaviour for these models that needs to override the common version
      
      * overrides tests with dummy to see if CI passes
      need to fill in these tests later
      
      * adds my contribution to docs
      
      * `make style; make quality`
      
      * replaces random masking with fixed to work with flax version
      
      * `make quality; make style`
      
      * Update src/transformers/models/llama/modeling_flax_llama.py
      
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * Update src/transformers/models/llama/modeling_flax_llama.py
      
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * Update src/transformers/models/llama/modeling_flax_llama.py
      
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * Update src/transformers/models/llama/modeling_flax_llama.py
      
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * Update src/transformers/models/llama/modeling_flax_llama.py
      
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * Update src/transformers/models/llama/modeling_flax_llama.py
      
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * updates `x`->`tensor` in `rotate_half`
      
      * addresses smaller review comments
      
      * Update docs/source/en/model_doc/llama.md
      
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * adds integration test class
      
      * adds `dtype` to rotary embedding to cast outputs
      
      * adds type to flax llama rotary layer
      
      * `make style`
      
      * `make fix-copies`
      
      * Apply suggestions from code review
      
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * applies suggestions from review
      
      * Update modeling_flax_llama.py
      
      * `make fix-copies`
      
      * Update tests/models/llama/test_modeling_llama.py
      
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * Update src/transformers/models/llama/modeling_flax_llama.py
      
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
      
      * fixes shape mismatch in FlaxLlamaMLP
      
      * applies some suggestions from reviews
      
      * casts attn output logits to f32 regardless of dtype
      
      * adds attn bias using `LlamaConfig.attention_bias`
      
      * adds Copied From comments to Flax Llama test
      
      * mistral and persimmon test change -copy from llama
      
      * updates docs index
      
      * removes Copied from in tests
      
      it was preventing `make fix-copies` from succeeding
      
      * quality and style
      
      * ignores FlaxLlama input docstring
      
      * adds revision to `_CHECKPOINT_FOR_DOC`
      
      * repo consistency and quality
      
      * removes unused import
      
      * removes copied from from Phi test
      
      now diverges from llama tests following FlaxLlama changes
      
      * adds `_REAL_CHECKPOINT_FOR_DOC`
      
      * removes refs from pr tests
      
      * reformat to make ruff happy
      
      ---------
      
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
    • Fix beam score calculation issue for JAX version (#27816) · 7fc80724
      Xin Qiu authored
      * Fix beam score calculation issue for JAX
      
      * Fix abstract tracer value errors
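The commit message does not spell out the exact defect; as background, beam search ranks hypotheses by a length-normalized sum of token log-probabilities, so an off-by-one in the length or a wrong running sum shifts the ranking. The standard score, sketched (this is the generic formulation, not the specific JAX fix):

```python
def beam_score(sum_log_probs, length, length_penalty=1.0):
    # Length-normalized beam score: dividing the summed log-probabilities
    # by length**penalty keeps longer hypotheses competitive; penalty > 1
    # favors longer sequences, penalty < 1 favors shorter ones.
    return sum_log_probs / (length ** length_penalty)
```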
  2. 06 Dec, 2023 4 commits
  3. 05 Dec, 2023 13 commits
    • Update CUDA versions for DeepSpeed (#27853) · acd65316
      Zach Mueller authored
      * Update CUDA versions
      
      * For testing
      
      * Allow for workflow dispatch
      
      * Use newer image
      
      * Revert workflow
      
      * Revert workflow
      
      * Push
      
      * Other docker image
    • [`Docs`] Update broken image on fused modules (#27856) · ba52dec4
      Younes Belkada authored
      Update quantization.md
    • Documentation: Spanish translation of perplexity.mdx (#27807) · da1d0d40
      Aaron Jimenez authored
      * Copy perplexity.md file to es/ folder
      
      * Adding perplexity to es/_toctree.yml
      
      * Translate first section
      
      * Calculating PPL section translate
      
      * Example section translate
      
* fix translation of log-likelihood
      
      * Fix title translate
      
      * Fix \ in second paragraph
      
      * Change verosimilitud for log-likelihood
      
      * Run 'make style'
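For reference, the quantity the translated tutorial computes: perplexity is the exponentiated average negative log-likelihood of the evaluated tokens. A minimal sketch:

```python
import math

def perplexity(token_log_probs):
    # exp of the mean negative log-likelihood; a model that assigns every
    # token probability 1/k has perplexity exactly k.
    return math.exp(-sum(token_log_probs) / len(token_log_probs))
```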
    • Vedat Baday · 788730c6
    • Yih-Dar
    • [VitDet] Fix test (#27832) · 28e2887a
      NielsRogge authored
      Address test
    • [Time series] Add PatchTSMixer (#26247) · b242d0f2
      Arindam Jati authored
      
      * patchtsmixer initial commit
      
* x,y->context_values,target_values, unittest added
      
      * cleanup code
      
      * minor
      
      * return hidden states
      
      * model tests, partial integration tests
      
      * ettm notebook temporary
      
      * minor
      
      * config mask bug fix, tests updated
      
      * final ETT notebooks
      
      * add selfattn
      
      * init
      
      * added docstrings
      
      * PatchTSMixerForPretraining -> PatchTSMixerForMaskPretraining
      
      * functionality tests added
      
      * add start and input docstrings
      
      * docstring edits
      
      * testcase edits
      
      * minor changes
      
      * docstring error fixed
      
      * ran make fixup
      
      * finalize integration tests and docs
      
      * minor
      
      * cleaned gitignore
      
      * added dataclass decorator, ran black formatter
      
      * ran ruff
      
      * formatting
      
      * add slow decorator
      
      * renamed in_Channel to input_size and default to 1
      
      * shorten dataclass names
      
      * use smaller model for testing
      
      * moved the 3 heads to the modeling file
      
      * use scalers instead of revin
      
      * support forecast_channel_indices
      
      * fix regression scaling
      
      * undo reg. scaling
      
      * removed unneeded classes
      
      * forgot missing
      
      * add more layers
      
      * add copied positional_encoding
      
      * use patchmask from patchtst
      
      * removed dependency on layers directory
      
      * formatting
      
      * set seed
      
      * removed unused imports
      
      * fixed forward signature test
      
      * adding distributional head for PatchTSMixerForecasting
      
      * add generate to forecast
      
      * testcases for generate
      
      * add generate and distributional head for regression
      
* raise Exception for negative values for negative binomial distribution
      
      * formatting changes
      
      * remove copied from patchtst and add TODO for test passing
      
      * make copies
      
      * doc edits
      
      * minor changes
      
      * format issues
      
      * minor changes
      
      * minor changes
      
      * format docstring
      
      * change some class names to PatchTSMixer + class name
      
      Transpose to PatchTSMixerTranspose
      GatedAttention to PatchTSMixerGatedAttention
      
      * change NormLayer to PatchTSMixerNormLayer
      
      * change MLP to PatchTSMixerMLP
      
      * change PatchMixer to PatchMixerBlock, FeatureMixer to FeatureMixerBlock
      
      * change ChannelFeatureMixer to ChannelFeatureMixerBlock
      
      * change PatchMasking to PatchTSMixerMasking
      
      * change Patchify to PatchTSMixerPatchify
      
      * list to `list`
      
      * fix docstrings
      
      * formatting
      
      * change bs to batch_size, edit forecast_masking
      
      * edit random_masking
      
      * change variable name and update docstring in PatchTSMixerMasking
      
      * change variable name and update docstring in InjectScalerStatistics4D
      
      * update forward call in PatchTSMixerTranspose
      
      * change variable name and update docstring in PatchTSMixerNormLayer
      
      * change variable name and update docstring in PatchTSMixerMLP
      
      * change variable name and update docstring in ChannelFeatureMixerBlock
      
      * formatting
      
      * formatting issues
      
      * docstring issue
      
      * fixed observed_mask type in docstrings
      
      * use FloatTensor type
      
      * formatting
      
      * fix rescaling issue in forecasting, fixed integration tests
      
      * add docstring from decorator
      
      * fix docstring
      
      * Update README.md
      
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/patchtsmixer/configuration_patchtsmixer.py
      
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/patchtsmixer/modeling_patchtsmixer.py
      
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/patchtsmixer/configuration_patchtsmixer.py
      
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * Update src/transformers/models/patchtsmixer/modeling_patchtsmixer.py
      
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      
      * PatchTSMixerChannelFeatureMixerBlock
      
      * formatting
      
      * ForPretraining
      
      * use num_labels instead of n_classes
      
      * remove commented out code
      
      * docstring fixed
      
      * nn.functional used instead of one letter F
      
      * x_tmp renamed
      
      * one letter variable x removed from forward calls
      
      * one letter variable y removed
      
      * remove commented code
      
      * rename patch_size, in_channels, PatchTSMixerBackbone
      
      * add config to heads
      
      * add config to heads tests
      
      * code reafactoring to use config instead of passing individual params
      
* docstring fixes part 1
      
      * docstring fixes part 2
      
      * removed logger.debug
      
      * context_values -> past_values
      
      * formatting changes
      
      * pe -> positional_encoding
      
      * removed unused target variable
      
      * self.mode logic fixed
      
      * formatting change
      
      * edit docstring and var name
      
      * change n_targets to num_targets
      
      * rename input_size to num_input_channels
      
      * add head names with prefix PatchTSMixer
      
      * edit docstring in PatchTSMixerForRegression
      
      * fix var name change in testcases
      
      * add PatchTSMixerAttention
      
      * return dict for all exposed classes, test cases added
      
      * format
      
      * move loss function to forward call
      
      * make style
      
      * adding return dict/tuple
      
      * make repo-consistency
      
      * remove flatten mode
      
      * code refactoring
      
      * rename data
      
      * remove PatchTSMixer and keep only PatchTSMixerEncoder
      
      * docstring fixes
      
      * removed unused code
      
      * format
      
      * format
      
      * remove contiguous and formatting changes
      
      * remove model description from config
      
      * replace asserts with ValueError
      
      * remove nn.Sequential from PatchTSMixerNormLayer
      
      * replace if-else with map
      
      * remove all nn.Sequential
      
      * format
      
      * formatting
      
      * fix gradient_checkpointing error after merge, and formatting
      
      * make fix-copies
      
      * remove comments
      
      * reshape
      
      * doesnt support gradient checkpointing
      
* correct Patchify
      
      * masking updates
      
      * batchnorm copy from
      
      * format checks
      
      * scaler edits
      
      * remove comments
      
      * format changes
      
      * remove self.config
      
      * correct class PatchTSMixerMLP(nn.Module):
      
* make fix
      
      * doc updates
      
      * fix-copies
      
      * scaler class correction
      
      * doc edits
      
      * scaler edits
      
      * update readme with links
      
      * injectstatistics add
      
      * fix-copies
      
      * add norm_eps option to LayerNorm
      
      * format changes
      
      * fix copies
      
      * correct make copies
      
      * use parametrize
      
      * fix doc string
      
      * add docs to toctree
      
      * make style
      
      * doc segmenting
      
      * docstring edit
      
      * change forecast to prediction
      
      * edit doc
      
      * doc edits
      
      * remove PatchTSMixerTranspose
      
      * add PatchTSMixerPositionalEncoding and init position_enc
      
      * remove positional_encoding
      
      * edit forecast_masking, remove forecast_mask_ratios
      
      * fix broken code
      
      * var rename target_values -> future_values
      
      * num_features -> d_model
      
      * fix broken code after master merge
      
      * repo consistency
      
* use positional embedding
      
      * prediction_logits -> prediction_outputs, make fix-copies
      
      * uncommented @slow
      
      * minor changes
      
      * loss first in tuple
      
      * tuple and dict same ordering
      
      * style edits
      
      * minor changes
      
      * dict/tuple consistent enablement
      
      * Update src/transformers/models/patchtsmixer/modeling_patchtsmixer.py
      
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update tests/models/patchtsmixer/test_modeling_patchtsmixer.py
      
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/models/patchtsmixer/modeling_patchtsmixer.py
      
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix formatting
      
      * formatting
      
      * usage tip
      
      * test on cpu only
      
      * add sample usage
      
      * change PatchTSMixerForClassification to PatchTSMixerForTimeSeriesClassification
      
      * push changes
      
      * fix copies
      
      * std scaling set to default True case
      
      * minor changes
      
* style changes
      
      ---------
      
Co-authored-by: Arindam Jati <arindam.jati@ibm.com>
      Co-authored-by: vijaye12 <vijaye12@in.ibm.com>
      Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
      Co-authored-by: nnguyen <nnguyen@us.ibm.com>
      Co-authored-by: vijaye12 <vijaykr.e@gmail.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Nam Nguyen <namctin@gmail.com>
      Co-authored-by: Wesley Gifford <79663411+wgifford@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
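At the core of the model added above is cutting each channel of a time series into (possibly overlapping) patches before mixing across patches and channels. A toy version of that patchify step, assuming a simple stride/patch-length scheme (the real `PatchTSMixerPatchify` works on batched tensors):

```python
def patchify(series, patch_length, stride):
    # Cut a 1-D series into windows of `patch_length`, advancing by
    # `stride`; PatchTSMixer-style models then mix across the resulting
    # patch dimension instead of attending over raw timesteps.
    return [series[i:i + patch_length]
            for i in range(0, len(series) - patch_length + 1, stride)]
```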
    • Bram Willemsen
    • [`ClipVision`] `accelerate` support for clip-vision (#27851) · 3e68944c
      Younes Belkada authored
      support accelerate for clip-vision
    • Generate: Update VisionEncoderDecoder test value (#27850) · b7e6d120
      Joao Gante authored
      update test result, due to bug fix in decoder-only beam search
    • Faster generation using AWQ + Fused modules (#27411) · fdb85be4
      Younes Belkada authored
      
      * v1 fusing modules
      
      * add fused mlp support
      
      * up
      
      * fix CI
      
      * block save_pretrained
      
      * fixup
      
      * small fix
      
      * add new condition
      
      * add v1 docs
      
      * add some comments
      
      * style
      
      * fix nit
      
      * adapt from suggestion
      
      * add check
      
      * change arg names
      
      * change variables name
      
      * Update src/transformers/integrations/awq.py
      
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * style
      
      * split up into 3 different private methods
      
      * more conditions
      
      * more checks
      
      * add fused tests for custom models
      
      * fix
      
      * fix tests
      
      * final update docs
      
      * final fixes
      
      * fix importlib metadata
      
      * Update src/transformers/utils/quantization_config.py
      
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * change it to `do_fuse`
      
      * nit
      
      * Update src/transformers/utils/quantization_config.py
      
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      
      * Update src/transformers/utils/quantization_config.py
      
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      
      * Update src/transformers/utils/quantization_config.py
      
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      
      * few fixes
      
      * revert
      
      * fix test
      
      * fix copies
      
      * raise error if model is not quantized
      
      * add test
      
      * use quantization_config.config when fusing
      
      * Update src/transformers/modeling_utils.py
      
      ---------
      
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
    • Make image processors more general (#27690) · df40edfb
      NielsRogge authored
      * Make image processors more general
      
      * Add backwards compatibility for KOSMOS-2
      
      * Remove use_square_size everywhere
      
      * Remove script
    • pin `ruff==0.1.5` (#27849) · 96f9caa1
      Yih-Dar authored
      
      fix
      
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
  4. 04 Dec, 2023 8 commits