1. 12 Oct, 2024 1 commit
  2. 11 Oct, 2024 15 commits
  3. 10 Oct, 2024 16 commits
    • Matthew Hoffman's avatar
      Default `synced_gpus` to `True` when using `FullyShardedDataParallel` (#33483) · 70b07d97
      Matthew Hoffman authored
      * Default synced_gpus to True when using FullyShardedDataParallel
      
      Fixes #30228
      
      Related:
      
      * https://github.com/pytorch/pytorch/issues/100069
      * https://github.com/pytorch/pytorch/issues/123962
      
      Similar to DeepSpeed ZeRO Stage 3, when using FSDP with multiple GPUs and differently sized data per rank, the ranks end up at different synchronization points, which leads to deadlock.
      
      To avoid this, we can automatically set synced_gpus to True if we detect that a PreTrainedModel is being managed by FSDP via _is_fsdp_managed_module, which was added in torch 2.0.0 for torch.compile: https://github.com/pytorch/pytorch/blob/v2.0.0/torch/distributed/fsdp/_dynamo_utils.py
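      
      A minimal sketch of the detection described above, assuming a hypothetical helper name (`resolve_synced_gpus` is illustrative, not the PR's actual diff):
      
      ```python
      # Hedged sketch: default synced_gpus to True for FSDP-managed models so that
      # every rank keeps calling forward() until all ranks have finished generating.
      from typing import Optional
      
      import torch
      from torch.distributed.fsdp._dynamo_utils import _is_fsdp_managed_module  # torch >= 2.0.0
      
      def resolve_synced_gpus(model: torch.nn.Module, synced_gpus: Optional[bool] = None) -> bool:
          # Illustrative helper, not the actual change in #33483.
          if synced_gpus is None:
              return _is_fsdp_managed_module(model)
          return synced_gpus
      ```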
      
      
      
      * Remove test file
      
      * ruff formatting
      
      * ruff format
      
      * Update copyright year
      
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Add test for FSDP-wrapped model generation
      
      Before #33483, these tests would have hung for 10 minutes before crashing due to a timeout error
      
      * Ruff format
      
      * Move argparse import
      
      * Remove barrier
      
      I think this might cause more problems if one of the workers was killed
      
      * Move import into function to decrease load time
      
      https://github.com/huggingface/transformers/pull/33483#discussion_r1787972735
      
      * Add test for accelerate and Trainer
      
      https://github.com/huggingface/transformers/pull/33483#discussion_r1790309675
      
      
      
      * Refactor imports
      
      * Ruff format
      
      * Use nullcontext
      
      ---------
      
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      70b07d97
    • Mohamed Mekkouri's avatar
      Small Fix to modular converter (#34051) · 24b82f3c
      Mohamed Mekkouri authored
      * small_fix
      
      * supporting both src/transformers and examples/
      
      * make style
      24b82f3c
    • Ekaterina Aidova's avatar
    • Pavel Iakubovskii's avatar
      Update Blip2 `is_pipeline_test_to_skip` method signature (#34067) · 8363fd83
      Pavel Iakubovskii authored
      Update method signature
      8363fd83
    • Yoach Lacombe's avatar
      [TESTS] ASR pipeline (#33925) · e7dfb917
      Yoach Lacombe authored
      * fix whisper translation
      
      * correct slow_unfinished_sequence test
      
      * make fixup
      e7dfb917
    • Mohamed Mekkouri's avatar
      Fix data_seed unused (#33731) · a37a06a2
      Mohamed Mekkouri authored
      * fixing data_seed unused
      
      * fix accelerate version needed
      
      * fix style
      
      * update the fix following accelerate fix
      a37a06a2
    • Michael Goin's avatar
      [Docs] Update compressed_tensors.md (#33961) · b2f09fb9
      Michael Goin authored
      
      * Update compressed_tensors.md
      
      Fix some unfinished sections
      
      * Update docs/source/en/quantization/compressed_tensors.md
      
      Co-authored-by: Xiao Yuan <yuanx749@gmail.com>
      
      ---------
      
      Co-authored-by: Xiao Yuan <yuanx749@gmail.com>
      b2f09fb9
    • Mohamed Abu El-Nasr's avatar
      check if eigenvalues of covariance matrix are complex. (#34037) · 4a3f1a68
      Mohamed Abu El-Nasr authored
      check if eigenvalues of the covariance matrix are complex for PSD checking
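      
      A hedged sketch of that kind of check (illustrative helper, not the exact code from #34037):
      
      ```python
      # torch.linalg.eigvals returns complex eigenvalues, so verify the imaginary
      # parts are (numerically) zero before checking non-negativity for PSD.
      import torch
      
      def is_positive_semidefinite(cov: torch.Tensor, atol: float = 1e-8) -> bool:
          eigvals = torch.linalg.eigvals(cov)
          if eigvals.imag.abs().max() > atol:
              return False  # genuinely complex eigenvalues: not a valid symmetric covariance
          return bool((eigvals.real >= -atol).all())
      ```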
      4a3f1a68
    • Daniel Korat's avatar
      Universal Assisted Generation: Assisted generation with any assistant model... · fb0c6b52
      Daniel Korat authored
      Universal Assisted Generation: Assisted generation with any assistant model (by Intel Labs) (#33383)
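      
      A hedged usage sketch of the feature (model names are illustrative; assumes the `tokenizer`/`assistant_tokenizer` generate() kwargs mentioned in the changelog below):
      
      ```python
      # Universal assisted generation: the assistant model may use a different
      # tokenizer than the target model, so both tokenizers are passed to generate().
      from transformers import AutoModelForCausalLM, AutoTokenizer
      
      target_id = "google/gemma-2-9b"        # illustrative
      assistant_id = "double7/vicuna-68m"    # illustrative
      tokenizer = AutoTokenizer.from_pretrained(target_id)
      assistant_tokenizer = AutoTokenizer.from_pretrained(assistant_id)
      model = AutoModelForCausalLM.from_pretrained(target_id)
      assistant = AutoModelForCausalLM.from_pretrained(assistant_id)
      
      inputs = tokenizer("The quick brown fox", return_tensors="pt")
      outputs = model.generate(
          **inputs,
          assistant_model=assistant,
          tokenizer=tokenizer,
          assistant_tokenizer=assistant_tokenizer,
      )
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```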
      
      * Update candidate_generator.py
      
      * Update utils.py
      
      * add lookbehind params to _get_candidate_generator
      
      * make fixup
      
      * add unit tests
      
      * fix failing tests
      
      * add docstrings
      
      * fix docstrings; remove non-optimized AnyTokenizer
      
      * added any tokenizer generation correctness test
      
      * make fixup
      
      * fix assertion syntax
      
      * PR review fixes
      
      * address additional PR comments
      
      * fix tests
      
      * remove stopping criteria arg
      
      * make fixup
      
      * add AssistantConfig
      
      * fix prev_tokens branching
      
      * pass tokenizers through `generate()` kwargs
      
      * fix lookbehind values; tokenizer params WIP
      
      * fixup
      
      * AssistantConfig
      
      * remove AssistantConfig; apply PR suggestions
      
      * restructure tests
      
      * fixup
      
      * fix assistant_tokenizer arg validation
      
      * fixup
      
      * fix tests in TestAssistedCandidateGeneratorDifferentTokenizers
      
      * fix class docstring
      
      * PR suggestions
      
      * doc
      
      * doc update and improvements to `_validate_assistant()`
      
      ---------
      
      Co-authored-by: mosheber <moshe.berchansky@intel.com>
      fb0c6b52
    • Hamza Tahboub's avatar
      Specifying torch dtype in Qwen2VLForConditionalGeneration (#33953) · dda3f91d
      Hamza Tahboub authored
      * Specifying torch dtype
      
      * Reverting change & changing fallback _from_config() dtype
      dda3f91d
    • Matt's avatar
      Sync QuestionAnsweringPipeline (#34039) · f8a260e2
      Matt authored
      * Sync QuestionAnsweringPipeline
      
      * typo fixes
      
      * Update deprecation warnings
      f8a260e2
    • Vladislav Bronzov's avatar
      Add gguf support for gpt2 (#34044) · c9afee53
      Vladislav Bronzov authored
      * add gpt2 gguf support
      
      * add doc change
      
      * small refactoring
      c9afee53
    • Pavel Iakubovskii's avatar
      Fix pipelines tests (#34049) · 66e08dba
      Pavel Iakubovskii authored
      * Fix wrong skip annotation
      
      * Remove error raise
      66e08dba
    • Dani Martí's avatar
      HfArgumentParser: allow for hyphenated field names in long-options (#33990) · a84c4137
      Dani Martí authored
      
      Allow for hyphenated field names in long-options
      
      argparse converts hyphens into underscores before assignment (e.g., an
      option passed as `--long-option` will be stored under `long_option`), so
      there is no need to pass options as literal attributes, as in
      `--long_option` (with an underscore instead of a hyphen). This commit
      ensures that this behavior is respected by `parse_args_into_dataclasses`
      as well.
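      
      A minimal illustration of the argparse behavior described (plain argparse, not HfArgumentParser itself):
      
      ```python
      import argparse
      
      parser = argparse.ArgumentParser()
      parser.add_argument("--long-option", type=int, default=0)
      
      # argparse rewrites hyphens to underscores before assignment,
      # so the value is stored under `long_option`.
      args = parser.parse_args(["--long-option", "3"])
      print(args.long_option)  # 3
      ```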
      
      Issue: #33933
      
      Co-authored-by: Daniel Marti <mrtidm@amazon.com>
      a84c4137
    • Raushan Turganbay's avatar
      Phi3: fix attn for sliding window (#33586) · adea6754
      Raushan Turganbay authored
      * fix phi3 attn for sliding window
      
      * fix tests
      
      * address most comments
      
      * style
      
      * update after rebase
      
      * add more models
      
      * fix tests
      adea6754
    • Avishai Elmakies's avatar
      add sdpa to OPT (#33298) · a265600c
      Avishai Elmakies authored
      
      * add sdpa to OPT
      
      * chore: remove redundant whitespace in OPTDecoder class
      
      * fixup
      
      * bug fix
      
      * add sdpa and attention generate test
      
      * fixup
      
      * Refactor OPTAttention forward method for improved readability and maintainability
      
      * undo refactor for _shape and key,val states
      
      * add OPT to doc, fixup didn't find it for some reason
      
      * change order
      
      * change default attn_implementation in testing to eager
      
      * [run-slow] opt
      
      * change test_eager_matches_sdpa_generate to the one from llama
      
      * Update default attention implementation in testing common
      
      * [run-slow] opt
      
      * remove unneeded print
      
      * [run-slow] opt
      
      * refactor model testers to have attn_implementation="eager"
      
      * [run-slow] opt
      
      * convert test_eager_matches_sdpa_generate to opt-350M
      
      * bug fix when creating mask for opt
      
      * [run-slow] opt
      
      * if layer head mask is provided, default to eager
      
      * if head mask is not None, fall back to eager
      
      * [run-slow] opt
      
      * Update src/transformers/models/opt/modeling_opt.py
      
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      
      * Clean up Unpack imports (#33631)
      
      clean up Unpack imports
      
      * Fix DPT /Dinov2 sdpa regression on main (#33660)
      
      * fallback to eager if output attentions.
      
      * fix copies
      
      * handle dependency errors in check_imports (#33622)
      
      * handle dependency errors in check_imports
      
      * change log level to warning
      
      * add back self.max_position_embeddings = config.max_position_embeddings (#33550)
      
      * add back self.max_position_embeddings = config.max_position_embeddings
      
      * fix-copies
      
      * Fix Llava conversion for LlavaQwen2ForCausalLM with Clip vision tower (#33613)
      
      fix llavaqwen2 model conversion
      
      * Uniformize kwargs for Udop processor and update docs (#33628)
      
      * Add optional kwargs and uniformize udop
      
      * cleanup Unpack
      
      * nit Udop
      
      * Generation: deprecate `PreTrainedModel` inheriting from `GenerationMixin`  (#33203)
      
      * Enable BNB multi-backend support (#31098)
      
      * enable cpu bnb path
      
      * fix style
      
      * fix code style
      
      * fix 4 bit path
      
      * Update src/transformers/utils/import_utils.py
      
      Co-authored-by: Aarni Koskela <akx@iki.fi>
      
      * add multi backend refactor tests
      
      * fix style
      
      * tweak 4bit quantizer + fix corresponding tests
      
      * tweak 8bit quantizer + *try* fixing corresponding tests
      
      * fix dequant bnb 8bit
      
      * account for Intel CPU in variability of expected outputs
      
      * enable cpu and xpu device map
      
      * further tweaks to account for Intel CPU
      
      * fix autocast to work with both cpu + cuda
      
      * fix comments
      
      * fix comments
      
      * switch to testing_utils.torch_device
      
      * allow for xpu in multi-gpu tests
      
      * fix tests 4bit for CPU NF4
      
      * fix bug with is_torch_xpu_available needing to be called as func
      
      * avoid issue where test reports attr err due to other failure
      
      * fix formatting
      
      * fix typo from resolving of merge conflict
      
      * polish based on last PR review
      
      Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      
      * fix CI
      
      * Update src/transformers/integrations/integration_utils.py
      
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Update src/transformers/integrations/integration_utils.py
      
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * fix error log
      
      * fix error msg
      
      * add \n in error log
      
      * make quality
      
      * rm bnb cuda restriction in doc
      
      * cpu model doesn't need dispatch
      
      * fix doc
      
      * fix style
      
      * check cuda available in testing
      
      * fix tests
      
      * Update docs/source/en/model_doc/chameleon.md
      
      Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      
      * Update docs/source/en/model_doc/llava_next.md
      
      Co-authored-by: Aarni Koskela <akx@iki.fi>
      
      * Update tests/quantization/bnb/test_4bit.py
      
      Co-authored-by: Aarni Koskela <akx@iki.fi>
      
      * Update tests/quantization/bnb/test_4bit.py
      
      Co-authored-by: Aarni Koskela <akx@iki.fi>
      
      * fix doc
      
      * fix check multibackends
      
      * fix import sort
      
      * remove check torch in bnb
      
      * docs: update bitsandbytes references with multi-backend info
      
      * docs: fix small mistakes in bnb paragraph
      
      * run formatting
      
      * revert bnb check
      
      * move bnb multi-backend check to import_utils
      
      * Update src/transformers/utils/import_utils.py
      
      Co-authored-by: Aarni Koskela <akx@iki.fi>
      
      * fix bnb check
      
      * minor fix for bnb
      
      * check lib first
      
      * fix code style
      
      * Revert "run formatting"
      
      This reverts commit ac108c6d6b34f45a5745a736ba57282405cfaa61.
      
      * fix format
      
      * give warning when bnb version is low and no cuda found
      
      * fix device assignment check to be multi-device capable
      
      * address akx feedback on get_avlbl_dev fn
      
      * revert partially, as we don't want the function to be that public, as docs would be too much (enforced)
      
      ---------
      
      Co-authored-by: Aarni Koskela <akx@iki.fi>
      Co-authored-by: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com>
      Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * Fix error string after refactoring into get_chat_template (#33652)
      
      * Fix error string after refactoring into get_chat_template
      
      * Take suggestion from CR
      
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      ---------
      
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      
      * uniformize git processor (#33668)
      
      * uniformize git processor
      
      * update docstring
      
      * Modular `transformers`: modularity and inheritance for new model additions (#33248)
      
      * update example
      
      * update
      
      * push the converted diff files for testing and ci
      
      * correct one example
      
      * fix class attributes and docstring
      
      * nits
      
      * oups
      
      * fixed config!
      
      * update
      
      * nit
      
      * class attributes are not matched against the others; this is missing
      
      * fixed overwriting self.xxx now onto the attributes I think
      
      * partial fix, now order with docstring
      
      * fix docstring order?
      
      * more fixes
      
      * update
      
      * fix missing docstrings!
      
      * examples don't all work yet
      
      * fixup
      
      * nit
      
      * updated
      
      * hick
      
      * update
      
      * delete
      
      * update
      
      * update
      
      * update
      
      * fix
      
      * all default
      
      * no local import
      
      * fix more diff
      
      * some fix related to "safe imports"
      
      * push fixed
      
      * add helper!
      
      * style
      
      * add a check
      
      * all by default
      
      * add the
      
      * update
      
      * FINALLY!
      
      * nit
      
      * fix config dependencies
      
      * man that is it
      
      * fix fix
      
      * update diffs
      
      * fix the last issue
      
      * re-default to all
      
      * all the fixes
      
      * nice
      
      * fix properties vs setter
      
      * fixup
      
      * updates
      
      * update dependencies
      
      * make sure to install what needs to be installed
      
      * fixup
      
      * quick fix for now
      
      * fix!
      
      * fixup
      
      * update
      
      * update
      
      * updates
      
      * whitespaces
      
      * nit
      
      * fix
      
      * simplify everything, and make it file agnostic (should work for image processors)
      
      * style
      
      * finish fixing all import issues
      
      * fixup
      
      * empty modeling should not be written!
      
      * Add logic to find who depends on what
      
      * update
      
      * cleanup
      
      * update
      
      * update gemma to support positions
      
      * some small nits
      
      * this is the correct docstring for gemma2
      
      * fix merging of docstrings
      
      * update
      
      * fixup
      
      * update
      
      * take doc into account
      
      * styling
      
      * update
      
      * fix hidden activation
      
      * more fixes
      
      * final fixes!
      
      * fixup
      
      * fixup instruct  blip video
      
      * update
      
      * fix bugs
      
      * align gemma2 with the rest as well
      
      * updates
      
      * revert
      
      * update
      
      * more reversion
      
      * grind
      
      * more
      
      * arf
      
      * update
      
      * order will matter
      
      * finish del stuff
      
      * update
      
      * rename to modular
      
      * fixup
      
      * nits
      
      * update makefile
      
      * fixup
      
      * update order of the checks!
      
      * fix
      
      * fix docstring that has a call inside
      
      * fix conversion check
      
      * style
      
      * add some initial documentation
      
      * update
      
      * update doc
      
      * some fixup
      
      * updates
      
      * yups
      
      * Mostly todo, gimme a minute
      
      * update
      
      * fixup
      
      * revert some stuff
      
      * Review docs for the modular transformers (#33472)
      
      Docs
      
      * good update
      
      * fixup
      
      * mmm current updates lead to this code
      
      * okay, this fixes it
      
      * cool
      
      * fixes
      
      * update
      
      * nit
      
      * updates
      
      * nits
      
      * fix doc
      
      * update
      
      * revert bad changes
      
      * update
      
      * updates
      
      * proper update
      
      * update
      
      * update?
      
      * up
      
      * update
      
      * cool
      
      * nits
      
      * nits
      
      * bon bon
      
      * fix
      
      * ?
      
      * minimise changes
      
      * update
      
      * update
      
      * update
      
      * updates?
      
      * fixed gemma2
      
      * kind of a hack
      
      * nits
      
      * update
      
      * remove `diffs` in favor of `modular`
      
      * fix make fix copies
      
      ---------
      
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * Fix CIs post merging modular transformers (#33681)
      
      update
      
      * Fixed docstring for cohere model regarding unavailability of prune_he… (#33253)
      
      * Fixed docstring for cohere model regarding unavailability of prune_heads() method
      
      The docstring mentions that the cohere model supports the prune_heads() method. I have fixed the docstring by explicitly stating that it doesn't support that functionality.
      
      * Update src/transformers/models/cohere/modeling_cohere.py
      
      ---------
      
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      
      * Generation tests: update imagegpt input name, remove unused functions (#33663)
      
      * Improve Error Messaging for Flash Attention 2 on CPU (#33655)
      
      Update flash-attn error message on CPU
      
      Rebased to latest branch
      
      * Gemma2: fix config initialization (`cache_implementation`) (#33684)
      
      * Fix ByteLevel alphabet missing when Sequence pretokenizer is used (#33556)
      
      * Fix ByteLevel alphabet missing when Sequence pretokenizer is used
      
      * Fixed formatting with `ruff`.
      
      * Uniformize kwargs for image-text-to-text processors (#32544)
      
      * uniformize FUYU processor kwargs
      
      * Uniformize instructblip processor kwargs
      
      * Fix processor kwargs and tests Fuyu, InstructBlip, Kosmos2
      
      * Uniformize llava_next processor
      
      * Fix save_load test for processor with chat_template only as extra init args
      
      * Fix import Unpack
      
      * Fix Fuyu Processor import
      
      * Fix FuyuProcessor import
      
      * Fix FuyuProcessor
      
      * Add defaults for specific kwargs kosmos2
      
      * Fix Udop to return BatchFeature instead of BatchEncoding and uniformize kwargs
      
      * Add tests processor Udop
      
      * remove Copied from in processing Udop as change of input orders caused by BatchEncoding -> BatchFeature
      
      * Fix overwrite tests kwargs processors
      
      * Add warnings and BC for changes in processor inputs order, change docs, add BC for text_pair as arg for Udop
      
      * Fix processing test fuyu
      
      * remove unnecessary pad_token check in instructblip ProcessorTest
      
      * Fix BC tests and cleanup
      
      * Fix imports fuyu
      
      * Uniformize Pix2Struct
      
      * Fix wrong name for FuyuProcessorKwargs
      
      * Fix slow tests reversed inputs align fuyu llava-next, change udop warning
      
      * Fix wrong logging import udop
      
      * Add check images text input order
      
      * Fix copies
      
      * change text pair handling when positional arg
      
      * rebase on main, fix imports in test_processing_common
      
      * remove optional args and udop uniformization from this PR
      
      * fix failing tests
      
      * remove unnecessary test, fix processing utils and test processing common
      
      * cleanup Unpack
      
      * cleanup
      
      * fix conflict grounding dino
      
      * 🚨🚨 Setting default behavior of assisted decoding (#33657)
      
      * tests: fix pytorch tensor placement errors (#33485)
      
      This commit fixes the following errors:
      * Fix "expected all tensors to be on the same device" error
      * Fix "can't convert device type tensor to numpy"
      
      According to the pytorch documentation, torch.Tensor.numpy(force=False)
      performs the conversion only if the tensor is on CPU (plus a few other
      restrictions), which is not the case here. For our case we need force=True
      since we just need the data and don't care about tensor coherency.
      
      Fixes: #33517
      See: https://pytorch.org/docs/2.4/generated/torch.Tensor.numpy.html
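      
      A small illustration of the behavior described above (assumes a CUDA device is available):
      
      ```python
      import torch
      
      t = torch.ones(3, device="cuda")
      # t.numpy()  # raises: can't convert cuda:0 device type tensor to numpy
      arr = t.numpy(force=True)  # force=True detaches and copies to CPU first
      print(arr)  # [1. 1. 1.]
      ```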
      
      
      
      Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
      
      * bump tokenizers, fix added tokens fast (#32535)
      
      * update based on tokenizers release
      
      * update
      
      * nits
      
      * update
      
      * revert re addition
      
      * don't break that yet
      
      * fmt
      
      * revert unwanted
      
      * update tokenizers version
      
      * update dep table
      
      * update
      
      * update in conversion script as well
      
      * some fix
      
      * revert
      
      * fully revert
      
      * fix training
      
      * remove set trace
      
      * fixup
      
      * update
      
      * update
      
      * [Pixtral] Improve docs, rename model (#33491)
      
      * Improve docs, rename model
      
      * Fix style
      
      * Update repo id
      
      * fix code quality after merge
      
      * HFQuantizer implementation for compressed-tensors library (#31704)
      
      * Add compressed-tensors HFQuantizer implementation
      
      * flag serializable as False
      
      * run
      
      * revive lines deleted by ruff
      
      * fixes to load+save from sparseml, edit config to quantization_config, and load back
      
      * address satrat comment
      
      * compressed_tensors to compressed-tensors and revert back is_serializable
      
      * rename quant_method from sparseml to compressed-tensors
      
      * tests
      
      * edit tests
      
      * clean up tests
      
      * make style
      
      * cleanup
      
      * cleanup
      
      * add test skip for when compressed tensors is not installed
      
      * remove pydantic import + style
      
      * delay torch import in test
      
      * initial docs
      
      * update main init for compressed tensors config
      
      * make fix-copies
      
      * docstring
      
      * remove fill_docstring
      
      * Apply suggestions from code review
      
      Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      
      * review comments
      
      * review comments
      
      * comments - suppress warnings on state dict load, tests, fixes
      
      * bug-fix - remove unnecessary call to apply quant lifecycle
      
      * run_compressed compatibility
      
      * revert changes not needed for compression
      
      * no longer need unexpected keys fn
      
      * unexpected keys not needed either
      
      * Apply suggestions from code review
      
      Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      
      * add to_diff_dict
      
      * update docs and expand testing
      
      * Update _toctree.yml with compressed-tensors
      
      * Update src/transformers/utils/quantization_config.py
      
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      
      * update doc
      
      * add note about saving a loaded model
      
      ---------
      
      Co-authored-by: George Ohashi <george@neuralmagic.com>
      Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      Co-authored-by: Sara Adkins <sara@neuralmagic.com>
      Co-authored-by: Sara Adkins <sara.adkins65@gmail.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Dipika Sikka <ds3822@columbia.edu>
      Co-authored-by: Dipika <dipikasikka1@gmail.com>
      
      * update model card for opt
      
      * add batch size to inference table
      
      * [slow-run] opt
      
      * [run-slow] opt
      
      ---------
      
      Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
      Co-authored-by: Avishai Elmakies <avishai.elma@cs.huji.ac.il>
      Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
      Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
      Co-authored-by: chengchengpei <5881383+chengchengpei@users.noreply.github.com>
      Co-authored-by: Isotr0py <2037008807@qq.com>
      Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
      Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
      Co-authored-by: jiqing-feng <jiqing.feng@intel.com>
      Co-authored-by: Aarni Koskela <akx@iki.fi>
      Co-authored-by: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com>
      Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
      Co-authored-by: Tibor Reiss <75096465+tibor-reiss@users.noreply.github.com>
      Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <hi@lysand.re>
      Co-authored-by: Muhammad Naufil <m.naufil1@gmail.com>
      Co-authored-by: sizhky <yyeshr@gmail.com>
      Co-authored-by: Umar Butler <umar@umar.au>
      Co-authored-by: Jonathan Mamou <jonathan.mamou@intel.com>
      Co-authored-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
      Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
      Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
      Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>
      Co-authored-by: George Ohashi <george@neuralmagic.com>
      Co-authored-by: Sara Adkins <sara@neuralmagic.com>
      Co-authored-by: Sara Adkins <sara.adkins65@gmail.com>
      Co-authored-by: Dipika Sikka <ds3822@columbia.edu>
      Co-authored-by: Dipika <dipikasikka1@gmail.com>
      a265600c
  4. 09 Oct, 2024 8 commits