1. 13 Apr, 2023 9 commits
  2. 12 Apr, 2023 11 commits
    • Fix docstrings for TF BLIP (#22618) · 50f82e12
      Matt authored
      * Fix docstrings for TFBLIP
      
      * Fix missing line in TF port!
      
      * Use values from torch tests now other bugs fixed
      
      * Fix doctest string
    • Update warning levels (#22727) · ce06e478
      NielsRogge authored
      * Use different level
      
      * Remove futurewarning
      
      * Use warning_once
      
      * Update copies
    • add fast support and option (#22724) · 98581954
      Arthur authored
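The "warning_once" behavior adopted above can be sketched in plain Python. This is a minimal stand-in, not the actual `transformers` logging implementation: it memoizes on the message with `functools.lru_cache`, so repeated identical warnings are emitted only once.

```python
import functools
import logging

logger = logging.getLogger("demo")

@functools.lru_cache(maxsize=None)
def warning_once(message: str) -> None:
    # lru_cache keys on the message text, so a repeated identical
    # warning hits the cache instead of logging again.
    logger.warning(message)

for _ in range(3):
    warning_once("this argument is deprecated")  # logged a single time
```

The trade-off is that messages are cached for the life of the process, which is exactly what you want for deprecation notices that would otherwise flood logs in a loop.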
      * add fast support and option
      
      * update based on review
      
      * Apply suggestions from code review
      
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * Update src/transformers/models/llama/convert_llama_weights_to_hf.py
      
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * nit
      
      * add print
      
      * fixup
      
      ---------
      
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
    • `torch.distributed` group initialization for `torch_neuron` disabled when `optimum-neuron` is installed (#22728) · 10fab90f
      Michael Benayoun authored
      
      * Make the process group initialization not happen if optimum_neuron is installed
      
      * Add warning
      
      * Remove list and added warning
    • [tests] switch to torchrun (#22712) · 1306b7d3
      Stas Bekman authored
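The gating this PR describes — skip `torch.distributed` process-group initialization when `optimum-neuron` is installed — can be sketched with a stdlib package probe. The helper name and the flag are hypothetical; this only illustrates the availability check, not the actual Trainer code.

```python
import importlib.util

def is_optimum_neuron_available() -> bool:
    # Probe the parent package first: find_spec("optimum.neuron")
    # would raise ModuleNotFoundError if "optimum" itself is absent.
    if importlib.util.find_spec("optimum") is None:
        return False
    return importlib.util.find_spec("optimum.neuron") is not None

# Sketch: only initialize the process group ourselves when
# optimum-neuron is not present to manage it.
should_init_process_group = not is_optimum_neuron_available()
```

`importlib.util.find_spec` is the standard way to test for an installed package without importing it, which avoids side effects from the package's own import-time code.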
    • Modify pipeline_tutorial.mdx (#22726) · d87ef00c
      ARKA1112 authored
      generator(model="openai/whisper-large") always returns an error. As the error says, the generator expects an input, just like the .flac file above. The generator object also has no parameter called model. While some parameters, such as batch_size, can be passed to the generator call, a model has to be specified when instantiating the pipeline, not as an argument to the instance.
      
      I believe the correct call should be:
      
      generator = pipeline(model="openai/whisper-large", device=0)
    • [`bnb`] Let's make serialization of int8 models possible (#22177) · 370f0ca1
      Younes Belkada authored
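To illustrate the fix above: the model is bound when the pipeline is constructed, and the call then takes only the input. A minimal sketch — the function name and audio path are placeholders, and actually running it requires `transformers` installed and a GPU for `device=0`:

```python
def transcribe(audio_path: str):
    # Import inside the function so the sketch can be defined
    # without transformers installed.
    from transformers import pipeline

    # Correct: the model is an argument to the pipeline constructor.
    generator = pipeline(model="openai/whisper-large", device=0)

    # The call takes the input (e.g. a .flac file), not a model.
    return generator(audio_path)
```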
      * make serialization of int8 models possible
      
      * make fixup
      
      * add docs
      
      * add ability to push to hub and save pretrained
      
      * fixes
      
      * more addition
      
      * more tests
      
      * fix issues
      
      * change variable
      
      * clearer message
      
      * adapt from suggestions
      
      * few fixes
      
      * remove unused function
      
      * Update src/transformers/utils/quantization_config.py
      
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * address last comments
      
      * last warning
      
      * clarify doc
      
      * protect import
      
      * Update src/transformers/modeling_utils.py
      
      * Apply suggestions from code review
      
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      ---------
      
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
    • add model resources for CPMAnt (new) (#20906) · 523ca4e0
      pioliverse authored
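A rough sketch of what this PR enables: saving and pushing an 8-bit model through the usual `save_pretrained`/`push_to_hub` API. The function name, model id, and repo id are placeholders; actually running this assumes `transformers`, `bitsandbytes`, and `accelerate` are installed.

```python
def save_and_push_int8(model_id: str, out_dir: str, hub_repo: str):
    # Import lazily so the sketch can be defined without the libraries.
    from transformers import AutoModelForCausalLM

    # load_in_8bit quantizes the weights with bitsandbytes at load time.
    model = AutoModelForCausalLM.from_pretrained(
        model_id, load_in_8bit=True, device_map="auto"
    )

    # After PR #22177, these also work for int8-quantized models.
    model.save_pretrained(out_dir)
    model.push_to_hub(hub_repo)
```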
      * resolve conflicts
      
      * rebase and make style
      
      * test
      
      * tests
      
      * rewrite some functions
      
      * fix load_tf_weights_in_cpmant
      
      * reformat some unrelated files
      
      * upgrade quality
      
      * fix some bugs & docstring
      
      * add models and tests
      
      * solve conflicts
      
      * save resolution
      
      * make style
      
      * delete redefinition code
      
      * reformat function
      
      * reformat
      
      * fix bugs and refactor
      
      * modify docstrings and make style
      
      * unify import format in __init__.py
      
      * fix import-altclp bug
      
      * fix copies to update index.md
      
      * fix unused config parameters
      
      * update README_ja.md
      
      * dummy commit for unit test
      
      * fix attention mask
      
      * add CPMAntTokenizer&-Fast to auto-mapping
      
      * drop redundant changes in README_ko
      
      * fix defaults in docstring
      
      * fix use_cache and some docstring
      
      * add missing args in tokenizer
      
      * modify tester inheritance
      
      * add is_jieba_available
      
      * fix some bugs
      
      * make style and fix-copies
      
      * add doctests
      
      * skip integration tests
      
      * fix bugs in common tests
      
      * adjust docstrings and make style
      
      * add argument docstring
      
      * adjust code to some specifications
      
      * add fast tokenization test
      
      * normalize some comments and names
      
      * Bert->CPMAnt
      
      * camel names and drop redundant codes
      
      * add CpmTokenizerFast _import_structure
      
      * drop cpmanttokenizerfast in model_doc
      
      * fix some problems
      
      * fix CPMAnt tokenization for common test
      
      * make style and fixup
      
      * fix copies and fixup
      
      * fix bugs in tokenization test
      
      * dummy commit for connection failure in unittest
      
      * fix copies
      
      * drop trailing comma
      
      * fix decorator in tests
      
      ---------
      
      Co-authored-by: Gong Baitao <gongbaitao11@gmail.com>
    • 17503b00
      jprivera44 authored
    • remove wrong doc in readme (#22723) · b76e6ebd
      Arthur authored
    • Update input values for docstring (#22631) · 5a71977b
      amyeroberts authored
  3. 11 Apr, 2023 7 commits
  4. 10 Apr, 2023 8 commits
  5. 07 Apr, 2023 5 commits