v4.17.0
Release date: 2022-03-03 23:19:06
New models
XGLM
The XGLM model was proposed in Few-shot Learning with Multilingual Language Models by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O’Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
XGLM is a GPT3-like multilingual model trained on a balanced corpus covering a diverse set of languages.
- Add XGLM models by @patil-suraj in https://github.com/huggingface/transformers/pull/14876
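Since XGLM is a causal language model, it can be used for text generation out of the box. A minimal sketch, assuming the `facebook/xglm-564M` checkpoint is available on the Hub (the prompt is illustrative):

```python
from transformers import XGLMTokenizer, XGLMForCausalLM

# Smallest public XGLM checkpoint (an assumption; other XGLM checkpoints work the same way)
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-564M")

# XGLM is multilingual, so prompts in any of its training languages are valid
inputs = tokenizer("La vie est belle parce que", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```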
ConvNeXT
The ConvNeXT model was proposed in A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them.
- Add ConvNeXT by @NielsRogge in https://github.com/huggingface/transformers/pull/15277
- Add TFConvNextModel by @sayakpaul in https://github.com/huggingface/transformers/pull/15750
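As with other vision models in the library, usage follows the feature extractor + model pattern. A minimal image-classification sketch, assuming the `facebook/convnext-tiny-224` checkpoint:

```python
import requests
from PIL import Image
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification

# Sample image from the COCO validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-tiny-224")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits  # ImageNet-1k class logits
print(model.config.id2label[logits.argmax(-1).item()])
```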
PoolFormer
The PoolFormer model was proposed in MetaFormer is Actually What You Need for Vision by Sea AI Labs.
- Add PoolFormer by @heytanay in https://github.com/huggingface/transformers/pull/15531
PLBart
The PLBART model was proposed in Unified Pre-training for Program Understanding and Generation by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.
This is a BART-like model which can be used to perform code summarization, code generation, and code translation tasks. The pre-trained model `plbart-base` has been trained using a multilingual denoising task on Java, Python and English.
- Add PLBart by @gchhablani in https://github.com/huggingface/transformers/pull/13269
- Add missing PLBart entry in README by @gchhablani in https://github.com/huggingface/transformers/pull/15721
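For example, code summarization (Python to English) might look like the following sketch, assuming the fine-tuned `uclanlp/plbart-python-en_XX` checkpoint and the language-code conventions from the PLBart documentation:

```python
from transformers import PLBartForConditionalGeneration, PLBartTokenizer

tokenizer = PLBartTokenizer.from_pretrained(
    "uclanlp/plbart-python-en_XX", src_lang="python", tgt_lang="en_XX"
)
model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-python-en_XX")

code = "def maximum(a, b): return max(a, b)"
inputs = tokenizer(code, return_tensors="pt")
# The decoder is started with the target-language code; "__en_XX__" is assumed
# to follow the PLBart tokenizer's language-code format
summary_ids = model.generate(
    **inputs, decoder_start_token_id=tokenizer.lang_code_to_id["__en_XX__"]
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```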
Data2Vec
The Data2Vec model was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli.
Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images. Importantly, predicted targets for pre-training are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets.
- Add Data2Vec by @edugp in https://github.com/huggingface/transformers/pull/15507
MaskFormer
The MaskFormer model was proposed in Per-Pixel Classification is Not All You Need for Semantic Segmentation by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.
MaskFormer addresses semantic segmentation with a mask classification paradigm instead of performing classic pixel-level classification.
- Maskformer by @FrancescoSaverioZuppichini in https://github.com/huggingface/transformers/pull/15682
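A minimal semantic-segmentation sketch, assuming the `facebook/maskformer-swin-base-ade` checkpoint (trained on ADE20k):

```python
import requests
from PIL import Image
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)  # per-query class logits and mask predictions

# Combine the per-query predictions into a per-pixel semantic map
semantic_map = feature_extractor.post_process_semantic_segmentation(outputs)[0]
```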
Code in the Hub
This is a new experimental feature added to the library. It allows you to share a custom model (with configuration, tokenizer, feature extractor, processor) with anyone through the Model Hub while still using the Auto-classes API of the Transformers library.
See the documentation for more information!
- Allow relative imports in dynamic code by @sgugger in https://github.com/huggingface/transformers/pull/15352
- Save code of registered custom models by @sgugger in https://github.com/huggingface/transformers/pull/15379
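Loading such a model only requires opting in to running code downloaded from the Hub. A minimal sketch (the repository name is a placeholder):

```python
from transformers import AutoModel

# trust_remote_code=True executes the modeling code stored in the repository,
# so only enable it for repositories whose authors you trust.
model = AutoModel.from_pretrained("username/my-custom-model", trust_remote_code=True)
```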
Documentation
We are working on updating the existing guides in the documentation, and writing more!
- Update model share tutorial by @stevhliu in https://github.com/huggingface/transformers/pull/15288
- Get started docs by @stevhliu in https://github.com/huggingface/transformers/pull/15098
- Update fine-tune docs by @stevhliu in https://github.com/huggingface/transformers/pull/15259
- Update tutorial docs by @stevhliu in https://github.com/huggingface/transformers/pull/15165
- Create a custom model guide by @stevhliu in https://github.com/huggingface/transformers/pull/15489
- 🧼 NLP task guides by @stevhliu in https://github.com/huggingface/transformers/pull/15731
- Inference for multilingual models by @stevhliu in https://github.com/huggingface/transformers/pull/15836
Time Stamps for Speech models
Speech models that have been trained with the CTC loss (Wav2Vec2, XLS-R, HuBERT, WavLM, ...) can now output time stamps in addition to the transcription of the input audio. E.g., one can retrieve the start and end time of every transcribed word via the `Wav2Vec2CTCTokenizer.decode` method or the `Wav2Vec2ProcessorWithLM.decode` method. See the documentation here and here respectively.
This feature can also be used directly via the ASR pipeline - see here and this example.
- Add time stamps for wav2vec2 with lm by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15854
- Adding timestamps for CTC with LM in ASR pipeline. by @Narsil in https://github.com/huggingface/transformers/pull/15863
- Adding the option to return_timestamps on pure CTC ASR models. by @Narsil in https://github.com/huggingface/transformers/pull/15792
- Time stamps for CTC models by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15687
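For instance, word-level time stamps can be requested directly from the ASR pipeline. A minimal sketch, assuming a CTC checkpoint such as `facebook/wav2vec2-base-960h` and a local audio file:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# For pure CTC models, return_timestamps="word" adds start/end times per word
result = asr("sample.flac", return_timestamps="word")
print(result["text"])
for chunk in result["chunks"]:
    print(chunk["text"], chunk["timestamp"])  # e.g. ("HELLO", (0.42, 0.61))
```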
Breaking change
Unfortunately, some bugs had crept into `CLIPTokenizerFast`: the tokenization produced by `CLIPTokenizer` and `CLIPTokenizerFast` were not equal. `CLIPTokenizerFast` has been corrected to encode the text with the same strategy as `CLIPTokenizer`.
What does this mean for you? You need to use the tokenizer that was used to train the CLIP model you are using. For example:
- Case 1: you use `openai/clip-vit-base-patch32`, `openai/clip-vit-base-patch16` or `openai/clip-vit-large-patch14`. Before v4.17.0, the correct version of the tokenizer was `CLIPTokenizer`. From v4.17.0, you can use both `CLIPTokenizer` and `CLIPTokenizerFast`.
- Case 2: you have trained your own CLIP model using `CLIPTokenizerFast`. Your tokenizer is no longer a `CLIPTokenizerFast` and we recommend you load your `tokenizer.json` in a `PreTrainedTokenizerFast` directly, or continue to use a version prior to v4.17.0.
- Case 3: you have trained your own CLIP model using `CLIPTokenizer`. Now, you can produce a fast equivalent of your tokenizer by doing `CLIPTokenizerFast.from_pretrained("path to local folder or Hub repo with slow tokenizer files", from_slow=True)`.

To make `CLIPTokenizerFast` identical to `CLIPTokenizer`, the template for the tokenization of a sentence pair (A, B) has been modified. The previous template was `<|startoftext|> A B <|endoftext|>` and the new one is `<|startoftext|> A <|endoftext|> <|endoftext|> B <|endoftext|>`.
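For Cases 2 and 3 above, the migration might look like this sketch (paths are placeholders):

```python
from transformers import CLIPTokenizerFast, PreTrainedTokenizerFast

# Case 2: keep the exact tokenization you trained with by loading your
# tokenizer.json into a generic fast tokenizer
tokenizer = PreTrainedTokenizerFast(tokenizer_file="path/to/tokenizer.json")

# Case 3: rebuild a fast tokenizer from the slow tokenizer files so that it
# now matches CLIPTokenizer exactly
fast_tokenizer = CLIPTokenizerFast.from_pretrained(
    "path/to/folder_with_slow_tokenizer_files", from_slow=True
)
```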
What's Changed
- Fix tests_fetcher by @sgugger in https://github.com/huggingface/transformers/pull/15376
- Fix code format for Accelerate doc by @stevhliu in https://github.com/huggingface/transformers/pull/15335
- Add init to BORT by @LysandreJik in https://github.com/huggingface/transformers/pull/15378
- Set syncfree AdamW as the default optimizer for xla:gpu device in amp mode by @ymwangg in https://github.com/huggingface/transformers/pull/15361
- Fixing support `batch_size` and `num_return_sequences` in `text-generation` pipeline by @Narsil in https://github.com/huggingface/transformers/pull/15318
- Fix `bad_words_ids` not working with sentencepiece-based tokenizers by @ngoquanghuy99 in https://github.com/huggingface/transformers/pull/15343
- [docs] fix wrong file name in `pr_check` by @ngoquanghuy99 in https://github.com/huggingface/transformers/pull/15380
- Prepare deprecated ONNX exporter for torch v1.11 by @lewtun in https://github.com/huggingface/transformers/pull/15388
- [Fix doc example] FlaxMarianPreTrainedModel by @ydshieh in https://github.com/huggingface/transformers/pull/15391
- Make links explicit by @Rocketknight1 in https://github.com/huggingface/transformers/pull/15395
- [deepspeed] saving checkpoint fallback when fp16 weights aren't saved by @stas00 in https://github.com/huggingface/transformers/pull/14948
- Fix missing eps arg for LayerNorm in ElectraGeneratorPredictions by @ydshieh in https://github.com/huggingface/transformers/pull/15332
- Use argument for preprocessing workers in run_summarization by @sgugger in https://github.com/huggingface/transformers/pull/15394
- Add support for XLM-R XL and XXL models by modeling_xlm_roberta_xl.py by @Soonhwan-Kwon in https://github.com/huggingface/transformers/pull/13727
- Fix the inconsistency of loss calculation between PT/TF XLNetLMHeadModel by @ydshieh in https://github.com/huggingface/transformers/pull/15298
- [XGLMTokenizer] fix init and add in AutoTokenizer by @patil-suraj in https://github.com/huggingface/transformers/pull/15406
- Add SegformerFeatureExtractor to Auto API by @NielsRogge in https://github.com/huggingface/transformers/pull/15410
- Fix additional DataTrainingArguments documentation by @FremyCompany in https://github.com/huggingface/transformers/pull/15408
- Add (M)Luke model training for Token Classification in the examples by @jplu in https://github.com/huggingface/transformers/pull/14880
- Update README.md by @kamalkraj in https://github.com/huggingface/transformers/pull/15430
- [Robust Speech Challenge] Add missing LR parameter by @jonatasgrosman in https://github.com/huggingface/transformers/pull/15428
- [XGLM] fix gradient checkpointing by @patil-suraj in https://github.com/huggingface/transformers/pull/15427
- [Hotfix] Fix Swin model outputs by @NielsRogge in https://github.com/huggingface/transformers/pull/15414
- add t5 ner finetuning by @ToluClassics in https://github.com/huggingface/transformers/pull/15432
- Add doc for add-new-model-like command by @sgugger in https://github.com/huggingface/transformers/pull/15433
- [Swin] Add missing header by @NielsRogge in https://github.com/huggingface/transformers/pull/15434
- [deepspeed doc] fix import, extra notes by @stas00 in https://github.com/huggingface/transformers/pull/15400
- Fix loss calculation in TFXXXForTokenClassification models by @ydshieh in https://github.com/huggingface/transformers/pull/15294
- Fix spurious warning in TF TokenClassification models by @Rocketknight1 in https://github.com/huggingface/transformers/pull/15435
- Change REALM checkpoint to new ones by @sgugger in https://github.com/huggingface/transformers/pull/15439
- [Trainer] suppress warning for length-related columns by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15421
- [examples/Flax] add a section about GPUs by @patil-suraj in https://github.com/huggingface/transformers/pull/15198
- Fix TFLEDModel by @ydshieh in https://github.com/huggingface/transformers/pull/15356
- [XGLMTokenizer] correct positional emb size by @patil-suraj in https://github.com/huggingface/transformers/pull/15441
- [RobertaTokenizer] remove inheritance on GPT2Tokenizer by @patil-suraj in https://github.com/huggingface/transformers/pull/15429
- Misfiring tf warnings by @Rocketknight1 in https://github.com/huggingface/transformers/pull/15442
- Add 'with torch.no_grad()' to BEiT integration test forward passes by @itsTurner in https://github.com/huggingface/transformers/pull/14961
- Update modeling_wav2vec2.py by @peregilk in https://github.com/huggingface/transformers/pull/15423
- Error when group_by_length is used with an IterableDataset by @sgugger in https://github.com/huggingface/transformers/pull/15437
- skip large generations pipeline test for XGLM by @patil-suraj in https://github.com/huggingface/transformers/pull/15445
- [generate] fix synced_gpus default by @stas00 in https://github.com/huggingface/transformers/pull/15446
- Remove "inputs" in tf common test script (no longer required) by @ydshieh in https://github.com/huggingface/transformers/pull/15262
- Fix TF Causal LM models' returned logits by @ydshieh in https://github.com/huggingface/transformers/pull/15256
- fix from_vision_text_pretrained doc example by @ydshieh in https://github.com/huggingface/transformers/pull/15453
- [M2M100, XGLM] fix positional emb resize by @patil-suraj in https://github.com/huggingface/transformers/pull/15444
- Update README.md by @kamalkraj in https://github.com/huggingface/transformers/pull/15462
- replace assert with exception for `padding_side` arg in `PreTrainedTokenizerBase` `__init__` by @SaulLu in https://github.com/huggingface/transformers/pull/15454
- fix the `tokenizer_config.json` file for the slow tokenizer when a fast version is available by @SaulLu in https://github.com/huggingface/transformers/pull/15319
- use mean instead of elementwise_mean in XLMPredLayer by @ydshieh in https://github.com/huggingface/transformers/pull/15436
- [BartTokenizer] remove inheritance on RobertaTokenizer by @patil-suraj in https://github.com/huggingface/transformers/pull/15461
- `Trainer.push_to_hub` always tries to push to the Hub by @sgugger in https://github.com/huggingface/transformers/pull/15463
- Harder check for IndexErrors in QA scripts by @sgugger in https://github.com/huggingface/transformers/pull/15438
- Add option to resize like torchvision's Resize by @NielsRogge in https://github.com/huggingface/transformers/pull/15419
- [Wav2Vec2ProcessorWithLM] add alpha & beta to batch decode & decode by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15465
- Adding support for `microphone` streaming within pipeline by @Narsil in https://github.com/huggingface/transformers/pull/15046
- fix error posted in issue #15448 by @bugface in https://github.com/huggingface/transformers/pull/15480
- Fix docstring of ASR pipeline by @sgugger in https://github.com/huggingface/transformers/pull/15481
- Add W&B backend for hyperparameter sweep by @AyushExel in https://github.com/huggingface/transformers/pull/14582
- Fix labels stored in model config for token classification examples by @sgugger in https://github.com/huggingface/transformers/pull/15482
- fix set truncation attribute in `__init__` of `PreTrainedTokenizerBase` by @SaulLu in https://github.com/huggingface/transformers/pull/15456
- Correct eos_token_id settings in generate by @thinksoso in https://github.com/huggingface/transformers/pull/15403
- fix TFMarianMTModel output by @ydshieh in https://github.com/huggingface/transformers/pull/15494
- Cleanup load_weight_prefix in TFEncoderDecoderModel by @ydshieh in https://github.com/huggingface/transformers/pull/15101
- [Flax tests] Disable scheduled GPU tests by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15503
- Add general vision docstrings by @NielsRogge in https://github.com/huggingface/transformers/pull/15501
- [deepspeed] fix a bug in a test by @stas00 in https://github.com/huggingface/transformers/pull/15493
- Add preprocess_logits_for_metrics Trainer param by @davidleonfdez in https://github.com/huggingface/transformers/pull/15473
- [deepspeed docs] memory requirements by @stas00 in https://github.com/huggingface/transformers/pull/15506
- Remove loss from some flax models docs & examples by @ydshieh in https://github.com/huggingface/transformers/pull/15492
- Fix TFElectraForMultipleChoice by @ydshieh in https://github.com/huggingface/transformers/pull/15509
- Handle PyTorch to Flax conversion of 1D convolutions by @sanchit-gandhi in https://github.com/huggingface/transformers/pull/15519
- Fix TFRemBertEncoder all_hidden_states by @ydshieh in https://github.com/huggingface/transformers/pull/15510
- [parallelism docs] Megatron-Deepspeed info by @stas00 in https://github.com/huggingface/transformers/pull/15488
- Standardize semantic segmentation models outputs by @sgugger in https://github.com/huggingface/transformers/pull/15469
- [deepspeed docs] DeepSpeed ZeRO Inference by @stas00 in https://github.com/huggingface/transformers/pull/15486
- Revert "Handle PyTorch to Flax conversion of 1D convolutions" by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15540
- [ASR pipeline] correct asr pipeline for seq2seq models by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15541
- [torch_int_div] Correct true division in generation by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15498
- [Trainer] Deeper length checks for IterableDatasetShard by @anton-l in https://github.com/huggingface/transformers/pull/15539
- Add ASR CTC streaming example by @anton-l in https://github.com/huggingface/transformers/pull/15309
- Wav2Vec2 models must either throw or deal with add_adapter by @FremyCompany in https://github.com/huggingface/transformers/pull/15409
- Remove Longformers from ONNX-supported models by @lewtun in https://github.com/huggingface/transformers/pull/15273
- Fix TF T5/LED missing cross attn in return values by @ydshieh in https://github.com/huggingface/transformers/pull/15511
- Make TF Wav2Vec2 outputs the same as PT's version by @ydshieh in https://github.com/huggingface/transformers/pull/15530
- FX tracing improvement by @michaelbenayoun in https://github.com/huggingface/transformers/pull/14321
- electra is added to onnx supported model by @arron1227 in https://github.com/huggingface/transformers/pull/15084
- [GPTJ] fix docs by @patil-suraj in https://github.com/huggingface/transformers/pull/15558
- Force use_cache to be False in PyTorch by @ydshieh in https://github.com/huggingface/transformers/pull/15385
- Add TFSpeech2Text by @gante in https://github.com/huggingface/transformers/pull/15113
- feat(flax): allow encoder_outputs in generate by @borisdayma in https://github.com/huggingface/transformers/pull/15554
- Add codecarbon callback to docs by @nateraw in https://github.com/huggingface/transformers/pull/15563
- [Flax tests] fix test_model_outputs_equivalence by @patil-suraj in https://github.com/huggingface/transformers/pull/15571
- logger.warn --> logger.warning by @ydshieh in https://github.com/huggingface/transformers/pull/15572
- PoC for a ProcessorMixin class by @sgugger in https://github.com/huggingface/transformers/pull/15549
- add model scaling section by @lvwerra in https://github.com/huggingface/transformers/pull/15119
- Upgrade black to version ~=22.0 by @LysandreJik in https://github.com/huggingface/transformers/pull/15565
- Make sure custom configs work with Transformers by @sgugger in https://github.com/huggingface/transformers/pull/15569
- Add Wav2Vec2 Adapter Weights to Flax by @sanchit-gandhi in https://github.com/huggingface/transformers/pull/15566
- Click new version by @LysandreJik in https://github.com/huggingface/transformers/pull/15579
- [Flax tests/FlaxBert] make from_pretrained test faster by @patil-suraj in https://github.com/huggingface/transformers/pull/15561
- Add implementation of typical sampling by @cimeister in https://github.com/huggingface/transformers/pull/15504
- Constrained Beam Search [without disjunctive decoding] by @cwkeam in https://github.com/huggingface/transformers/pull/15416
- Fix tests hub failure by @sgugger in https://github.com/huggingface/transformers/pull/15580
- update serving_output for some TF models by @ydshieh in https://github.com/huggingface/transformers/pull/15568
- [trainer docs] document how to select specific gpus by @stas00 in https://github.com/huggingface/transformers/pull/15551
- [ViTMAE] Add link to script by @NielsRogge in https://github.com/huggingface/transformers/pull/15588
- Expand tutorial for custom models by @sgugger in https://github.com/huggingface/transformers/pull/15587
- Add Tensorflow handling of ONNX conversion by @Albertobegue in https://github.com/huggingface/transformers/pull/13831
- Add example batch size to all commands by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15596
- Compute loss independent from decoder for TF EncDec models (as #14139) by @ydshieh in https://github.com/huggingface/transformers/pull/15175
- Fix Seq2SeqTrainer for VisionEncoderDecoderModel by @NielsRogge in https://github.com/huggingface/transformers/pull/15603
- Add local and TensorFlow ONNX export examples to docs by @lewtun in https://github.com/huggingface/transformers/pull/15604
- [deepspeed docs] Correct JSON format by @ngoquanghuy99 in https://github.com/huggingface/transformers/pull/15600
- Small clean up generate by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15611
- Mark "code in the Hub" API as experimental by @sgugger in https://github.com/huggingface/transformers/pull/15624
- Enable ONNX export when PyTorch and TensorFlow installed in the same env by @lewtun in https://github.com/huggingface/transformers/pull/15625
- TF: Add informative warning for inexistent CPU backprop ops by @gante in https://github.com/huggingface/transformers/pull/15612
- Add aws studio notebooks by @mishig25 in https://github.com/huggingface/transformers/pull/15606
- TF MT5 embeddings resize by @gante in https://github.com/huggingface/transformers/pull/15567
- Fix broken link in CTRL docs by @stevhliu in https://github.com/huggingface/transformers/pull/15615
- Fix _configuration_file argument getting passed to model by @sgugger in https://github.com/huggingface/transformers/pull/15629
- [deepspeed docs] misc additions by @stas00 in https://github.com/huggingface/transformers/pull/15585
- [research_projects] deal with security alerts by @stas00 in https://github.com/huggingface/transformers/pull/15594
- Custom feature extractor by @sgugger in https://github.com/huggingface/transformers/pull/15630
- Fix grammar in tokenizer_summary docs by @derenrich in https://github.com/huggingface/transformers/pull/15614
- Add push to hub to feature extractor by @sgugger in https://github.com/huggingface/transformers/pull/15632
- [Fix doc example] FlaxVisionEncoderDecoder by @ydshieh in https://github.com/huggingface/transformers/pull/15626
- Fix a bug that QuestionAnsweringPipeline ignores max_seq_len parameter by @wptoux in https://github.com/huggingface/transformers/pull/15238
- Report only the failed imports in `requires_backends` by @tkukurin in https://github.com/huggingface/transformers/pull/15636
- Make Swin work with VisionEncoderDecoderModel by @NielsRogge in https://github.com/huggingface/transformers/pull/15527
- Remove redundant error logging in from_pretrained() method by @lewtun in https://github.com/huggingface/transformers/pull/15631
- Register feature extractor by @sgugger in https://github.com/huggingface/transformers/pull/15634
- fix bug where the log of RNG states not being properly loaded led to an exception by @muzhi1991 in https://github.com/huggingface/transformers/pull/15638
- [SpeechEncoderDecoder] Make sure no EOS is generated in test by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15655
- Require `tokenizers>=0.11.1` by @aphedges in https://github.com/huggingface/transformers/pull/15266
- Fix ASR pipelines from local directories with wav2vec models that have language models attached by @versae in https://github.com/huggingface/transformers/pull/15590
- Fix typo in speech2text2 doc by @jonrbates in https://github.com/huggingface/transformers/pull/15617
- Allow custom code for Processors by @sgugger in https://github.com/huggingface/transformers/pull/15649
- add scores to Wav2Vec2WithLMOutput by @arampacha in https://github.com/huggingface/transformers/pull/15413
- Update bad_words_ids usage by @ngoquanghuy99 in https://github.com/huggingface/transformers/pull/15641
- Updated the RAG training with the latest PyTorch Lightning library and Ray by @shamanez in https://github.com/huggingface/transformers/pull/15653
- Add section about doc testing by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15659
- add a network debug script and document it by @stas00 in https://github.com/huggingface/transformers/pull/15652
- Re-export `KeyDataset` by @Narsil in https://github.com/huggingface/transformers/pull/15645
- Add `decoder_kwargs` to send to LM on asr pipeline by @Narsil in https://github.com/huggingface/transformers/pull/15646
- TF generate refactor - Greedy Search by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15562
- [pipeline doc] fix api by @stas00 in https://github.com/huggingface/transformers/pull/15660
- Fix TFSequenceSummary's activation by @ydshieh in https://github.com/huggingface/transformers/pull/15643
- Fix model equivalence tests by @LysandreJik in https://github.com/huggingface/transformers/pull/15670
- Fix vit test by @LysandreJik in https://github.com/huggingface/transformers/pull/15671
- Add a missing space in a deprecation message by @bryant1410 in https://github.com/huggingface/transformers/pull/15651
- [t5/t0/mt5 models] faster/leaner custom layer norm by @stas00 in https://github.com/huggingface/transformers/pull/14656
- Add push_to_hub method to processors by @sgugger in https://github.com/huggingface/transformers/pull/15668
- Usage examples for logger by @FrancescoSaverioZuppichini in https://github.com/huggingface/transformers/pull/15657
- Fix dec_attn_mask in TFTransfoXLMainLayer by @ydshieh in https://github.com/huggingface/transformers/pull/15665
- 🔥 Remove build_doc_test github action by @coyotte508 in https://github.com/huggingface/transformers/pull/15680
- Add register method to AutoProcessor by @sgugger in https://github.com/huggingface/transformers/pull/15669
- [Wav2Vec2ProcessorWithLM] Fix auto processor with lm by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15683
- Fix Funnel configuration doc by @ydshieh in https://github.com/huggingface/transformers/pull/15686
- Implementation of activations as pytorch modules by @eldarkurtic in https://github.com/huggingface/transformers/pull/15616
- Add image classification notebook by @NielsRogge in https://github.com/huggingface/transformers/pull/15667
- Minor fix on README.md by @ydshieh in https://github.com/huggingface/transformers/pull/15688
- Fix shape by @gchhablani in https://github.com/huggingface/transformers/pull/15696
- Add SimMIM by @NielsRogge in https://github.com/huggingface/transformers/pull/15586
- Adding a model, more doc for pushing to the hub by @FrancescoSaverioZuppichini in https://github.com/huggingface/transformers/pull/15690
- fix CLIP fast tokenizer and change some properties of the slow version by @SaulLu in https://github.com/huggingface/transformers/pull/15067
- Fix SiluActivation by @sgugger in https://github.com/huggingface/transformers/pull/15718
- Add initializer_std to TFFunnelModelTester with a default value 0.02 by @ydshieh in https://github.com/huggingface/transformers/pull/15684
- Fix DETR model deprecation warnings for int div by @gautierdag in https://github.com/huggingface/transformers/pull/15702
- Fix LongformerModel hidden states by @ydshieh in https://github.com/huggingface/transformers/pull/15537
- style_doc handles decorators in examples by @sgugger in https://github.com/huggingface/transformers/pull/15719
- Fix auto model tests by @LysandreJik in https://github.com/huggingface/transformers/pull/15706
- Fix `HfDeepSpeedConfig` argument in `Trainer` by @jaketae in https://github.com/huggingface/transformers/pull/15711
- fix bug in PT speech-encoder-decoder by @sanchit-gandhi in https://github.com/huggingface/transformers/pull/15699
- Fix undoing preprocessing step in summarization example by @SSardorf in https://github.com/huggingface/transformers/pull/15741
- Fix minor comment typos by @Crabzmatic in https://github.com/huggingface/transformers/pull/15740
- add VisionTextDualEncoder and CLIP fine-tuning script by @patil-suraj in https://github.com/huggingface/transformers/pull/15701
- Add layer_idx to CrossAttention of GPT2 model by @hyunwoongko in https://github.com/huggingface/transformers/pull/15730
- TF text classification examples by @gante in https://github.com/huggingface/transformers/pull/15704
- revert temporary addition to test next version of CLIPTokenizerFast by @SaulLu in https://github.com/huggingface/transformers/pull/15717
- added link to our writing-doc document by @FrancescoSaverioZuppichini in https://github.com/huggingface/transformers/pull/15756
- TF train_step docstring by @gante in https://github.com/huggingface/transformers/pull/15755
- Gelu10 by @mfuntowicz in https://github.com/huggingface/transformers/pull/15676
- fixed pipeline code by @Moumeneb1 in https://github.com/huggingface/transformers/pull/15607
- Fix typo on examples/pytorch/question-answering by @dreamgonfly in https://github.com/huggingface/transformers/pull/15644
- Cleanup transformers-cli by @julien-c in https://github.com/huggingface/transformers/pull/15767
- Fix `HfArgumentParser` when passing a generator by @bryant1410 in https://github.com/huggingface/transformers/pull/15758
- Adding ZeroShotImageClassificationPipeline by @Narsil in https://github.com/huggingface/transformers/pull/12119
- [M2M100, XGLM] fix create_position_ids_from_inputs_embeds by @patil-suraj in https://github.com/huggingface/transformers/pull/15751
- Supporting Merges.txt files that contain an endline (`hf-internal-testing/tiny-clip` for instance) by @Narsil in https://github.com/huggingface/transformers/pull/15782
- [CLIP] fix gradient checkpointing by @patil-suraj in https://github.com/huggingface/transformers/pull/15789
- [ViLT] Fix checkpoint url in config by @patil-suraj in https://github.com/huggingface/transformers/pull/15790
- Enable `image-segmentation` on `AutoModelForSemanticSegmentation` by @Narsil in https://github.com/huggingface/transformers/pull/15647
- [doc] custom_models: mention security features of the Hub by @julien-c in https://github.com/huggingface/transformers/pull/15768
- [Wav2Vec2FeatureExtractor] Align documentation with code by @lsb in https://github.com/huggingface/transformers/pull/15468
- HTML dev docs by @coyotte508 in https://github.com/huggingface/transformers/pull/15678
- Fix indent in doc-builder CI by @coyotte508 in https://github.com/huggingface/transformers/pull/15798
- [Test refactor 1/5] Per-folder tests reorganization by @LysandreJik in https://github.com/huggingface/transformers/pull/15725
- [Test refactor 2/5] Tests fetcher by @LysandreJik in https://github.com/huggingface/transformers/pull/15726
- [Test refactor 3/5] Notification service improvement by @LysandreJik in https://github.com/huggingface/transformers/pull/15727
- [Test refactor 4/5] Improve the scheduled tests by @LysandreJik in https://github.com/huggingface/transformers/pull/15728
- [Test refactor 5/5] Build docker images by @LysandreJik in https://github.com/huggingface/transformers/pull/15729
- Fix build_documentation CI by @coyotte508 in https://github.com/huggingface/transformers/pull/15803
- Fix model templates by @LysandreJik in https://github.com/huggingface/transformers/pull/15806
- Fix add-new-model-like when old model checkpoint is not found by @sgugger in https://github.com/huggingface/transformers/pull/15805
- Fix from_pretrained with default base_model_prefix by @sgugger in https://github.com/huggingface/transformers/pull/15814
- Revert changes in logit size for semantic segmentation models by @sgugger in https://github.com/huggingface/transformers/pull/15722
- [Unispeech] Fix slow tests by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15818
- [Barthez Tokenizer] Fix saving by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15815
- [TFXLNet] Correct tf xlnet generate by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15822
- Fixes the "push" CI run by @LysandreJik in https://github.com/huggingface/transformers/pull/15807
- Fix semantic segmentation pipeline test by @sgugger in https://github.com/huggingface/transformers/pull/15826
- Fix dummy_inputs() to dummy_inputs in symbolic_trace doc string by @pbelevich in https://github.com/huggingface/transformers/pull/15776
- Add model specific output classes to PoolFormer model docs by @heytanay in https://github.com/huggingface/transformers/pull/15746
- HFTracer.trace should use self.graph to be compatible with torch.fx.Tracer by @pbelevich in https://github.com/huggingface/transformers/pull/15824
- Fix tf.concatenate + test past_key_values for TF models by @ydshieh in https://github.com/huggingface/transformers/pull/15774
- [examples/summarization and translation] fix readme by @patil-suraj in https://github.com/huggingface/transformers/pull/15833
- Add ONNX Runtime quantization for text classification notebook by @echarlaix in https://github.com/huggingface/transformers/pull/15817
- Re-enable doctests for the quicktour by @sgugger in https://github.com/huggingface/transformers/pull/15828
- Framework split model report by @LysandreJik in https://github.com/huggingface/transformers/pull/15825
- [UniSpeechSat] Revert previous incorrect change of slow tests by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15847
- Flax Speech-Encoder-Decoder Model by @sanchit-gandhi in https://github.com/huggingface/transformers/pull/15613
- Fix (deprecated) ONNX exporter to account for new tf2onnx API by @lewtun in https://github.com/huggingface/transformers/pull/15856
- Fixing the timestamps with chunking. by @Narsil in https://github.com/huggingface/transformers/pull/15843
- [TF-PT-Tests] Fix PyTorch - TF tests for different GPU devices by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15846
- [Benchmark tools] Deprecate all by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15848
- Add PT + TF automatic builds by @LysandreJik in https://github.com/huggingface/transformers/pull/15860
- Update TF LM examples by @gante in https://github.com/huggingface/transformers/pull/15855
- [ViLT] Add link to notebooks by @NielsRogge in https://github.com/huggingface/transformers/pull/15791
- Scatter should run on CUDA by @LysandreJik in https://github.com/huggingface/transformers/pull/15872
- [vision] Add problem_type support by @NielsRogge in https://github.com/huggingface/transformers/pull/15851
- use python 3.7 for flax self-push tests by @patil-suraj in https://github.com/huggingface/transformers/pull/15865
- Bump up doc node version to 16 by @mishig25 in https://github.com/huggingface/transformers/pull/15874
- No self-hosted by @LysandreJik in https://github.com/huggingface/transformers/pull/15710
- fix deepspeed tests by @stas00 in https://github.com/huggingface/transformers/pull/15881
- Remove stash for now by @LysandreJik in https://github.com/huggingface/transformers/pull/15882
- M2M100 support for ONNX export by @michaelbenayoun in https://github.com/huggingface/transformers/pull/15193
- [Bart] Fix implementation note doc by @patrickvonplaten in https://github.com/huggingface/transformers/pull/15879
- Add TF generate sample tests with all logit processors by @gante in https://github.com/huggingface/transformers/pull/15852
- TF: Update QA example by @gante in https://github.com/huggingface/transformers/pull/15870
- Updates in Trainer to support new features in SM Model Parallel library by @rahul003 in https://github.com/huggingface/transformers/pull/15877
- Fix tiny typo in docs by @rhjohnstone in https://github.com/huggingface/transformers/pull/15884
- Fix Bug in FlaxWav2Vec2 Slow Test by @sanchit-gandhi in https://github.com/huggingface/transformers/pull/15887
- [SegFormer] Add deprecation warning by @NielsRogge in https://github.com/huggingface/transformers/pull/15889
- TF generate refactor - Sample by @gante in https://github.com/huggingface/transformers/pull/15793
- [XGLM] run sampling test on CPU to be deterministic by @patil-suraj in https://github.com/huggingface/transformers/pull/15892
- Fix SegformerForImageClassification by @NielsRogge in https://github.com/huggingface/transformers/pull/15895
- Update delete-dev-doc job to match build-dev-doc by @sgugger in https://github.com/huggingface/transformers/pull/15891
Impressive community contributors
The community contributors below have significantly contributed to the v4.17.0 release. Thank you!
- @sayakpaul, for contributing the TensorFlow version of ConvNeXT
- @gchhablani, for contributing PLBart
- @edugp, for contributing Data2Vec
New Contributors
- @Soonhwan-Kwon made their first contribution in https://github.com/huggingface/transformers/pull/13727
- @jonatasgrosman made their first contribution in https://github.com/huggingface/transformers/pull/15428
- @ToluClassics made their first contribution in https://github.com/huggingface/transformers/pull/15432
- @peregilk made their first contribution in https://github.com/huggingface/transformers/pull/15423
- @bugface made their first contribution in https://github.com/huggingface/transformers/pull/15480
- @AyushExel made their first contribution in https://github.com/huggingface/transformers/pull/14582
- @thinksoso made their first contribution in https://github.com/huggingface/transformers/pull/15403
- @davidleonfdez made their first contribution in https://github.com/huggingface/transformers/pull/15473
- @sanchit-gandhi made their first contribution in https://github.com/huggingface/transformers/pull/15519
- @arron1227 made their first contribution in https://github.com/huggingface/transformers/pull/15084
- @cimeister made their first contribution in https://github.com/huggingface/transformers/pull/15504
- @cwkeam made their first contribution in https://github.com/huggingface/transformers/pull/15416
- @Albertobegue made their first contribution in https://github.com/huggingface/transformers/pull/13831
- @derenrich made their first contribution in https://github.com/huggingface/transformers/pull/15614
- @tkukurin made their first contribution in https://github.com/huggingface/transformers/pull/15636
- @muzhi1991 made their first contribution in https://github.com/huggingface/transformers/pull/15638
- @versae made their first contribution in https://github.com/huggingface/transformers/pull/15590
- @jonrbates made their first contribution in https://github.com/huggingface/transformers/pull/15617
- @arampacha made their first contribution in https://github.com/huggingface/transformers/pull/15413
- @FrancescoSaverioZuppichini made their first contribution in https://github.com/huggingface/transformers/pull/15657
- @coyotte508 made their first contribution in https://github.com/huggingface/transformers/pull/15680
- @heytanay made their first contribution in https://github.com/huggingface/transformers/pull/15531
- @gautierdag made their first contribution in https://github.com/huggingface/transformers/pull/15702
- @SSardorf made their first contribution in https://github.com/huggingface/transformers/pull/15741
- @Crabzmatic made their first contribution in https://github.com/huggingface/transformers/pull/15740
- @dreamgonfly made their first contribution in https://github.com/huggingface/transformers/pull/15644
- @lsb made their first contribution in https://github.com/huggingface/transformers/pull/15468
- @pbelevich made their first contribution in https://github.com/huggingface/transformers/pull/15776
- @sayakpaul made their first contribution in https://github.com/huggingface/transformers/pull/15750
- @rahul003 made their first contribution in https://github.com/huggingface/transformers/pull/15877
- @rhjohnstone made their first contribution in https://github.com/huggingface/transformers/pull/15884
Full Changelog: https://github.com/huggingface/transformers/compare/v4.16.0...v4.17.0