
v4.17.0

huggingface/transformers

Release date: 2022-03-03 23:19:06


New models

XGLM

The XGLM model was proposed in Few-shot Learning with Multilingual Language Models by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O’Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.

XGLM is a GPT3-like multilingual model trained on a balanced corpus covering a diverse set of languages.
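As a rough illustration, the sketch below loads one of the released checkpoints (facebook/xglm-564M is assumed here) and generates a continuation; treat it as a minimal example rather than canonical usage.

```python
from transformers import XGLMTokenizer, XGLMForCausalLM

# facebook/xglm-564M is assumed to be the smallest released XGLM checkpoint
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-564M")

# XGLM is multilingual, so prompts in many languages are possible
inputs = tokenizer("La vie est belle parce que", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```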

ConvNext

The ConvNeXT model was proposed in A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.

ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them.
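A minimal image-classification sketch, assuming the facebook/convnext-tiny-224 checkpoint and a local image file:

```python
import torch
from PIL import Image
from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification

# facebook/convnext-tiny-224 is assumed to be one of the released ImageNet-1k checkpoints
feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-tiny-224")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")

image = Image.open("cat.jpg")  # placeholder path: any RGB image
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```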

PoolFormer

The PoolFormer model was proposed in MetaFormer is Actually What You Need for Vision by Sea AI Labs.

PLBart

The PLBART model was proposed in Unified Pre-training for Program Understanding and Generation by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang.

This is a BART-like model that can be used for code summarization, code generation, and code translation tasks. The pre-trained model plbart-base has been trained using a multilingual denoising task on Java, Python, and English.
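A minimal loading sketch for the plbart-base checkpoint named above. Since plbart-base is the denoising pre-trained model, the generate call here is illustrative only; the summarization and translation behaviour comes from fine-tuned variants.

```python
from transformers import PLBartTokenizer, PLBartForConditionalGeneration

tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base")
model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-base")

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```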

Data2Vec

The Data2Vec model was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli.

Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images. Importantly, predicted targets for pre-training are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets.
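A short sketch of the text variant, assuming facebook/data2vec-text-base is one of the released checkpoints:

```python
import torch
from transformers import AutoTokenizer, Data2VecTextModel

# facebook/data2vec-text-base is assumed to be one of the released text checkpoints
tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextModel.from_pretrained("facebook/data2vec-text-base")

inputs = tokenizer("data2vec learns contextualized targets.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)  # (batch, sequence_length, hidden_size)
```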

MaskFormer

The MaskFormer model was proposed in Per-Pixel Classification is Not All You Need for Semantic Segmentation by Bowen Cheng, Alexander G. Schwing, Alexander Kirillov.

MaskFormer addresses semantic segmentation with a mask classification paradigm instead of performing classic pixel-level classification.
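The sketch below runs a forward pass and prints the shapes of the per-query class and mask predictions that make up the mask-classification output; the facebook/maskformer-swin-base-ade checkpoint and the image path are assumptions.

```python
import torch
from PIL import Image
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation

# facebook/maskformer-swin-base-ade is assumed to be one of the released checkpoints
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")

image = Image.open("scene.jpg")  # placeholder path: any RGB image
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The model predicts a set of masks plus one class per mask (mask classification)
print(outputs.class_queries_logits.shape)  # (batch, num_queries, num_classes + 1)
print(outputs.masks_queries_logits.shape)  # (batch, num_queries, height, width)
```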

Code in the Hub

This is a new experimental feature added to the library. It allows you to share a custom model (with configuration, tokenizer, feature extractor, processor) with anyone through the Model Hub while still using the Auto-classes API of the Transformers library.
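In practice this means passing trust_remote_code=True to the Auto classes. The repo name below is a hypothetical example of a Hub repository that ships its own modeling code:

```python
from transformers import AutoModel, AutoTokenizer

# "user/my-custom-model" is a hypothetical Hub repo that ships its own modeling code.
# trust_remote_code=True downloads and executes that code, so only enable it for
# repositories whose authors you trust.
model = AutoModel.from_pretrained("user/my-custom-model", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("user/my-custom-model", trust_remote_code=True)
```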

See the documentation for more information!

Documentation

We are working on updating the existing guides in the documentation, and writing more!

Time Stamps for Speech models

Speech models that have been trained with the CTC loss (Wav2Vec2, XLS-R, HuBERT, WavLM, ...) can now output timestamps in addition to the transcription of the input audio. For example, one can retrieve the start and end time of every transcribed word via the Wav2Vec2CTCTokenizer.decode method or the Wav2Vec2ProcessorWithLM.decode method. See the documentation here and here respectively.

This feature can also be directly used via the ASR pipeline - see here and this example.
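A minimal sketch of the tokenizer route, assuming the facebook/wav2vec2-base-960h checkpoint; the silent placeholder waveform should be replaced with real 16 kHz audio to get meaningful offsets.

```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech = np.zeros(16_000, dtype=np.float32)  # placeholder: 1 s of silence, substitute real 16 kHz audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)

# output_word_offsets=True returns frame offsets per word; convert them to seconds
decoded = processor.tokenizer.decode(pred_ids[0], output_word_offsets=True)
time_per_frame = model.config.inputs_to_logits_ratio / processor.feature_extractor.sampling_rate
words = [
    {
        "word": w["word"],
        "start": round(w["start_offset"] * time_per_frame, 2),
        "end": round(w["end_offset"] * time_per_frame, 2),
    }
    for w in decoded.word_offsets
]
print(decoded.text, words)
```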

Breaking change

Unfortunately, some bugs had crept into CLIPTokenizerFast: the tokenization produced by CLIPTokenizer and CLIPTokenizerFast was not identical. CLIPTokenizerFast has been corrected to encode the text with the same strategy as CLIPTokenizer.

What does this mean for you? You need to use the tokenizer that was used to train the CLIP checkpoint you are using. For example:
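A minimal sketch using the public openai/clip-vit-base-patch32 checkpoint purely for illustration: from v4.17.0 on, the slow and fast tokenizers should produce the same encoding for it.

```python
from transformers import CLIPTokenizer, CLIPTokenizerFast

# openai/clip-vit-base-patch32 is a public CLIP checkpoint, used here only as an example
slow_tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
fast_tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")

text = "a photo of two cats"
# With v4.17.0, the two tokenizers are expected to encode text identically
assert slow_tokenizer(text)["input_ids"] == fast_tokenizer(text)["input_ids"]
```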

To make CLIPTokenizerFast identical to CLIPTokenizer, the template used to tokenize a sentence pair (A, B) has been modified. The previous template was <|startoftext|> A B <|endoftext|> and the new one is <|startoftext|> A <|endoftext|> <|endoftext|> B <|endoftext|>.
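The new pair template can be checked directly, again using openai/clip-vit-base-patch32 as an assumed example checkpoint:

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

ids_a = tokenizer("A", add_special_tokens=False)["input_ids"]
ids_b = tokenizer("B", add_special_tokens=False)["input_ids"]
pair_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

bos, eos = tokenizer.bos_token_id, tokenizer.eos_token_id
# New template: <|startoftext|> A <|endoftext|> <|endoftext|> B <|endoftext|>
assert pair_ids == [bos] + ids_a + [eos] + [eos] + ids_b + [eos]
```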

What's Changed

Impressive community contributors

The community contributors below have significantly contributed to the v4.17.0 release. Thank you!

- @sayakpaul, for contributing the TensorFlow version of ConvNext
- @gchhablani, for contributing PLBart
- @edugp, for contributing Data2Vec

New Contributors

Full Changelog: https://github.com/huggingface/transformers/compare/v4.16.0...v4.17.0

