2.15.0
Release date: 2024-02-06 22:03:02
Latest huggingface/transformers.js release: 2.17.2 (2024-05-29 22:36:30)
What's new?
🤖 Qwen1.5 Chat models (0.5B and 1.8B)
Yesterday, the Qwen team (Alibaba Group) released the Qwen1.5 series of chat models. As part of the release, they published several sub-2B-parameter models, including Qwen/Qwen1.5-0.5B-Chat and Qwen/Qwen1.5-1.8B-Chat, which both demonstrate strong performance despite their small sizes. The best part? They can run in the browser with Transformers.js (PR)! 🚀 See here for the full list of supported models.
Example: Text generation with Xenova/Qwen1.5-0.5B-Chat.
```js
import { pipeline } from '@xenova/transformers';

// Create text-generation pipeline
const generator = await pipeline('text-generation', 'Xenova/Qwen1.5-0.5B-Chat');

// Define the prompt and list of messages
const prompt = "Give me a short introduction to large language model.";
const messages = [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": prompt }
];

// Apply chat template
const text = generator.tokenizer.apply_chat_template(messages, {
    tokenize: false,
    add_generation_prompt: true,
});

// Generate text
const output = await generator(text, {
    max_new_tokens: 128,
    do_sample: false,
});
console.log(output[0].generated_text);
// 'A large language model is a type of artificial intelligence system that can generate text based on the input provided by users, such as books, articles, or websites. It uses advanced algorithms and techniques to learn from vast amounts of data and improve its performance over time through machine learning and natural language processing (NLP). Large language models have become increasingly popular in recent years due to their ability to handle complex tasks such as generating human-like text quickly and accurately. They have also been used in various fields such as customer service chatbots, virtual assistants, and search engines for information retrieval purposes.'
```
🧍 MODNet for Portrait Image Matting
Next, we added support for MODNet, a small (but powerful) portrait image matting model (PR). Thanks to @cyio for the suggestion!
Example: Perform portrait image matting with Xenova/modnet.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load model and processor
const model = await AutoModel.from_pretrained('Xenova/modnet', { quantized: false });
const processor = await AutoProcessor.from_pretrained('Xenova/modnet');

// Load image from URL
const url = 'https://images.pexels.com/photos/5965592/pexels-photo-5965592.jpeg?auto=compress&cs=tinysrgb&w=1024';
const image = await RawImage.fromURL(url);

// Pre-process image
const { pixel_values } = await processor(image);

// Predict alpha matte
const { output } = await model({ input: pixel_values });

// Save output mask
const mask = await RawImage.fromTensor(output[0].mul(255).to('uint8')).resize(image.width, image.height);
mask.save('mask.png');
```
(Side-by-side comparison: input image vs. output mask.)
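To go a step further in the browser, the predicted matte can be used as the alpha channel of the original image to cut out the subject. The snippet below is a minimal, browser-only sketch under a few assumptions: it requires a DOM canvas and assumes the RawImage instance exposes a toCanvas() helper, so treat it as a starting point rather than the canonical API.

```js
// Browser-only sketch: apply the predicted matte as the image's alpha channel.
const canvas = document.createElement('canvas');
canvas.width = image.width;
canvas.height = image.height;
const ctx = canvas.getContext('2d');

// Draw the original image onto the canvas (toCanvas() is assumed to be available on RawImage)
ctx.drawImage(image.toCanvas(), 0, 0);

// Overwrite the alpha channel with the single-channel mask values
const pixelData = ctx.getImageData(0, 0, image.width, image.height);
for (let i = 0; i < mask.data.length; ++i) {
    pixelData.data[4 * i + 3] = mask.data[i];
}
ctx.putImageData(pixelData, 0, 0);
```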
🧠 New text embedding models
We also added support for several new text embedding models, including:
- bge-m3 by BAAI.
- nomic-embed-text-v1 by Nomic AI.
- jina-embeddings-v2-base-de and jina-embeddings-v2-base-zh by Jina AI.
Check out the links for example usage.
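For reference, usage follows the standard feature-extraction pipeline pattern. The snippet below is a minimal sketch; the Xenova/bge-m3 model id and the printed output shape are assumptions about the converted checkpoint, so substitute whichever of the models above you want to use.

```js
import { pipeline } from '@xenova/transformers';

// Create a feature-extraction pipeline (model id assumed: Xenova/bge-m3)
const extractor = await pipeline('feature-extraction', 'Xenova/bge-m3');

// Compute mean-pooled, L2-normalized sentence embeddings
const sentences = ['Hello world', 'Transformers.js runs in the browser'];
const embeddings = await extractor(sentences, { pooling: 'mean', normalize: true });

console.log(embeddings.dims); // e.g. [2, 1024]
```

The same pattern applies to the other embedding models listed above; only the model id changes.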
🛠️ Other improvements
- Fix example links in documentation (https://github.com/xenova/transformers.js/pull/550).
- Improve unknown model warnings (https://github.com/xenova/transformers.js/pull/554).
- Update `jsdoc-to-markdown` dev dependency (https://github.com/xenova/transformers.js/pull/574).
Full Changelog: https://github.com/xenova/transformers.js/compare/2.14.2...2.15.0