
v1.12.0rc1

deepset-ai/haystack

Released: 2022-12-19 17:40:16


⭐ Highlights

Large Language Models with PromptNode

Introducing PromptNode, a new feature that brings the power of large language models (LLMs) to various NLP tasks. PromptNode is an easy-to-use, customizable node you can run on its own or in a pipeline. We've designed the API to be user-friendly and suitable for everyday experimentation, but also fully compatible with production-grade Haystack deployments.

By setting a prompt template for a PromptNode you define what task you want it to do. This way, you can have multiple PromptNodes in your pipeline, each performing a different task. But that's not all. You can also inject the output of one PromptNode into the input of another one.
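The template idea itself is simple and can be sketched in plain Python: a template defines the task, and filling in the input produces the prompt that is sent to the model. The template texts and the `fill` helper below are illustrative stand-ins, not Haystack's built-in templates or API.

```python
from string import Template

# Each template defines one task; the placeholder is filled with the input
# (or with the output of a previous node, which is how chaining works).
question_template = Template("Answer the following question: $query")
summarize_template = Template("Summarize this text in one sentence: $documents")

def fill(template: Template, **kwargs) -> str:
    # Render the template with the provided inputs to produce the final prompt.
    return template.substitute(**kwargs)

prompt = fill(question_template, query="What is the capital of Germany?")
print(prompt)  # Answer the following question: What is the capital of Germany?
```

Chaining then amounts to passing one node's generated text in as the next template's input.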

Out of the box, we support both Google T5 Flan and OpenAI GPT-3 models, and you can even mix and match these models in your pipelines.

from haystack.nodes.prompt import PromptNode

# Initialize the node:
prompt_node = PromptNode("google/flan-t5-base")  # try also 'text-davinci-003' if you have an OpenAI key

prompt_node("What is the capital of Germany?")

This node can do a lot more than simply query LLMs: it can manage prompt templates, run batches, share models among instances, be chained together in pipelines, and more. Check its documentation for details!

Support for BM25Retriever in InMemoryDocumentStore

InMemoryDocumentStore has always been the go-to document store for small prototypes. The addition of BM25 support makes it officially one of the document stores to support all Retrievers available to Haystack, just like FAISS and Elasticsearch-like stores, but without the external dependencies. Don't use it in your million-documents-throughput deployments to production, though. It's not the fastest document store out there.
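For intuition, here is a toy sketch of the Okapi BM25 scoring formula that keyword retrieval of this kind is based on. The tokenization and the `k1`/`b` hyperparameters are the textbook defaults; this is an illustration of the ranking idea, not Haystack's actual implementation.

```python
from collections import Counter
from math import log

def bm25_scores(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each document against the query with a minimal Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    avg_len = sum(len(t) for t in tokenized) / len(tokenized)
    n_docs = len(tokenized)
    scores = []
    for tokens in tokenized:
        tf = Counter(tokens)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for t in tokenized if term in t)  # document frequency
            if df == 0:
                continue
            idf = log((n_docs - df + 0.5) / (df + 0.5) + 1)
            f = tf[term]  # term frequency in this document
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(tokens) / avg_len))
        scores.append(score)
    return scores

docs = [
    "Berlin is the capital of Germany",
    "Paris is the capital of France",
    "Germany borders France",
]
print(bm25_scores("capital of Germany", docs))
```

The document containing all query terms scores highest, without any embedding model or external service.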

:trophy: Honorable mention to @anakin87 for this outstanding contribution, among many many others! :trophy:

Haystack is always open to external contributions, and every little bit is appreciated. Don't know where to start? Have a look at the Contributors Guidelines.

Extended support for Cohere and OpenAI embeddings

We enabled EmbeddingRetriever to use the latest Cohere multilingual embedding models and OpenAI embedding models.

Simply use the model's full name (along with your API key) in EmbeddingRetriever to get them:

from haystack.nodes import EmbeddingRetriever

# Cohere
retriever = EmbeddingRetriever(embedding_model="multilingual-22-12", batch_size=16, api_key=api_key)
# OpenAI
retriever = EmbeddingRetriever(embedding_model="text-embedding-ada-002", batch_size=32, api_key=api_key, max_seq_len=8191)

Speeding up dense searches in batch mode (Elasticsearch and OpenSearch)

Whenever you need to execute multiple dense searches at once, ElasticsearchDocumentStore and OpenSearchDocumentStore can now do it in parallel. This not only speeds up run_batch and eval_batch for dense pipelines when used with those document stores but also significantly speeds up multi-embedding retrieval pipelines like, for example, MostSimilarDocumentsPipeline.

In our benchmarks, this yielded a speedup of up to 49% on a realistic dataset.

Under the hood, our newly introduced query_by_embedding_batch document store function uses msearch to unleash the full power of your Elasticsearch/OpenSearch cluster.
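The batching trick is that msearch accepts one NDJSON body containing many searches. The sketch below shows what packing several embedding queries into a single request body can look like; the index name, field name, and `build_msearch_body` helper are hypothetical, not Haystack internals.

```python
import json

def build_msearch_body(index: str, embeddings: list[list[float]], top_k: int = 10) -> str:
    """Pack one dense query per embedding into a single _msearch NDJSON body."""
    lines = []
    for emb in embeddings:
        lines.append(json.dumps({"index": index}))  # header line per search
        lines.append(json.dumps({
            "size": top_k,
            "query": {
                "script_score": {
                    "query": {"match_all": {}},
                    "script": {
                        # 'embedding' is an assumed dense_vector field name
                        "source": "cosineSimilarity(params.query_vector, 'embedding') + 1.0",
                        "params": {"query_vector": emb},
                    },
                }
            },
        }))
    return "\n".join(lines) + "\n"  # msearch bodies must end with a newline

body = build_msearch_body("documents", [[0.1, 0.2], [0.3, 0.4]], top_k=5)
print(body)
```

One round trip to the cluster then replaces N sequential queries, which is where the batch-mode speedup comes from.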

:warning: Deprecated Docker images discontinued

1.12 is the last release we're shipping with the old Docker images deepset/haystack-cpu, deepset/haystack-gpu, and their associated tags. We'll remove the corresponding deprecated Docker files /Dockerfile, /Dockerfile-GPU, and /Dockerfile-GPU-minimal from the codebase after the release.

What's Changed

Pipeline

DocumentStores

Documentation

Contributors to Tutorials

Other Changes

New Contributors

Full Changelog: https://github.com/deepset-ai/haystack/compare/v1.11.1...v1.12.0rc1
