mistralai/mistral-inference v1.0.4

Released: 2024-05-23 00:30:02

Latest release of mistralai/mistral-inference: v1.3.0 (2024-07-18 22:01:35)

Mistral-inference is the official inference library for all Mistral models: 7B, 8x7B, 8x22B.

Install with:

pip install mistral-inference
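
The model weights and tokenizer file are distributed separately; download and extract them first, then point the placeholder paths in the snippet below at the extracted files.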

Run with:

from mistral_inference.model import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.protocol.instruct.tool_calls import Function, Tool

tokenizer = MistralTokenizer.from_file("/path/to/tokenizer/file")  # change to extracted tokenizer file
model = Transformer.from_folder("./path/to/model/folder")  # change to extracted model dir

completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the users location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
    ],
)

tokens = tokenizer.encode_chat_completion(completion_request).tokens

# Greedy decoding (temperature=0.0), stopping at EOS or after max_tokens=64
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
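
Because the request includes tools, the decoded result is a tool call rather than a natural-language answer. As a minimal follow-up sketch (not part of this release; the [TOOL_CALLS] prefix handling and the get_current_weather stub below are assumptions), you could parse the call and dispatch it to a local function:

import json

def get_current_weather(location: str, format: str) -> str:
    # Hypothetical stand-in; a real implementation would query a weather API.
    return f"It is 20 degrees {format} in {location}."

# Assumption: the decoded output is a JSON list of calls, possibly prefixed
# by the special [TOOL_CALLS] token depending on tokenizer/decode settings.
payload = result.split("[TOOL_CALLS]")[-1].strip()
for call in json.loads(payload):
    if call["name"] == "get_current_weather":
        print(get_current_weather(**call["arguments"]))

In a full loop, the function's return value would typically go back to the model as a follow-up message (mistral_common provides a ToolMessage type for this).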
