v0.0.61
Released: 2024-12-11 04:50:33
Latest release of meta-llama/llama-stack: v0.0.63 (2024-12-18 15:17:43)
What's Changed
- add NVIDIA NIM inference adapter by @mattf in https://github.com/meta-llama/llama-stack/pull/355
- TGI fixture by @dineshyv in https://github.com/meta-llama/llama-stack/pull/519
- Fix tests & move braintrust api_keys to request headers by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/535
- allow env NVIDIA_BASE_URL to set NVIDIAConfig.url by @mattf in https://github.com/meta-llama/llama-stack/pull/531
- move playground ui to llama-stack repo by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/536
- fix[documentation]: Update links to point to correct pages by @sablair in https://github.com/meta-llama/llama-stack/pull/549
- Fix URLs to Llama Stack Read the Docs Webpages by @JeffreyLind3 in https://github.com/meta-llama/llama-stack/pull/547
- Fix Zero to Hero README.md Formatting by @JeffreyLind3 in https://github.com/meta-llama/llama-stack/pull/546
- Guide readme fix by @raghotham in https://github.com/meta-llama/llama-stack/pull/552
- Fix broken Ollama link by @aidando73 in https://github.com/meta-llama/llama-stack/pull/554
- update client cli docs by @dineshyv in https://github.com/meta-llama/llama-stack/pull/560
- reduce the accuracy requirements to pass the chat completion structured output test by @mattf in https://github.com/meta-llama/llama-stack/pull/522
- removed assertion in ollama.py and fixed typo in the readme by @wukaixingxp in https://github.com/meta-llama/llama-stack/pull/563
- Cerebras Inference Integration by @henrytwo in https://github.com/meta-llama/llama-stack/pull/265
- unregister API for dataset by @sixianyi0721 in https://github.com/meta-llama/llama-stack/pull/507
- [llama stack ui] add native eval & inspect distro & playground pages by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/541
- Telemetry API redesign by @dineshyv in https://github.com/meta-llama/llama-stack/pull/525
- Introduce GitHub Actions Workflow for Llama Stack Tests by @ConnorHack in https://github.com/meta-llama/llama-stack/pull/523
- specify the client version that works for current together server by @jeffxtang in https://github.com/meta-llama/llama-stack/pull/566
- remove unused telemetry related code by @dineshyv in https://github.com/meta-llama/llama-stack/pull/570
- Fix up safety client for versioned API by @stevegrubb in https://github.com/meta-llama/llama-stack/pull/573
- Add eval/scoring/datasetio API providers to distribution templates & UI developer guide by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/564
- Add ability to query and export spans to dataset by @dineshyv in https://github.com/meta-llama/llama-stack/pull/574
- Renames otel config from jaeger to otel by @codefromthecrypt in https://github.com/meta-llama/llama-stack/pull/569
- add telemetry docs by @dineshyv in https://github.com/meta-llama/llama-stack/pull/572
- Console span processor improvements by @dineshyv in https://github.com/meta-llama/llama-stack/pull/577
- doc: quickstart guide errors by @aidando73 in https://github.com/meta-llama/llama-stack/pull/575
- Add kotlin docs by @Riandy in https://github.com/meta-llama/llama-stack/pull/568
- Update android_sdk.md by @Riandy in https://github.com/meta-llama/llama-stack/pull/578
- Bump kotlin docs to 0.0.54.1 by @Riandy in https://github.com/meta-llama/llama-stack/pull/579
- Make LlamaStackLibraryClient work correctly by @ashwinb in https://github.com/meta-llama/llama-stack/pull/581
- Update integration type for Cerebras to hosted by @henrytwo in https://github.com/meta-llama/llama-stack/pull/583
- Use customtool's get_tool_definition to remove duplication by @jeffxtang in https://github.com/meta-llama/llama-stack/pull/584
- [#391] Add support for json structured output for vLLM by @aidando73 in https://github.com/meta-llama/llama-stack/pull/528
- Fix Jaeger instructions by @yurishkuro in https://github.com/meta-llama/llama-stack/pull/580
- fix telemetry import by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/585
- update template run.yaml to include openai api key for braintrust by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/590
- add tracing to library client by @dineshyv in https://github.com/meta-llama/llama-stack/pull/591
- Fixes for library client by @ashwinb in https://github.com/meta-llama/llama-stack/pull/587
- Fix issue 586 by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/594
New Contributors
- @sablair made their first contribution in https://github.com/meta-llama/llama-stack/pull/549
- @JeffreyLind3 made their first contribution in https://github.com/meta-llama/llama-stack/pull/547
- @aidando73 made their first contribution in https://github.com/meta-llama/llama-stack/pull/554
- @henrytwo made their first contribution in https://github.com/meta-llama/llama-stack/pull/265
- @sixianyi0721 made their first contribution in https://github.com/meta-llama/llama-stack/pull/507
- @ConnorHack made their first contribution in https://github.com/meta-llama/llama-stack/pull/523
- @yurishkuro made their first contribution in https://github.com/meta-llama/llama-stack/pull/580
Full Changelog: https://github.com/meta-llama/llama-stack/compare/v0.0.55...v0.0.61