v0.0.62
Release date: 2024-12-18 10:39:43
Latest release of meta-llama/llama-stack: v0.0.63 (2024-12-18 15:17:43)
What's Changed
A few important updates, some of which are backwards incompatible. You must update your run.yamls when upgrading; as always, look to templates/<distro>/run.yaml for reference (a rough sketch of the overall shape follows the list below).
- Make embedding generation go through inference by @dineshyv in https://github.com/meta-llama/llama-stack/pull/606
- [/scoring] add ability to define aggregation functions for scoring functions & refactors by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/597
- Update the "InterleavedTextMedia" type by @ashwinb in https://github.com/meta-llama/llama-stack/pull/635
- [NEW!] Experimental post-training APIs! https://github.com/meta-llama/llama-stack/pull/540, https://github.com/meta-llama/llama-stack/pull/593, etc.
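For context when updating your run.yaml, here is a minimal, illustrative sketch of the general shape of such a file (provider wiring plus registered models). It is an assumption-laden example, not copied from any template: the exact section names, provider type strings, and the model fields implied by the embedding and model-type changes above may differ from your distro, so treat templates/<distro>/run.yaml in the repository as the source of truth.

```yaml
# Illustrative sketch only -- field names and values are assumptions,
# not taken from a real template; consult templates/<distro>/run.yaml
# for the authoritative schema of your distribution.
version: '2'
image_name: my-distro            # hypothetical distro name
apis:
- inference
- memory
- safety
- agents
providers:
  inference:
  - provider_id: my-inference    # hypothetical provider id
    provider_type: inline::meta-reference   # provider_type string is an assumption
    config: {}
models:
- model_id: meta-llama/Llama-3.1-8B-Instruct
  provider_id: my-inference
  model_type: llm                # model_type field per PR #588; values assumed
- model_id: all-MiniLM-L6-v2     # embeddings now routed through inference (PR #606)
  provider_id: my-inference
  model_type: embedding
shields: []
```

The part most likely to need attention when upgrading is the models list: with embedding generation going through inference, any embedding model previously configured elsewhere would now be registered against an inference provider, but verify the exact fields against your distro's template.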
A variety of fixes and enhancements. Some selected ones:
- [#342] RAG - fix PDF format in vector database by @aidando73 in https://github.com/meta-llama/llama-stack/pull/551
- add completion api support to nvidia inference provider by @mattf in https://github.com/meta-llama/llama-stack/pull/533
- add model type to APIs by @dineshyv in https://github.com/meta-llama/llama-stack/pull/588
- Allow using an "inline" version of Chroma using PersistentClient by @ashwinb in https://github.com/meta-llama/llama-stack/pull/567
- [docs] add playground ui docs by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/592
- add colab notebook & update docs by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/619
- [tests] add client-sdk pytests & delete client.py by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/638
- [bugfix] no shield_call when there's no shields configured by @yanxi0830 in https://github.com/meta-llama/llama-stack/pull/642
New Contributors
- @SLR722 made their first contribution in https://github.com/meta-llama/llama-stack/pull/540
- @iamarunbrahma made their first contribution in https://github.com/meta-llama/llama-stack/pull/636
Full Changelog: https://github.com/meta-llama/llama-stack/compare/v0.0.61...v0.0.62