v2.0.0
Release date: 2023-12-04 21:22:52
Latest mudler/LocalAI release: v2.23.0 (2024-11-11 01:07:39)
What's Changed
Breaking Changes 🛠
- :fire: add LLaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types by @mudler in https://github.com/mudler/LocalAI/pull/1254
- refactor: rename llama-stable to llama-ggml by @mudler in https://github.com/mudler/LocalAI/pull/1287
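The LLaVA/GPT vision support above exposes the OpenAI-style vision chat format, where a user message carries a list of `text` and `image_url` content parts. A minimal sketch of building such a request body follows; the model name `llava` and the image URL are placeholders, not names from this release.

```python
import json

def vision_chat_payload(model: str, prompt: str, image_url: str) -> dict:
    """Build an OpenAI-style chat completion body that attaches an image."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# "llava" is an assumed model name; use whatever your LocalAI model
# configuration actually registers.
payload = vision_chat_payload("llava", "What is in this image?",
                              "https://example.com/cat.png")
print(json.dumps(payload, indent=2))
```

POST a body like this to `/v1/chat/completions` on a running instance with a vision-capable model configured.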
Bug fixes :bug:
- fix: respect OpenAI spec for response format by @mudler in https://github.com/mudler/LocalAI/pull/1289
- fix: handle grpc and llama-cpp with REBUILD=true by @mudler in https://github.com/mudler/LocalAI/pull/1328
- fix: propagate CMAKE_ARGS when building grpc by @mudler in https://github.com/mudler/LocalAI/pull/1334
- fix(vall-e-x): correctly install reqs in environment by @mudler in https://github.com/mudler/LocalAI/pull/1377
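On the response-format fix: per the OpenAI spec, `response_format` is an object such as `{"type": "json_object"}`, not a bare string, and that is the shape the fix aligns with. A sketch of a conforming request body (model name is a placeholder):

```python
import json

payload = {
    "model": "my-model",  # placeholder; use a model configured in LocalAI
    "messages": [{"role": "user", "content": "List three colors as JSON."}],
    # Object form required by the OpenAI spec:
    "response_format": {"type": "json_object"},
}
print(json.dumps(payload))
```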
Exciting New Features 🎉
- feat(certificates): add support for custom CA certificates by @vitorstone in https://github.com/mudler/LocalAI/pull/880
- feat(conda): conda environments by @mudler in https://github.com/mudler/LocalAI/pull/1144
- refactor: move backends into the backends directory by @mudler in https://github.com/mudler/LocalAI/pull/1279
- feat: allow to run parallel requests by @mudler in https://github.com/mudler/LocalAI/pull/1290
- feat(transformers): add embeddings with Automodel by @mudler in https://github.com/mudler/LocalAI/pull/1308
- ci(core): add -core images without python deps by @mudler in https://github.com/mudler/LocalAI/pull/1309
- feat: initial watchdog implementation by @mudler in https://github.com/mudler/LocalAI/pull/1341
- feat: update whisper_cpp with CUBLAS, HIPBLAS, METAL, OPENBLAS, CLBLAST support by @wuxxin in https://github.com/mudler/LocalAI/pull/1302
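With parallel requests now supported (and queueing as the fallback, see #1296 below), a client can simply issue several completions concurrently. A minimal client-side sketch, assuming a LocalAI server on its default `localhost:8080` with a model registered as `llama` (both placeholders):

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib import request

BASE_URL = "http://localhost:8080"  # assumed default LocalAI address

def chat_payload(model: str, prompt: str) -> dict:
    """Build a minimal OpenAI-style chat completion request body."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str) -> dict:
    """POST one chat completion to a running LocalAI instance."""
    body = json.dumps(chat_payload("llama", prompt)).encode()
    req = request.Request(f"{BASE_URL}/v1/chat/completions", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

def chat_many(prompts):
    """Fire requests at once; a server with parallel requests enabled
    serves them concurrently, otherwise it queues them."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(chat, prompts))

if __name__ == "__main__":
    # Requires a running server with a model named "llama" configured.
    print(chat_many(["Hello!", "Name three planets."]))
```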
👒 Dependencies
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1231
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1236
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1285
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1288
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1291
Other Changes
- Update .gitignore for backend/llama.cpp by @dave-gray101 in https://github.com/mudler/LocalAI/pull/1235
- llama index example by @sfxworks in https://github.com/mudler/LocalAI/pull/1237
- Chainlit example by @sfxworks in https://github.com/mudler/LocalAI/pull/1238
- Fix bug #1196 by @diego-minguzzi in https://github.com/mudler/LocalAI/pull/1232
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1242
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1256
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1265
- deps(go-piper): update to 2023.11.6-3 by @M0Rf30 in https://github.com/mudler/LocalAI/pull/1257
- feat(llama.cpp): support lora with scale and yarn by @mudler in https://github.com/mudler/LocalAI/pull/1277
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1272
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1280
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1282
- feat: queue up requests if not running parallel requests by @mudler in https://github.com/mudler/LocalAI/pull/1296
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1297
- fix(api/config): allow YAML config with .yml by @Papawy in https://github.com/mudler/LocalAI/pull/1299
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1300
- llava.yaml (yaml format standardization) by @lunamidori5 in https://github.com/mudler/LocalAI/pull/1303
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1304
- :arrow_up: Update mudler/go-piper by @localai-bot in https://github.com/mudler/LocalAI/pull/1305
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1306
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1310
- fix: ExLlama Backend Context Size & Rope Scaling by @ok2sh in https://github.com/mudler/LocalAI/pull/1311
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1313
- docs: Initial import from localai-website by @mudler in https://github.com/mudler/LocalAI/pull/1312
- fix: move python header comments below shebang in some backends by @B4ckslash in https://github.com/mudler/LocalAI/pull/1321
- Feat: OSX Local Codesigning by @dave-gray101 in https://github.com/mudler/LocalAI/pull/1319
- docs: Add llava, update hot topics by @mudler in https://github.com/mudler/LocalAI/pull/1322
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1323
- docs: Update Features->Embeddings page to reflect backend restructuring by @B4ckslash in https://github.com/mudler/LocalAI/pull/1325
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1330
- fix: rename transformers.py to avoid circular import by @mudler in https://github.com/mudler/LocalAI/pull/1337
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1340
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1345
- feat(petals): add backend by @mudler in https://github.com/mudler/LocalAI/pull/1350
- fix: go-piper add libucd at linking time by @M0Rf30 in https://github.com/mudler/LocalAI/pull/1357
- docs: Add docker instructions, add community projects section in README by @mudler in https://github.com/mudler/LocalAI/pull/1359
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1351
- docs: Update getting started and GPU section by @mudler in https://github.com/mudler/LocalAI/pull/1362
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1363
- ci: limit concurrent jobs by @mudler in https://github.com/mudler/LocalAI/pull/1364
- fix/docs: Python backend dependencies by @B4ckslash in https://github.com/mudler/LocalAI/pull/1360
- ci: split into reusable workflows by @mudler in https://github.com/mudler/LocalAI/pull/1366
- fix: OSX Build Fix Part 1: Metal by @dave-gray101 in https://github.com/mudler/LocalAI/pull/1365
- docs: add fine-tuning example by @mudler in https://github.com/mudler/LocalAI/pull/1374
- docs: site/how-to clean up by @lunamidori5 in https://github.com/mudler/LocalAI/pull/1342
- :arrow_up: Update ggerganov/llama.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1375
- :arrow_up: Update ggerganov/whisper.cpp by @localai-bot in https://github.com/mudler/LocalAI/pull/1227
New Contributors
- @vitorstone made their first contribution in https://github.com/mudler/LocalAI/pull/880
- @sfxworks made their first contribution in https://github.com/mudler/LocalAI/pull/1237
- @diego-minguzzi made their first contribution in https://github.com/mudler/LocalAI/pull/1232
- @M0Rf30 made their first contribution in https://github.com/mudler/LocalAI/pull/1257
- @Papawy made their first contribution in https://github.com/mudler/LocalAI/pull/1299
- @ok2sh made their first contribution in https://github.com/mudler/LocalAI/pull/1311
- @B4ckslash made their first contribution in https://github.com/mudler/LocalAI/pull/1321
- @wuxxin made their first contribution in https://github.com/mudler/LocalAI/pull/1302
Full Changelog: https://github.com/mudler/LocalAI/compare/v1.40.0...v2.0.0
Downloads:
- local-ai-avx-Darwin-x86_64 (328.49 MB)
- local-ai-avx-Linux-x86_64 (307.25 MB)
- local-ai-avx2-Darwin-x86_64 (328.69 MB)
- local-ai-avx2-Linux-x86_64 (307.28 MB)
- local-ai-avx512-Darwin-x86_64 (328.88 MB)
- local-ai-avx512-Linux-x86_64 (307.31 MB)