v1.52.5-stable
Release date: 2024-11-14 14:07:53
Latest BerriAI/litellm release: v1.54.0 (2024-12-08 12:50:00)
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.52.5.staging1...v1.52.5-stable
Docker image: `ghcr.io/berriai/litellm:litellm_stable_nov12-stable`

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:litellm_stable_nov12-stable
```
What's Changed
- Litellm dev 11 11 2024 by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6693
  - fix(init.py): add 'watsonx_text' as mapped llm api route
  - fix(opentelemetry.py): fix passing parallel tool calls to otel
  - fix(init.py): update provider-model mapping to include all known provider-model mappings
  - feat(anthropic): support passing document in llm api call
  - docs(anthropic.md): add pdf anthropic call to docs + expose new 'supports_pdf_input' function
- Add docs to export logs to Laminar by @dinmukhamedm in https://github.com/BerriAI/litellm/pull/6674
- (Feat) Add langsmith key based logging by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6682
- (fix) OpenAI's optional messages[].name does not work with Mistral API by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6701
- (feat) add xAI on Admin UI by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6680
- (docs) add benchmarks on 1K RPS by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6704
- (feat) add cost tracking stable diffusion 3 on Bedrock by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6676
- fix raise correct error 404 when /key/info is called on non-existent key by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6653
- (feat) Add support for logging to GCS Buckets with folder paths by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6675
- (feat) add bedrock image gen async support by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6672
- (feat) Add Bedrock Stability.ai Stable Diffusion 3 Image Generation models by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6673
- (Feat) 273% improvement GCS Bucket Logger - use Batched Logging by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6679
- Litellm Minor Fixes & Improvements (11/08/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6658
  - fix(deepseek/chat): convert content list to str
  - test(test_deepseek_completion.py): implement base llm unit tests
  - fix(router.py): support content policy violation fallbacks with default fallbacks
  - fix(opentelemetry.py): refactor to move otel imports behind flag
  - fix(opentelemetry.py): close span on success completion
  - fix(user_api_key_auth.py): allow user_role to default to none
- (pricing): Fix multiple mistakes in Claude pricing by @Manouchehri in https://github.com/BerriAI/litellm/pull/6666
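The Anthropic document-passing feature from #6693 embeds a PDF as a base64 data URL inside an OpenAI-format message. A minimal sketch of building such a payload follows; the field layout and the commented-out model name reflect LiteLLM's OpenAI-compatible format as described in the linked docs, so treat the exact shape as an assumption and verify against the Anthropic docs page the PR adds:

```python
import base64


def build_pdf_message(pdf_bytes: bytes, question: str) -> list:
    """Build an OpenAI-format message list that embeds a PDF as a
    base64 data URL (assumed shape; see the anthropic.md docs from #6693)."""
    encoded = base64.b64encode(pdf_bytes).decode("utf-8")
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": f"data:application/pdf;base64,{encoded}"},
        ],
    }]


messages = build_pdf_message(b"%PDF-1.4 ...", "Summarize this document.")
# The actual call would then be roughly:
#   litellm.completion(model="anthropic/<model>", messages=messages)
# after checking the new litellm.supports_pdf_input(...) helper
# for the chosen model.
```

Only the payload construction is shown here; the network call is left commented out since model availability and pricing vary.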
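The deepseek/chat fix in #6658 converts an OpenAI-style content list into a plain string before sending it to the provider. A sketch of that kind of conversion is below; the function name is illustrative, not LiteLLM's internal helper:

```python
def flatten_content(content):
    """Collapse an OpenAI-style `content` field into a plain string.

    The OpenAI spec allows `content` to be either a string or a list of
    typed parts like [{"type": "text", "text": "..."}], but some providers
    (e.g. Deepseek) only accept a string.
    """
    if isinstance(content, str):
        return content
    # Keep only text parts; other part types (e.g. image_url) have no
    # plain-string equivalent and are skipped in this sketch.
    return "".join(
        part.get("text", "")
        for part in content
        if isinstance(part, dict) and part.get("type") == "text"
    )


print(flatten_content([{"type": "text", "text": "Hello, "},
                       {"type": "text", "text": "world"}]))
# → Hello, world
```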
New Contributors
- @dinmukhamedm made their first contribution in https://github.com/BerriAI/litellm/pull/6674
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Docker Run LiteLLM Proxy

```shell
docker run \
  -e STORE_MODEL_IN_DB=True \
  -p 4000:4000 \
  ghcr.io/berriai/litellm:main-v1.52.5-stable
```
Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 250.0 | 288.03 | 6.10 | 0.0 | 1824 | 0 | 215.18 | 3641.50 |
| Aggregated | Passed ✅ | 250.0 | 288.03 | 6.10 | 0.0 | 1824 | 0 | 215.18 | 3641.50 |
Attachments:
- load_test.html (1.59 MB)
- load_test_stats.csv (540 B)