
Cortex.cpp

Documentation - API Reference - Changelog - Bug reports - Discord

Cortex.cpp is currently in active development.

Overview

Cortex is a Local AI API Platform for running and customizing LLMs.

Key Features:

  • Straightforward CLI (inspired by Ollama)
  • Full C++ implementation, packageable into Desktop and Mobile apps
  • Pull from Huggingface, or Cortex Built-in Models
  • Models stored in universal file formats (vs blobs)
  • Swappable Engines (default: llamacpp, future: ONNXRuntime, TensorRT-LLM)
  • Cortex can be deployed as a standalone API server, or integrated into apps like Jan.ai

Cortex's roadmap is to implement the full OpenAI API including Tools, Runs, Multi-modal and Realtime APIs.

Local Installation

Cortex has a Local Installer that packages all required dependencies, so no internet connection is needed during installation.

Cortex also has a Network Installer, which downloads the necessary dependencies from the internet during installation.

Windows: cortex-windows-local-installer.exe

MacOS (Silicon/Intel): cortex-mac-local-installer.pkg

Linux: cortex-linux-local-installer.deb

  • For Linux: download the installer and run the following command in a terminal:
    sudo apt install ./cortex-linux-local-installer.deb
  • The binary will be installed in the /usr/bin/ directory.
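
To sanity-check the installation, you can confirm the binary's location and that it responds (a minimal sketch; the exact help output varies by version):

which cortex       # expected: /usr/bin/cortex
cortex --help      # lists the available subcommands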

Usage

CLI

After installation, you can run Cortex.cpp from the command line by typing cortex --help.

cortex pull llama3.2                                    # download a built-in model
cortex pull bartowski/Meta-Llama-3.1-8B-Instruct-GGUF   # download a GGUF model from Hugging Face
cortex run llama3.2                                     # start chatting with a model
cortex models ps                                        # list running models
cortex models stop llama3.2                             # stop a running model
cortex stop                                             # shut down Cortex

Refer to our Quickstart and CLI documentation for more details.

API

Cortex.cpp includes a REST API accessible at localhost:39281.

Refer to our API documentation for more details.
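
For example, a chat request against the local server could look like the sketch below. This assumes the OpenAI-compatible /v1/chat/completions route and that llama3.2 has already been pulled and started; see the API documentation for the authoritative schema:

curl http://localhost:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'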

Models

Cortex.cpp allows users to pull models from multiple Model Hubs, offering flexibility and extensive model access.

Currently Cortex supports pulling from:

  • Hugging Face: GGUF models, e.g. author/Model-GGUF
  • Cortex Built-in Models

Once downloaded, the model's .gguf and model.yml files are stored in ~/cortexcpp/models.

Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
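
For example, after pulling a model you can inspect the downloaded files directly (a sketch assuming the default data directory noted above):

ls ~/cortexcpp/models                # downloaded model files
cat ~/cortexcpp/models/*/model.yml   # per-model configuration (layout may vary by version)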

Cortex Built-in Models & Quantizations

Model /Engine    llama.cpp Command
phi-3.5          cortex run phi3.5
llama3.2         cortex run llama3.2
llama3.1         cortex run llama3.1
codestral        cortex run codestral
gemma2           cortex run gemma2
mistral          cortex run mistral
ministral        cortex run ministral
qwen2            cortex run qwen2.5
openhermes-2.5   cortex run openhermes-2.5
tinyllama        cortex run tinyllama

View all Cortex Built-in Models.

Cortex supports multiple quantizations for each model.

❯ cortex-nightly pull llama3.2
Downloaded models:
    llama3.2:3b-gguf-q2-k

Available to download:
    1. llama3.2:3b-gguf-q3-kl
    2. llama3.2:3b-gguf-q3-km
    3. llama3.2:3b-gguf-q3-ks
    4. llama3.2:3b-gguf-q4-km (default)
    5. llama3.2:3b-gguf-q4-ks
    6. llama3.2:3b-gguf-q5-km
    7. llama3.2:3b-gguf-q5-ks
    8. llama3.2:3b-gguf-q6-k
    9. llama3.2:3b-gguf-q8-0

Select a model (1-9): 
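
A specific quantization can also be pulled directly by its full ID, skipping the interactive prompt (assuming the CLI accepts the model:variant form shown in the listing):

cortex pull llama3.2:3b-gguf-q8-0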

Advanced Installation

Network Installer (Stable)

Cortex.cpp is available with a Network Installer, which is smaller but requires an internet connection during installation to download the necessary dependencies.

Windows: cortex-windows-network-installer.exe

MacOS (Universal): cortex-mac-network-installer.pkg

Linux: cortex-linux-network-installer.deb

Beta & Nightly Versions

Cortex releases two preview versions for advanced users to try new features early (we appreciate your feedback!):

  • Beta (early preview)
    • CLI command: cortex-beta
  • Nightly (released every night)
    • CLI Command: cortex-nightly
    • Nightly automatically pulls the latest changes from the upstream llama.cpp repo, opens a PR, and runs tests.
    • If all tests pass, the PR is automatically merged into our repo with the latest llama.cpp version.

Local Installer (Default)

Version                  Windows                                       MacOS                                    Linux
Beta (Preview)           cortex-beta-windows-local-installer.exe      cortex-beta-mac-local-installer.pkg     cortex-beta-linux-local-installer.deb
Nightly (Experimental)   cortex-nightly-windows-local-installer.exe   cortex-nightly-mac-local-installer.pkg  cortex-nightly-linux-local-installer.deb

Network Installer

Version                  Windows                                         MacOS                                      Linux
Beta (Preview)           cortex-beta-windows-network-installer.exe      cortex-beta-mac-network-installer.pkg     cortex-beta-linux-network-installer.deb
Nightly (Experimental)   cortex-nightly-windows-network-installer.exe   cortex-nightly-mac-network-installer.pkg  cortex-nightly-linux-network-installer.deb

Build from Source

Windows

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine folder.
  3. Configure vcpkg:
cd vcpkg
./bootstrap-vcpkg.bat
vcpkg install
  4. Build Cortex.cpp inside the engine/build folder:
mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static
cmake --build . --config Release
  5. Verify that Cortex.cpp is installed correctly by getting help information:
cortex -h

MacOS

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine folder.
  3. Configure vcpkg:
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
  4. Build Cortex.cpp inside the engine/build folder:
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
  5. Verify that Cortex.cpp is installed correctly by getting help information:
cortex -h

Linux

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine folder.
  3. Configure vcpkg:
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
  4. Build Cortex.cpp inside the engine/build folder:
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder_in_cortex_repo/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
  5. Verify that Cortex.cpp is installed correctly by getting help information:
cortex -h

Uninstallation

Windows

  1. Open the Windows Control Panel.
  2. Navigate to Add or Remove Programs.
  3. Search for cortexcpp and double-click to uninstall. (For beta and nightly builds, search for cortexcpp-beta and cortexcpp-nightly respectively.)

MacOS

Run the uninstaller script:

sudo sh cortex-uninstall.sh

For MacOS, an uninstaller script is bundled with the binary and installed to the /usr/local/bin/ directory. The script is named cortex-uninstall.sh for stable builds, cortex-beta-uninstall.sh for beta builds, and cortex-nightly-uninstall.sh for nightly builds.
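
Since the scripts are installed to /usr/local/bin/, the beta and nightly variants can be invoked the same way:

sudo sh /usr/local/bin/cortex-beta-uninstall.sh      # beta builds
sudo sh /usr/local/bin/cortex-nightly-uninstall.sh   # nightly builds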

Linux

sudo apt remove cortexcpp

Contact Support

  • For bug reports, file a GitHub issue.
  • For questions, join our Discord.
