nikolaydubina/llama2.go

Fork: 9 · Star: 192 (updated 2024-11-26 20:10:42)

license: MIT

Language: Go

LLaMA-2 in native Go

Latest release: v0.7.1 (2024-03-21 19:56:26)


llama2.go


This is a native Go inference engine for LLaMA-2, which as of 2023-08-19 was a state-of-the-art open-source large language model from Meta. It is ported from github.com/karpathy/llama2.c@bd18228 (as of 2023-08-19). Additional features may be added.

How to run?

  1. get tokenizer.bin from llama2.c
  2. get the weights: wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin
  3. go install github.com/nikolaydubina/llama2.go@latest
  4. llama2.go -checkpoint=stories110M.bin -prompt="good morning said sun to trees"
$ llama2.go -checkpoint=stories110M.bin -prompt="good morning said sun to trees"
2023/07/29 09:30:22 config: llama2.Config{Dim:768, HiddenDim:2048, NumLayers:12, NumHeads:12, NumKVHeads:12, VocabSize:32000, SeqLen:1024}
<s>
good morning said sun to trees: "Let's organize an operation!"
The trees clapped their branches and asked "What will we do?"
Badger smiled and replied "We will build a treehouse together!"
The trees got blocks of wood and started to build. Badger put nails in the tiny pieces of wood, while the trees put the blocks together to make a
 solid base. 
When they finished their treehouse, Goodger and the trees sat inside. Badger said, "Look how fancy we made it!"
The trees smiled and nodded. They said, "It's very fancy! Thank you for helping us organize this operation." 
Then they lived happily in their fancy treehouse together!
<s>
Once upon a time, there was a boy named Timmy. Timmy was very hungry and wanted to eat his meal. He asked his mom, "What are we having for dinner
?" His mom said, "We are having chicken and rice." Timmy said, "Yum! I love chicken and rice."
While they were eating, Timmy's dad came in and said, "Hey Timmy, do you want to watch a movie after
2023/07/29 09:30:58 achieved tok/s: 28.619646

Performance

| system | model | llama2.c | llama.cpp | llama2.go[^simple] | llama2.go[^fast] |
| --- | --- | --- | --- | --- | --- |
| Apple M1 Max 10CPU 64GB | stories110M | 101.84 tok/s | - | 10.47 tok/s | 39.28 tok/s |
| Apple M1 Max 10CPU 64GB | llama2_7b | 1.83 tok/s | 20.36 tok/s | - | 0.87 tok/s |
| Apple M1 Max 10CPU 64GB | llama2_13b | (segfault) | 11.71 tok/s | - | 0.38 tok/s |

Optimizations

  • transformer steps parallelism
  • loop unrolling
  • in-matrix parallelism
  • (todo) SIMD
  • (todo) quantization

All optimizations are fuzz-tested against the basic algorithm, which is itself tested. To disable optimizations, update the import in llama2/transformer.go to point at the package without optimizations and rebuild.

Related Work and References

[^simple]: no linear algebra optimizations
[^fast]: all linear algebra optimizations

Recent releases (data updated 2024-09-27 17:31:42):

2024-03-21 19:56:26 v0.7.1

2023-08-19 14:13:23 v0.7.0

2023-08-07 15:59:36 v0.6.0

2023-07-31 18:13:01 v0.5.1

2023-07-30 01:18:45 v0.5.0

2023-07-29 10:12:18 v0.4.0

2023-07-29 09:47:47 v0.3.0

2023-07-28 00:53:26 v0.2.0

2023-07-26 02:56:28 v0.1.1

2023-07-26 02:52:42 v0.1.0

Topics:

inference, large-language-model, llama, llama2, llm, machine-learning, ml, neural-networks, small-code
