Commit

Update S0-L24.md
qiyanjun authored Mar 3, 2024
1 parent a250e57 commit b54daf2
Showing 1 changed file with 2 additions and 0 deletions.
2 changes: 2 additions & 0 deletions _contents/S0-L24.md
@@ -26,6 +26,8 @@ In this session, our readings cover:

## More Readings:

### The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
+ Recent research, such as BitNet [23], is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16) Transformer LLM with the same model size and training tokens in terms of both perplexity and end-task performance, while being significantly more cost-effective in terms of latency, memory, throughput, and energy consumption. More profoundly, the 1.58-bit LLM defines a new scaling law and recipe for training new generations of LLMs that are both high-performance and cost-effective. Furthermore, it enables a new computation paradigm and opens the door for designing specific hardware optimized for 1-bit LLMs.
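The ternary quantization described above can be sketched in a few lines. BitNet b1.58's paper describes an "absmean" scheme: scale the weight matrix by its mean absolute value, then round each entry and clip it to {-1, 0, 1}. The snippet below is a minimal NumPy illustration of that idea, not the official BitNet implementation; the function name and the per-tensor (rather than per-group) scaling are assumptions for clarity.

```python
import numpy as np

def absmean_ternary_quantize(w: np.ndarray, eps: float = 1e-5):
    """Quantize a weight tensor to ternary values {-1, 0, 1}.

    Sketch of the absmean recipe from the BitNet b1.58 paper:
    scale by the mean absolute weight, round, clip to [-1, 1].
    """
    gamma = np.abs(w).mean() + eps               # per-tensor absmean scale
    w_q = np.clip(np.round(w / gamma), -1, 1)    # ternary weights
    return w_q.astype(np.int8), gamma            # dequantize as w_q * gamma

# Toy example: small weights collapse to 0, large ones saturate at +/-1.
w = np.array([[0.4, -1.2, 0.05],
              [2.0, -0.3, 0.9]])
w_q, gamma = absmean_ternary_quantize(w)
print(w_q)  # every entry is -1, 0, or 1
```

Because every weight is one of three values, matrix multiplication reduces to additions and subtractions of activations (no weight multiplies), which is the source of the latency, memory, and energy savings the abstract claims.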

### Langchain:
+ https://python.langchain.com/docs/get_started/introduction
