From 795e35b1615efac6b0a665c0b0e3768a5a078d90 Mon Sep 17 00:00:00 2001
From: Yanjun Qi / Jane
Date: Sun, 24 Mar 2024 14:03:18 -0400
Subject: [PATCH] Update S0-L24.md

---
 _contents/S0-L24.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_contents/S0-L24.md b/_contents/S0-L24.md
index a9edcbae..f5166923 100755
--- a/_contents/S0-L24.md
+++ b/_contents/S0-L24.md
@@ -28,7 +28,7 @@ In this session, our readings cover:
 ### Tool Use in LLMs
 + https://zorazrw.github.io/files/WhatAreToolsAnyway.pdf
-+ - provides an overview of tool use in LLMs, including a formal definition of the tool-use paradigm, scenarios where LLMs leverage tool usage, and for which tasks this approach works well; it also provides an analysis of complex tool usage and summarize testbeds and evaluation metrics across LM tooling works
++ an overview of tool use in LLMs, including a formal definition of the tool-use paradigm, scenarios where LLMs leverage tool usage, and for which tasks this approach works well; it also provides an analysis of complex tool usage and summarizes testbeds and evaluation metrics across LM tooling works
 
 ### The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
 + Recent research, such as BitNet [23], is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16) Transformer LLM with the same model size and training tokens in terms of both perplexity and end-task performance, while being significantly more cost-effective in terms of latency, memory, throughput, and energy consumption. More profoundly, the 1.58-bit LLM defines a new scaling law and recipe for training new generations of LLMs that are both high-performance and cost-effective. Furthermore, it enables a new computation paradigm and opens the door for designing specific hardware optimized for 1-bit LLMs.
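
For context on the BitNet b1.58 reading referenced in the patch above, here is a minimal NumPy sketch of ternary (1.58-bit) weight quantization: each weight is scaled by the matrix's mean absolute value and rounded into {-1, 0, 1}, with the scale kept for dequantization. The function name, epsilon constant, and toy usage are illustrative assumptions, not code from the paper or this repository.

```python
import numpy as np

def absmean_ternary_quantize(W: np.ndarray, eps: float = 1e-8):
    """Sketch of absmean ternary quantization (assumed helper, not from the source).

    Maps every weight to {-1, 0, 1} by dividing by the mean absolute
    value of the matrix, rounding, and clipping; returns the ternary
    matrix plus the scale gamma needed to approximate the original weights.
    """
    gamma = np.mean(np.abs(W)) + eps                  # per-matrix scale
    W_ternary = np.clip(np.round(W / gamma), -1, 1)   # entries in {-1, 0, 1}
    return W_ternary.astype(np.int8), gamma

# Toy usage: quantize a random weight matrix and check the reconstruction error.
W = np.random.randn(4, 4).astype(np.float32)
W_q, gamma = absmean_ternary_quantize(W)
W_approx = W_q * gamma                                # dequantized approximation
print(W_q)                                            # only -1, 0, 1 values
print(np.abs(W - W_approx).mean())                    # average absolute error
```

Because every stored parameter takes one of three values, matrix products reduce to additions and subtractions scaled by gamma, which is the source of the latency, memory, and energy savings the abstract describes.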