Releases: LazyAGI/LazyLLM

v0.3.0a0

22 Nov 14:33
update version to 0.3.0

v0.2.5

29 Sep 04:02
3fae1ce
change default glm module to GLM-4-Flash (#287)

v0.2.4

19 Sep 05:22
9f0e00f
Cherry-pick web assistant to dev/0.2 (#259)

Co-authored-by: dorren002 <[email protected]>
Co-authored-by: wenduren <[email protected]>

v0.2.3

04 Sep 11:45
4a2e075
update version to 0.2.3 (#223)

v0.2.2

03 Sep 03:47
b8d8acc
  1. Refactored RAG to make it more user-friendly and extensible, allowing flexible customization.
  2. Added support for multimodal capabilities, including text-to-image, image-text understanding, speech-to-text, and text-to-speech/music.
  3. Refactored the microservice architecture to support the streaming output of results from any module.
  4. Added support for Function Call and SQL Call, as well as high-level agents like React, ReWOO, and PlanAndSolve.
  5. Introduced a new CookBook document and provided bilingual API documentation in both Chinese and English.


v0.1.2

25 Jun 11:00
946b9dd

LazyLLM v0.1 released!

Features

Convenient AI Application Assembly Process: Even if you are not familiar with large models, you can easily assemble multi-agent AI applications from our built-in data-flow and functional modules, much like building with Lego bricks.
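The "Lego-style" assembly idea can be sketched with plain Python callables chained into a pipeline. This is an illustrative sketch only; `make_pipeline`, `normalize`, and `answer` are hypothetical names, not LazyLLM's actual API.

```python
# Illustrative sketch: compose plain callables so each step's output
# feeds the next, mimicking a data-flow pipeline. Not LazyLLM's API.

def make_pipeline(*steps):
    """Chain callables left to right into a single application."""
    def run(x):
        for step in steps:
            x = step(x)
        return x
    return run

# Two toy "agents": one normalizes a query, one answers it.
normalize = lambda q: q.strip().lower()
answer = lambda q: f"answer to: {q}"

app = make_pipeline(normalize, answer)
print(app("  What is RAG? "))  # -> answer to: what is rag?
```

In LazyLLM itself the same composition idea is exposed through flow constructs such as pipeline and parallel, with real modules (LLMs, retrievers) in place of the toy lambdas here.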

One-Click Deployment of Complex Applications: We offer the capability to deploy all modules with a single click. Specifically, during the POC (Proof of Concept) phase, LazyLLM simplifies the deployment process of multi-agent applications through a lightweight gateway mechanism, solving the problem of sequentially starting each submodule service (such as LLM, Embedding, etc.) and configuring URLs, making the entire process smoother and more efficient. In the application release phase, LazyLLM provides the ability to package images with one click, making it easy to utilize Kubernetes' gateway, load balancing, and fault tolerance capabilities.

Cross-Platform Compatibility: Switch IaaS platforms with one click without modifying code, compatible with bare-metal servers, development machines, Slurm clusters, public clouds, etc. This allows developed applications to be seamlessly migrated to other IaaS platforms, greatly reducing the workload of code modification.

Support for Grid Search Parameter Optimization: Automatically try different base models, retrieval strategies, and fine-tuning parameters based on user configurations to evaluate and optimize applications. This makes hyperparameter tuning efficient without requiring extensive intrusive modifications to application code, helping users quickly find the best configuration.
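The grid-search idea described above, trying every combination of configuration axes and keeping the best scorer, can be sketched as follows. The axes (`base_model`, `top_k`) and the scoring function are stand-ins for illustration, not LazyLLM's real configuration keys or evaluator.

```python
# Minimal grid-search sketch: enumerate the Cartesian product of
# hypothetical configuration axes and keep the best-scoring config.
from itertools import product

grid = {
    "base_model": ["model-a", "model-b"],  # hypothetical axis values
    "top_k": [3, 5],
}

def evaluate(cfg):
    # Stand-in score; a real setup would run the app and measure quality.
    return len(cfg["base_model"]) + cfg["top_k"]

configs = [dict(zip(grid, values)) for values in product(*grid.values())]
best = max(configs, key=evaluate)
print(best)
```

The same pattern scales to retrieval strategies and fine-tuning parameters by adding axes to `grid`; the cost is the product of the axis sizes, which is why LazyLLM automates the sweep rather than asking users to hand-edit application code per trial.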

Efficient Model Fine-Tuning: Support fine-tuning models within applications to continuously improve application performance. Automatically select the best fine-tuning framework and model splitting strategy based on the fine-tuning scenario. This not only simplifies the maintenance of model iterations but also allows algorithm researchers to focus more on algorithm and data iteration, without handling tedious engineering tasks.

What can LazyLLM do

  1. Application Building: Defines workflows such as pipeline, parallel, diverter, if, switch, and loop. Developers can quickly build multi-agent AI applications based on any functions and modules. Supports one-click deployment for assembled multi-agent applications, and also supports partial or complete updates to the applications.
  2. Platform-independent: Consistent user experience across different computing platforms. Currently compatible with platforms such as bare metal, Slurm, and SenseCore.
  3. Supports fine-tuning and inference for large models:
    • Offline (local) model services:
      • Supports fine-tuning frameworks: collie, peft
      • Supports inference frameworks: lightllm, vllm
      • Supports automatically selecting the most suitable framework and model parameters (such as micro-bs, tp, zero, etc.) based on the user's scenario.
    • Online services:
      • Supports fine-tuning services: GPT, SenseNova, Tongyi Qianwen
      • Supports inference services: GPT, SenseNova, Kimi, Zhipu, Tongyi Qianwen
      • Supports embedding inference services: OpenAI, SenseNova, GLM, Tongyi Qianwen
    • Lets developers use local services and online services through a uniform interface.
  4. Supports common RAG (Retrieval-Augmented Generation) components: Document, Parser, Retriever, Reranker, etc.
  5. Supports basic web interfaces, such as a chat interface and a document management interface.
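The RAG components named in item 4 can be illustrated with a toy retriever. The class names below mirror the concepts (Document, Retriever) but are hypothetical; they are not LazyLLM's real classes, and the word-overlap scoring is a stand-in for real embedding-based retrieval.

```python
# Toy sketch of RAG-style retrieval: rank documents by naive word
# overlap with the query. Illustrative only, not LazyLLM's API.

class Document:
    def __init__(self, text):
        self.text = text

class Retriever:
    """Return the top_k documents sharing the most words with the query."""
    def __init__(self, docs):
        self.docs = docs

    def __call__(self, query, top_k=2):
        q = set(query.lower().split())
        scored = sorted(self.docs,
                        key=lambda d: len(q & set(d.text.lower().split())),
                        reverse=True)
        return scored[:top_k]

docs = [Document("lazyllm builds multi agent apps"),
        Document("slurm is a cluster scheduler"),
        Document("agent apps use retrieval")]
hits = Retriever(docs)("multi agent apps", top_k=2)
print([d.text for d in hits])
```

In a real pipeline the retrieved documents would then pass through a Reranker and finally into an LLM prompt; the toy scoring here only shows where each component sits in the flow.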