Stars
An open source implementation of Microsoft's VALL-E X zero-shot TTS model. A demo is available at https://plachtaa.github.io/vallex/
JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
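As a rough illustration of how that inference code is typically driven, here is a minimal sketch following the repository's example notebooks; it assumes the `segment_anything` package is installed and the ViT-H checkpoint has been downloaded, and the image path and point prompt are placeholders.

```python
# Minimal sketch: prompted mask prediction with Segment Anything (SAM).
# Assumes sam_vit_h_4b8939.pth has been downloaded; "example.jpg" is a placeholder.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt with a single foreground point (x, y); label 1 marks foreground.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
```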
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
A practical interaction interface for large language models such as GPT/GLM, with a specially optimized experience for paper reading, polishing, and writing. Modular design with support for custom shortcut buttons & function plugins, project analysis & self-translation for Python, C++, and other codebases, PDF/LaTeX paper translation & summarization, parallel queries to multiple LLM models, and local models such as chatglm3. Integrates Tongyi Qianwen, deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, m…
A multi-voice TTS system trained with an emphasis on quality
Implementation of GigaGAN, a new SOTA GAN out of Adobe and the culmination of nearly a decade of research into GANs.
Elegant and powerful. Powered by OpenAI and Vercel.
Code for the paper "ViperGPT: Visual Inference via Python Execution for Reasoning"
The implementation of "Prismer: A Vision-Language Model with Multi-Task Experts".
tloen / llama-int8
Forked from meta-llama/llama. Quantized inference code for LLaMA models.
Official implementation of "Composer: Creative and Controllable Image Synthesis with Composable Conditions"
[ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
ChatGPT + DALL-E + WhatsApp = AI Assistant 🚀 🤖
Running large language models on a single GPU for throughput-oriented scenarios.
A method to increase the speed and lower the memory footprint of existing vision transformers.
🦜🔗 Build context-aware reasoning applications
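As a hedged sketch of what such an application looks like, the snippet below chains a prompt, a chat model, and an output parser with LangChain's expression language; the model name and the reliance on an `OPENAI_API_KEY` environment variable are assumptions, and import paths vary between LangChain versions.

```python
# Minimal sketch: prompt -> model -> parser chain with LangChain.
# Assumes langchain-core and langchain-openai are installed and OPENAI_API_KEY is set.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Summarize the following text:\n\n{text}")
llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model name
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain composes prompts, models, and parsers into chains."}))
```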
LAVIS - A One-stop Library for Language-Vision Intelligence
LlamaIndex is a data framework for your LLM applications
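A minimal sketch of that data-framework idea, building a vector index over local files and querying it; the `data/` directory and query string are placeholders, the default OpenAI backends are assumed for embeddings and generation, and older releases import from `llama_index` rather than `llama_index.core`.

```python
# Minimal sketch: index local documents and ask a question with LlamaIndex.
# Assumes llama-index is installed and an OpenAI API key is configured.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # "data/" is a placeholder path
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("What do these documents cover?")
print(response)
```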
Cramming the training of a (BERT-type) language model into limited compute.
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"