# Fine-Tuning Open Source LLMs

This repo is part of the Certified Cloud Native Applied Generative AI Engineer program. It covers the sixth quarter of the coursework:

Quarter 6: Fine-Tuning Open-Source Large Language Models

This comprehensive course guides learners through fine-tuning open-source Large Language Models (LLMs) such as Meta LLaMA 3.1 using PyTorch, with a particular emphasis on cloud-native training and deployment. It covers everything from fundamentals to advanced concepts, so students acquire both theoretical knowledge and practical skills. The journey begins with an introduction to LLMs, focusing on their architecture, capabilities, and the specific features of Meta LLaMA 3.1. Students also set up their development environment, including tools such as Anaconda, Jupyter Notebooks, and PyTorch, to prepare for hands-on learning.
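
A quick way to confirm the environment is ready is to check the PyTorch installation and GPU visibility from a Jupyter notebook. This is a minimal sketch, not part of the official course materials, and assumes PyTorch was installed into the active Anaconda environment:

```python
# Minimal environment check for the setup described above.
# Assumes PyTorch is installed in the active conda environment,
# e.g. via `conda install pytorch pytorch-cuda -c pytorch -c nvidia`.
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available:  {torch.cuda.is_available()}")

if torch.cuda.is_available():
    # Report the GPU that training will run on and its memory budget.
    print(f"GPU:  {torch.cuda.get_device_name(0)}")
    print(f"VRAM: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB")
```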

Fine-tuning Meta LLaMA 3.1 with PyTorch forms a significant part of the course. Students delve into the architecture of Meta LLaMA 3.1, learn how to load pre-trained models, and apply fine-tuning techniques; a minimal sketch of this workflow follows below.
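
For orientation, here is a minimal sketch of loading a pre-trained LLaMA 3.1 checkpoint and attaching LoRA adapters for parameter-efficient fine-tuning. It assumes the Hugging Face transformers and peft libraries and approved access to the gated meta-llama/Meta-Llama-3.1-8B checkpoint; it is illustrative, not the exact workflow taught in the course.

```python
# Illustrative sketch: load a pre-trained LLaMA 3.1 model and wrap it with
# LoRA adapters. Assumes the `transformers` and `peft` packages are installed
# and the gated checkpoint has been approved for your Hugging Face account.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Meta-Llama-3.1-8B"  # gated; requires access approval

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",
)

# LoRA trains small low-rank adapter matrices on the attention projections
# instead of updating all ~8B base weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

From here, the wrapped model can be trained with a standard PyTorch loop or the Hugging Face Trainer, and the adapter weights saved separately from the base model.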

The course culminates in a capstone project, where students apply all the skills they have learned to fine-tune and deploy Meta LLaMA 3.1 on a chosen platform. This project allows students to demonstrate their understanding and proficiency in the entire process, from data preparation to cloud-native deployment.

## Study Material

- Introducing Llama 3.1: Our most capable models to date
- Step-By-Step Tutorial: How to Fine-tune Llama 3 (8B) with Unsloth + Google Colab & deploy it to Ollama
- Working with Llama 3
- Insanely Fast LLAMA-3 on Groq Playground and API for FREE
- Introducing Llama-3-Groq-Tool-Use Models
- Llama 3 Groq 8B Tool Use - Install and Do Actual Function Calling Locally
- Superfast RAG with Llama 3 and Groq
- Groq’s open-source Llama AI model tops leaderboard, outperforming GPT-4o and Claude in function calling
- How to Fine Tune Llama 3 LLM (or) any LLM in Colab | PEFT | Unsloth