Added RLHF Blog
hetulvp committed Mar 19, 2024
1 parent 19d5248 commit 6aa8e17
Showing 16 changed files with 1,589 additions and 16 deletions.
8 changes: 3 additions & 5 deletions README.md
@@ -7,7 +7,7 @@ A multi-part seminar series on Large Language Models (LLMs).

![Session 1](images/home_page/Large%20Language%20Models.png)

<p align="center"><a href="https://xmind.works/share/cmFNh1uK?xid=SjTLV1U0">Large Language Models Full Topic List</a></p>
<p align="center">Click here for <a href="https://xmind.works/share/cmFNh1uK?xid=SjTLV1U0">Large Language Models Full Topic List</a></p>

## [Emergence, Fundamentals and Landscape of LLMs](session_1)

@@ -34,14 +34,12 @@ Explore diverse applications of Large Language Models (LLMs) and the frameworks

Coming soon...

## ✨ Training and Evaluating LLMs On Custom Datasets
## [Training and Evaluating LLMs On Custom Datasets](session_4)

Delve into the intricacies of training and evaluating Large Language Models (LLMs) on your custom datasets. Gain insights into optimizing performance, fine-tuning, and assessing model effectiveness tailored to your specific data.

![Session 4](images/home_page/Session%204.png)

Coming soon...

## ✨ Optimizing LLMs For Inference and Deployment Techniques

Learn techniques to optimize Large Language Models (LLMs) for efficient inference. Explore strategies for seamless deployment, ensuring optimal performance in real-world applications.
@@ -50,7 +48,7 @@ Learn techniques to optimize Large Language Models (LLMs) for efficient inferenc

Coming soon...

## ✨ Open Challanges With LLMs
## ✨ Open Challenges With LLMs

Delve into the dichotomy of small vs large LLMs, navigating production challenges, addressing research hurdles, and understanding the perils associated with the utilization of pretrained LLMs. Explore the evolving landscape of challenges within the realm of Large Language Models.

Binary file added images/site/infocusp_logo_blue.png
5 changes: 0 additions & 5 deletions images/site/infocusp_logo_blue.svg

This file was deleted.

9 changes: 7 additions & 2 deletions mkdocs.yaml
@@ -5,7 +5,7 @@ docs_dir: .
site_dir: ../site
theme:
name: material
# logo: assets/icons8-code-64.png
logo: images/site/infocusp_logo_blue.png
palette:
primary: white
features:
@@ -22,6 +22,7 @@ theme:
markdown_extensions:
- toc:
permalink: true
toc_depth: 3
- pymdownx.highlight:
anchor_linenums: true
line_spans: __span
@@ -41,6 +42,8 @@ extra_css:
extra:
generator: false
social:
- icon: fontawesome/solid/globe
link: https://infocusp.com
- icon: fontawesome/brands/linkedin
link: https://in.linkedin.com/company/infocusp
- icon: fontawesome/brands/github
@@ -51,4 +54,6 @@ extra:
link: https://twitter.com/_infocusp
plugins:
- search
- same-dir
- same-dir
- mkdocs-jupyter:
ignore_h1_titles: True
3 changes: 2 additions & 1 deletion requirements.txt
@@ -1,4 +1,5 @@
mkdocs>=1.2.2
mkdocs-material>=7.1.11
mkdocs-static-i18n>=0.18
mkdocs-same-dir>=0.1.3
mkdocs-same-dir>=0.1.3
mkdocs-jupyter>=0.16.1
1 change: 1 addition & 0 deletions session_1/README.md
@@ -6,6 +6,7 @@
Covers important building blocks of what we call an LLM today, where they came from, etc. and then we'll dive into the deep universe that has sprung to life around these LLMs.

This session is aimed to help:

* People who are new to LLMs
* People who have just started working on them
* People who are working on different use cases surrounding LLMs and need a roadmap.
4 changes: 2 additions & 2 deletions session_1/part_3_landscape_of_llms/README.md
@@ -465,9 +465,9 @@

</details>

## Challanges with LLMs
## Challenges with LLMs

![Challanges with LLMs](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-challanges.png)
![Challenges with LLMs](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-challenges.png)

<details markdown>

34 changes: 34 additions & 0 deletions session_4/README.md
@@ -0,0 +1,34 @@
# Session 4 - Training and Evaluating LLMs On Custom Datasets

<p align="center"><img src="../images/home_page/Session%204.png" alt="Session 4"/></p>

This session aims to equip you with the knowledge to train Large Language Models (LLMs) by exploring techniques like unsupervised pretraining and supervised fine-tuning with various preference optimization methods. It will also cover efficient fine-tuning techniques, retrieval-based approaches, and language agent fine-tuning. Additionally, the session will discuss LLM training frameworks and delve into evaluation methods for LLMs, including evaluation-driven development and using LLMs for evaluation itself.
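One of the preference-optimization methods the session covers can be sketched with the Direct Preference Optimization (DPO) objective. This is a minimal, framework-free illustration, not the session's own material: the function name, scalar inputs, and default `beta` are illustrative assumptions, and real training computes this over batches of token-level log-probabilities with a framework such as PyTorch.

```python
import math

def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """DPO loss for a single preference pair (chosen vs. rejected response).

    Each argument is the summed log-probability of a response under either
    the policy being trained or the frozen reference model.
    """
    # How much more (in log space) the policy favors each response
    # than the reference model does.
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_logratio - rejected_logratio)
    # -log(sigmoid(margin)): shrinks as the policy learns to prefer
    # the chosen response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With no preference shift the loss sits at log 2; raising the chosen response's relative log-probability (or lowering the rejected one's) drives it down, which is the gradient signal that replaces an explicit RLHF reward model in DPO.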

This session is aimed to help:

* People who are already familiar with the basics of LLMs and Transformers
* People who already know how to use pre-trained LLMs with prompt engineering and RAG
* People who want to train or finetune their own LLMs on custom data
* People who want to learn how to evaluate LLMs

## Outline

### Part 1: Training Foundational LLMs

Coming soon...

### Part 2: [Finetuning LMs To Human Preferences](part_2_finetuning_lms_to_human_preferences/RLHF.ipynb)

#### Details

* Date: 14 March, 2024
* Speaker: [Abhor Gupta](https://in.linkedin.com/in/abhor-gupta-565386145)
* Location: [Infocusp Innovations LLP](https://www.infocusp.com/)

#### Material

* Recording: TODO

### Part 3: LLM Training Frameworks

Coming soon...
