Commit

Made more updates to documentation
dmatekenya committed Nov 8, 2024
1 parent 9aba6b9 commit 91c1b16
Showing 19 changed files with 1,267 additions and 50 deletions.
Empty file removed data/.gitkeep
40 changes: 31 additions & 9 deletions docs/_toc.yml
@@ -16,15 +16,15 @@ parts:
- file: docs/tunisia-may-24/tun-module-2
- file: docs/tunisia-may-24/tun-module-3
- file: docs/tunisia-may-24/tun-module-4
- file: notebooks/tunisia-may-24/tun-notebooks-intro.md
sections:
- file: notebooks/tunisia-may-24/1-text2sql-demo.ipynb
- file: notebooks/tunisia-may-24/2-document-classification-with-sklearn.ipynb
- file: notebooks/tunisia-may-24/3-intro-langchain.ipynb
- file: docs/tunisia-may-24/tun-project
sections:
- file: docs/tunisia-may-24/tun-project-ideas
- file: docs/tunisia-may-24/streamlit-app-deployment
- file: notebooks/tunisia-may-24/tun-notebooks-intro
sections:
- file: notebooks/tunisia-may-24/text2sql-demo.ipynb
- file: notebooks/tunisia-may-24/document-classification-with-sklearn.ipynb
- file: notebooks/tunisia-may-24/intro-langchain.ipynb
- caption: Malawi, Upcoming (Dec 24)
chapters:
- file: docs/malawi-nov-24/malawi
@@ -34,13 +34,35 @@
- file: docs/malawi-nov-24/mw-module-2
- file: docs/malawi-nov-24/mw-module-3
- file: docs/malawi-nov-24/mw-module-4
- file: docs/malawi-nov-24/mw-project.md
sections:
- file: docs/malawi-nov-24/mw-project-ideas
- file: docs/malawi-nov-24/mw-streamlit-app
- file: notebooks/malawi-nov-24/mw-notebooks-intro
sections:
- file: notebooks/malawi-nov-24/1-text2sql-demo.ipynb
- file: notebooks/malawi-nov-24/2-document-classification-with-sklearn.ipynb
- file: notebooks/malawi-nov-24/3-intro-langchain.ipynb
- file: docs/malawi-nov-24/mw-project.md
sections:
- file: docs/malawi-nov-24/mw-project-ideas
- file: docs/malawi-nov-24/mw-streamlit-app
- caption: Examples and Demos
chapters:
- file: docs/chatbot-examples
sections:
- url: https://github.com/worldbank/RAG-Based-ChatBot-Example
title: RAG Based Chatbot with Streamlit
- url: https://github.com/worldbank/WhatsApp-RAG-Example
title: RAG Based Chatbot with WhatsApp Frontend
- url: https://github.com/dmatekenya/Phoso-AI
title: PhosoAI - A Food Prices Chatbot
- file: notebooks/text-classification/intro-text-classification
sections:
- file: notebooks/text-classification/protest-classification-gpt.ipynb
- caption: Additional Resources
chapters:
- file: docs/additional-resources/intro-open-source-llms
sections:
- file: docs/additional-resources/llama3
- caption: Instructors and Contributors
chapters:
- file: docs/contributors/dunstanmatekenya


18 changes: 18 additions & 0 deletions docs/additional-resources/intro-open-source-llms.md
@@ -0,0 +1,18 @@
# Introduction to Open-Source Large Language Models
Open-source Large Language Models (LLMs) are language models whose code, training data (in some cases), and weights are freely accessible for public use, modification, and distribution. Unlike proprietary LLMs, which are controlled by specific companies and often come with restricted access or usage limits, open-source LLMs enable developers, researchers, and organizations to explore, modify, and even improve upon the models without licensing restrictions. These models typically offer greater transparency, allowing the community to understand the model’s architecture, training processes, and potential biases. Examples of open-source LLMs include models like GPT-Neo, BLOOM, and LLaMA, which have been developed with collaboration across academic, research, and industry communities.

The availability of open-source LLMs has significantly expanded the accessibility and flexibility of AI technology. Organizations can tailor these models to suit specialized needs, fine-tune them on proprietary or domain-specific data, and even deploy them in sensitive or closed environments where data privacy is paramount. Furthermore, open-source LLMs often allow for full offline deployment, giving users complete control over their applications’ data flow and reducing dependency on third-party API services. This is particularly beneficial in areas where data security and compliance are critical, such as healthcare, finance, and government.
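To make the offline-deployment point concrete, here is a minimal sketch using the Hugging Face `transformers` library. It is an illustrative example, not part of the course materials: the checkpoint name is just one small open-source model and can be swapped for any downloaded causal language model; after the initial download, generation runs entirely on local hardware with no third-party API.

```python
# Minimal sketch: run an open-source LLM locally with Hugging Face transformers.
# Assumes `pip install transformers torch`; once the checkpoint is cached
# locally, no external service is involved.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "EleutherAI/gpt-neo-1.3B"  # illustrative open-source checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
result = generator("Open-source language models matter because", max_new_tokens=40)
print(result[0]["generated_text"])
```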

Open-source LLMs also encourage innovation and collaboration within the AI community. By making powerful language models publicly available, researchers and developers can collectively address challenges such as reducing bias, improving efficiency, and enhancing interpretability. Additionally, open-source models often serve as valuable educational resources, allowing newcomers and experts alike to study state-of-the-art model architectures and contribute to their evolution. The open-source movement thus not only democratizes access to advanced AI technology but also drives progress and ethical advancements in the field.

## Key Open-Source LLMs We'll Focus On

In this section, we will explore some of the leading open-source LLMs, each with unique strengths and applications:

- **LLaMA 3**: The latest in Meta’s LLaMA series, LLaMA 3 is designed to balance high performance with computational efficiency, making it an excellent choice for resource-constrained environments.
- **GPT-Neo and GPT-J**: Developed by EleutherAI, these models aim to replicate the capabilities of OpenAI’s GPT models with a fully open-source approach, offering strong general-purpose language capabilities.
- **BLOOM**: Created by the BigScience project, BLOOM is a multilingual model trained on 46 natural languages and 13 programming languages, making it well suited for diverse, global applications.
- **Falcon**: Developed by the Technology Innovation Institute (TII), Falcon is known for its efficiency and strong accuracy, and is popular for real-world tasks like summarization and question answering.
- **MPT (MosaicML)**: Developed by MosaicML, MPT models are optimized for high throughput and are especially useful for deploying LLMs in production settings.

These models represent some of the best open-source LLMs available, and each offers unique features that make it suitable for various tasks and domains. We’ll dive into these models in detail, exploring their architectures, strengths, and how they can be adapted for specific use cases.
40 changes: 40 additions & 0 deletions docs/additional-resources/llama3.md
@@ -0,0 +1,40 @@
# Introduction to LLaMA 3

LLaMA 3 is the latest iteration in Meta’s LLaMA (Large Language Model Meta AI) series, designed to offer state-of-the-art language model capabilities with a focus on accessibility, efficiency, and performance. With improvements in both architecture and training methodology, LLaMA 3 provides enhanced capabilities in language understanding, generation, and task-specific adaptation. Like its predecessors, LLaMA 3 is released as an open-source model, making it widely accessible to researchers, developers, and organizations who seek powerful, adaptable language technology without relying on proprietary solutions.

### Model Versions and Sizes

LLaMA 3 is available in multiple versions, each differing in the number of model parameters, which allows users to choose a model that best suits their resource constraints and performance requirements:

- **8B**: The 8-billion-parameter model is the lightweight version, ideal for applications requiring efficient inference on less powerful hardware, such as a single GPU, smaller servers, or personal workstations.
- **70B**: The 70-billion-parameter model provides the most sophisticated language capabilities of the base LLaMA 3 release. It is well suited for complex, high-stakes tasks, such as detailed summarization, content generation, and sophisticated language understanding in production environments, though it requires substantially more computational resources.

The follow-up LLaMA 3.1 release added an even larger 405-billion-parameter model for frontier-scale workloads, which generally requires multi-GPU or hosted deployment.

These versions allow users to select a model that aligns with their computational capabilities while still providing strong language performance.
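As a rough illustration of choosing a size, the sketch below loads the 8B instruction-tuned checkpoint with Hugging Face `transformers`; the 70B variant can be substituted on larger hardware. This is a hedged example rather than official Meta code: the model IDs are the gated releases on the Hugging Face Hub, so access must be requested and an auth token configured, and the `accelerate` package is assumed for `device_map="auto"`.

```python
# Illustrative sketch: load a LLaMA 3 checkpoint sized to the available hardware.
# Assumes access to the gated meta-llama repositories on the Hugging Face Hub
# and that `transformers`, `torch`, and `accelerate` are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # swap for the 70B variant on bigger GPUs

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",           # spread layers across available devices
)

messages = [{"role": "user", "content": "Explain LLaMA 3 in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```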

### Use Cases for LLaMA 3

LLaMA 3’s flexibility and open-source nature have encouraged various innovative applications across industries and research fields. Here are some prominent use cases:

- **Customer Support Chatbots**: Companies are using LLaMA 3 to build highly responsive, conversational chatbots that assist customers with inquiries, troubleshooting, and recommendations.
- **Content Generation**: LLaMA 3 is used to produce articles, blogs, social media posts, and other types of content, benefiting industries focused on digital marketing, media, and publishing.
- **Scientific Research Assistance**: Researchers leverage LLaMA 3 for summarizing papers, identifying key insights, and generating literature reviews, which accelerates the research process.
- **Data Querying with Natural Language**: The larger versions of LLaMA 3, such as the 70B model, are effective in text-to-SQL tasks, allowing users to pose complex database questions in natural language and enabling non-technical users to extract insights from structured data.
- **Language Translation and Localization**: LLaMA 3 is being used to build advanced translation tools, supporting multi-language communication and localization for businesses operating in diverse linguistic regions.

### Best Practices for Adapting and Fine-Tuning LLaMA 3

Adapting LLaMA 3 to specific tasks involves several best practices to optimize performance while maintaining efficiency:

1. **Task-Specific Fine-Tuning**: Fine-tune LLaMA 3 on domain-specific data to improve model accuracy for specialized applications, such as legal, medical, or financial contexts.
2. **Data Preprocessing**: Ensure that the training data is clean, well-labeled, and representative of the end-task to maximize the effectiveness of fine-tuning. Properly formatted and balanced datasets lead to better generalization.
3. **Parameter Selection**: Select an appropriate model version (e.g., 13B for balanced performance or 70B for high accuracy) based on available hardware and the complexity of the task.
4. **Training with Smaller Batches**: For memory-limited setups, consider using gradient accumulation with smaller batch sizes to fine-tune the model without overloading the hardware (a minimal sketch follows this list).
5. **Prompt Engineering**: Carefully design prompts for zero-shot or few-shot tasks, especially when full fine-tuning is not feasible. Prompt tuning can significantly improve the accuracy of LLaMA 3 on many tasks.
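The sketch below illustrates practices 2 through 4 under stated assumptions; it is not an official fine-tuning recipe. The corpus file, output directory, and hyperparameters are placeholders, and the gated LLaMA 3 checkpoint (or any other causal language model) must already be accessible locally.

```python
# Hedged fine-tuning sketch: small per-device batches plus gradient accumulation
# keep memory use low while preserving a reasonably large effective batch size.
# The corpus file and hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "meta-llama/Meta-Llama-3-8B"    # any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical cleaned, domain-specific text corpus (one document per line).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="llama3-domain-ft",
    per_device_train_batch_size=1,   # tiny batch that fits limited GPU memory
    gradient_accumulation_steps=16,  # effective batch size of 16
    num_train_epochs=1,
    learning_rate=2e-5,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```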

### Supported Languages

LLaMA 3 supports multilingual applications, although its training data remains predominantly English with a meaningful share of text drawn from dozens of other languages. The follow-up LLaMA 3.1 release officially supports eight languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. The architecture handles multilingual tasks with only a modest drop in performance, which makes it a reasonable choice for projects requiring cross-lingual or multi-language support.

By combining its open-source accessibility with robust performance across various tasks and languages, LLaMA 3 represents a powerful tool for anyone seeking to integrate advanced language technology into their projects.
8 changes: 8 additions & 0 deletions docs/chatbot-examples.md
@@ -0,0 +1,8 @@
# Building Chatbots with LLMs Using the LangChain Framework
Large Language Models (LLMs) have transformed the development of chatbots, enabling them to handle complex conversations, provide accurate answers, and even perform specialized tasks based on natural language inputs. Using the LangChain framework, developers can seamlessly integrate LLMs into chatbots, creating dynamic applications that adapt to different needs and user environments. LangChain offers an efficient, modular approach to building LLM-based chatbots, allowing developers to chain together various functionalities such as question answering, data querying, and even complex reasoning tasks. By leveraging LangChain, we can develop chatbots with various front ends—from web-based interfaces to WhatsApp integrations—tailored to diverse user requirements.
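As a starting point, the snippet below sketches a minimal question-and-answer chain with LangChain's expression language. It assumes the `langchain-openai` package is installed and an `OPENAI_API_KEY` environment variable is set; the model name is only an example, and any chat model wrapper can be swapped in.

```python
# Minimal LangChain sketch: prompt template piped into a chat model, with the
# reply parsed back to a plain string. Assumes langchain-openai is installed
# and OPENAI_API_KEY is set in the environment.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that answers questions concisely."),
    ("human", "{question}"),
])
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

chain = prompt | llm | StrOutputParser()
print(chain.invoke({"question": "What is retrieval-augmented generation?"}))
```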

In this section, we present several chatbot examples that vary in complexity, functionality, and use case. Some examples will illustrate simple question-and-answer bots that respond to straightforward user queries, demonstrating the ease with which LLMs can be used to retrieve and present information conversationally. More advanced examples will focus on text-to-SQL (text2sql) functionality, where the chatbot can interpret natural language questions, extract relevant information from structured tabular data, and deliver accurate responses in human-readable text. These demonstrations will showcase how LangChain enables efficient interaction with databases, making it ideal for applications requiring data-driven insights.
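A hedged sketch of the text-to-SQL pattern is shown below: the model drafts a SQL query from a natural-language question, and the query is then executed against the database. The SQLite file and its contents are hypothetical placeholders, not the datasets used in the linked repositories.

```python
# Text-to-SQL sketch with LangChain: generate a query from natural language,
# inspect it, then run it against the database. The database path below is a
# hypothetical placeholder.
from langchain.chains import create_sql_query_chain
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///food_prices.db")  # hypothetical SQLite database
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

write_query = create_sql_query_chain(llm, db)
sql = write_query.invoke({"question": "What was the average maize price in 2023?"})

print(sql)          # review the generated SQL before executing it
print(db.run(sql))  # execute the query and print the returned rows
```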

Additionally, we’ll demonstrate chatbots capable of processing voice inputs, adding a new layer of accessibility and convenience for users who prefer or require voice interaction. This capability broadens the chatbot’s usability, especially in settings where hands-free interaction or accessibility for visually impaired users is a priority. By combining LLMs with LangChain’s flexible infrastructure, these examples highlight the range of chatbot applications possible, from simple informational tools to sophisticated virtual assistants capable of complex data extraction and voice processing.
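One way to add voice support, sketched below, is to transcribe the audio with the open-source Whisper model and pass the resulting text into the same chain used for typed questions. The audio file name is a placeholder, and other speech-to-text services could be substituted.

```python
# Voice-input sketch: transcribe an audio message with open-source Whisper,
# then answer it with the same LangChain chain used for typed questions.
# Assumes `pip install openai-whisper` plus ffmpeg; the audio file is a placeholder.
import whisper
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

transcriber = whisper.load_model("base")
question = transcriber.transcribe("voice_note.ogg")["text"]  # hypothetical voice note

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{question}"),
])
chain = prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0) | StrOutputParser()
print(chain.invoke({"question": question}))
```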

All examples are provided as self-contained GitHub repositories, complete with full instructions and extensive documentation on how each chatbot was built. Starting from any of these repositories, you can clone the code, make a few customizations, and deploy your own chatbot quickly and efficiently. This approach lets you leverage robust, pre-built solutions while tailoring them to your unique requirements.
8 changes: 8 additions & 0 deletions docs/contributors/dunstanmatekenya.md
@@ -0,0 +1,8 @@
# Dunstan Matekenya
## Lead Author
Dr. Dunstan Matekenya is a Data Scientist with 15 years of experience in both traditional statistics and modern data science methods. He currently works as a Data Scientist at the World Bank Group headquarters in Washington, DC. Prior to joining the WBG, Dunstan completed his PhD at the University of Tokyo in 2016; his doctoral research focused on using machine learning methods to extract insights from mobile phone data. Before reorienting his career toward Data Science, Dunstan worked as a Statistician at the National Statistical Office in Malawi from 2007 to 2017, where he contributed to flagship projects such as the 2008 Malawi Population and Housing Census and led the GIS unit. He is passionate about modernizing official statistics in developing countries through alternative data sources such as mobile phone data, as well as building capacity in Data Science.
```{image} ../images/dunstan.jpeg
:alt: Dunstan Matekenya
:width: 300px
:align: center
```
Binary file added docs/images/dunstan.jpeg
Binary file removed docs/images/github-template-action-enable.png
Binary file removed docs/images/github-template-action.png
Binary file removed docs/images/github-template-create.png
Binary file removed docs/images/github-template-pages.png
Binary file removed docs/images/github-template.png
