diff --git a/.github/workflows/github_pages.yaml b/.github/workflows/github_pages.yaml
new file mode 100644
index 0000000..1895923
--- /dev/null
+++ b/.github/workflows/github_pages.yaml
@@ -0,0 +1,28 @@
+name: ci
+on:
+ push:
+ branches:
+ - development
+permissions:
+ contents: write
+jobs:
+ deploy:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+ - name: Configure Git Credentials
+ run: |
+ git config user.name github-actions[bot]
+ git config user.email 41898282+github-actions[bot]@users.noreply.github.com
+ - uses: actions/setup-python@v5
+ with:
+ python-version: 3.x
+ - run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
+ - uses: actions/cache@v4
+ with:
+ key: mkdocs-material-${{ env.cache_id }}
+ path: .cache
+ restore-keys: |
+ mkdocs-material-
+ - run: pip install -r requirements.txt
+ - run: mkdocs gh-deploy --force
\ No newline at end of file
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..9c220aa
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+site/*
+.vscode/*
\ No newline at end of file
diff --git a/README.md b/README.md
index 3bc3b61..6223031 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ A multi-part seminar series on Large Language Models (LLMs).
Large Language Models Full Topic List
-## 👉 [Emergence, Fundamentals and Landscape of LLMs](session_1)
+## ✨ [Emergence, Fundamentals and Landscape of LLMs](session_1)
Covers the important building blocks of what we call an LLM today and where they came from, and then we'll dive into the deep universe that has sprung to life around these LLMs.
@@ -17,7 +17,7 @@ Covers important building blocks of what we call an LLM today, where they came f
-## 👉 Universe of Pretrained LLMs and Prompt Engineering
+## ✨ Universe of Pretrained LLMs and Prompt Engineering
In this session, we will introduce various pretrained LLMs, encompassing both open source and proprietary options. We will explore different prompt engineering techniques for using pretrained LLMs on different tasks.
@@ -26,7 +26,7 @@ In this session, we will introduce various pretrained LLMs, encompassing both op
Coming soon...
-## 👉 Applications of LLMs and Application Development Frameworks
+## ✨ Applications of LLMs and Application Development Frameworks
Explore diverse applications of Large Language Models (LLMs) and the frameworks essential for streamlined application development. Uncover how LLMs can revolutionize tasks and leverage frameworks for efficient integration into real-world applications.
@@ -34,7 +34,7 @@ Explore diverse applications of Large Language Models (LLMs) and the frameworks
Coming soon...
-## 👉 Training and Evaluating LLMs On Custom Datasets
+## ✨ Training and Evaluating LLMs On Custom Datasets
Delve into the intricacies of training and evaluating Large Language Models (LLMs) on your custom datasets. Gain insights into optimizing performance, fine-tuning, and assessing model effectiveness tailored to your specific data.
@@ -42,7 +42,7 @@ Delve into the intricacies of training and evaluating Large Language Models (LLM
Coming soon...
-## 👉 Optimizing LLMs For Inference and Deployment Techniques
+## ✨ Optimizing LLMs For Inference and Deployment Techniques
Learn techniques to optimize Large Language Models (LLMs) for efficient inference. Explore strategies for seamless deployment, ensuring optimal performance in real-world applications.
@@ -50,7 +50,7 @@ Learn techniques to optimize Large Language Models (LLMs) for efficient inferenc
Coming soon...
-## 👉 Open Challanges With LLMs
+## ✨ Open Challenges With LLMs
Delve into the dichotomy of small vs large LLMs, navigating production challenges, addressing research hurdles, and understanding the perils associated with the utilization of pretrained LLMs. Explore the evolving landscape of challenges within the realm of Large Language Models.
@@ -58,7 +58,7 @@ Delve into the dichotomy of small vs large LLMs, navigating production challenge
Coming soon...
-## 👉 LLM Courses
+## ✨ LLM Courses
List of courses to learn LLMs at your own pace.
diff --git a/images/site/infocusp_logo_blue.svg b/images/site/infocusp_logo_blue.svg
new file mode 100644
index 0000000..39cc1bc
--- /dev/null
+++ b/images/site/infocusp_logo_blue.svg
@@ -0,0 +1,5 @@
+
diff --git a/mkdocs.yaml b/mkdocs.yaml
new file mode 100644
index 0000000..886ddaf
--- /dev/null
+++ b/mkdocs.yaml
@@ -0,0 +1,53 @@
+site_name: Large Language Models Seminar Series
+copyright: Copyright © 2024 Infocusp Innovations LLP
+repo_url: https://github.com/InFoCusp/llm_seminar_series
+docs_dir: .
+site_dir: ../site
+theme:
+ name: material
+ # logo: assets/icons8-code-64.png
+ palette:
+ primary: white
+ features:
+ - content.code.copy
+ - navigation.footer
+ - navigation.tabs
+ - navigation.expand
+ - navigation.path
+ - navigation.indexes
+ - navigation.top
+ - search.suggest
+ - search.share
+ - search.highlight
+markdown_extensions:
+ - toc:
+ permalink: true
+ - pymdownx.highlight:
+ anchor_linenums: true
+ line_spans: __span
+ pygments_lang_class: true
+ - pymdownx.inlinehilite
+ - pymdownx.snippets
+ - admonition
+ - md_in_html
+ - pymdownx.superfences:
+ custom_fences:
+ - name: mermaid
+ class: mermaid
+ format: !!python/name:pymdownx.superfences.fence_code_format
+extra_css:
+ - stylesheets/extra.css
+extra:
+ generator: false
+ social:
+ - icon: fontawesome/brands/linkedin
+ link: https://in.linkedin.com/company/infocusp
+ - icon: fontawesome/brands/github
+ link: https://github.com/InFoCusp
+ - icon: fontawesome/brands/instagram
+ link: https://instagram.com/infocuspinnovations
+ - icon: fontawesome/brands/x-twitter
+ link: https://twitter.com/_infocusp
+plugins:
+ - search
+ - same-dir
\ No newline at end of file
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..8be8bd5
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,4 @@
+mkdocs>=1.2.2
+mkdocs-material>=7.1.11
+mkdocs-static-i18n>=0.18
+mkdocs-same-dir>=0.1.3
\ No newline at end of file
diff --git a/session_1/README.md b/session_1/README.md
index 939b58e..eeb5e1c 100644
--- a/session_1/README.md
+++ b/session_1/README.md
@@ -1,6 +1,6 @@
-## Session 1 - Emergence, Fundamentals and Landscape of LLMs
+# Session 1 - Emergence, Fundamentals and Landscape of LLMs
-
+
Covers the important building blocks of what we call an LLM today and where they came from, and then we'll dive into the deep universe that has sprung to life around these LLMs.
diff --git a/session_1/part_1_emergence_of_llms/README.md b/session_1/part_1_emergence_of_llms/README.md
index e422242..f79c38a 100644
--- a/session_1/part_1_emergence_of_llms/README.md
+++ b/session_1/part_1_emergence_of_llms/README.md
@@ -1,4 +1,4 @@
-## The Emergence of LLMs
+# The Emergence of LLMs
```mermaid
diff --git a/session_1/part_3_landscape_of_llms/README.md b/session_1/part_3_landscape_of_llms/README.md
index 7687af7..23b2e28 100644
--- a/session_1/part_3_landscape_of_llms/README.md
+++ b/session_1/part_3_landscape_of_llms/README.md
@@ -2,26 +2,17 @@
![LLM Landscape](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-Main.png)
-# Table of Contents
-* [Pretrained LLMs](#-pretrained-llms)
-* [Prompt Engineering](#-prompt-engineering)
-* [Training LLMs](#-training-llms)
-* [Evaluating LLMs](#-evaluating-llms)
-* [LLMs Deployment](#-llms-deployment)
-* [LLMs Inference Optimisation](#-llms-inference-optimisation)
-* [LLMs with Large Context Window](#-llms-with-large-context-window)
-* [Challanges with LLMs](#-challanges-with-llms)
-* [LLM Applications](#-llm-applications)
-* [LLM Application Development](#-llm-application-development)
-* [LLM Courses](#-llm-courses)
-
-## 👉 Pretrained LLMs
-![Pretrained LLMs](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-pretrained-llms.png)
+## Pretrained LLMs
+![Pretrained LLMs](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-pretrained-llms.png)
### Open Source LLMs
+
+
+References
+
- [Can we stop relying on proprietary LLMs to evaluate open LLMs?](https://www.linkedin.com/posts/danielvanstrien_paper-page-prometheus-inducing-fine-grained-activity-7120763227533139969-ML2K?utm_source=share&utm_medium=member_ios)
`Evaluation` `Open LLM` `Proprietary LLM` `GPT-4` `Feedback Collection dataset` `Prometheus model`
@@ -76,20 +67,32 @@
Llama 1 & 2 opened the floodgates of open-source LLMs. MistralAI released the most powerful 7B base LLM, remotely inspired by the success of Llama 2. HuggingFace H4 released Zephyr, trained on a mix of publicly available, synthetic datasets using DPO. TsinghuaNLP released UltraChat, a large-scale, multi-round dialogue dataset. OpenBMB released UltraFeedback, a large-scale, fine-grained, diverse preference dataset for RLHF and DPO. The HuggingFace H4 team fine-tuned Zephyr using UltraChat (supervised fine-tuning) and UltraFeedback (DPO for alignment). ArgillaIO fixed some data issues and improved on Zephyr to release Notus-7B.
-## 👉 Prompt Engineering
+
+
+## Prompt Engineering
![Prompt Engineering](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-prompt-engineering.png)
+
+
+References
+
- [Prompt Engineering Guide](https://www.promptingguide.ai/)
`Prompt Engineering`
Prompt engineering is a relatively new discipline for developing and optimizing prompts to efficiently use language models (LMs) for a wide variety of applications and research topics. Prompt engineering skills help to better understand the capabilities and limitations of large language models (LLMs).
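
  To make this concrete, here is a minimal few-shot prompt template in Python; the sentiment task, labels, and examples are purely illustrative:

  ```python
  # A minimal few-shot prompt: a few worked examples steer the model
  # toward the desired output format before the real query is appended.
  # The sentiment-classification task and reviews are illustrative only.
  FEW_SHOT_PROMPT = """Classify the sentiment of each review as Positive or Negative.

  Review: The battery lasts all day and the screen is gorgeous.
  Sentiment: Positive

  Review: It stopped working after a week.
  Sentiment: Negative

  Review: {review}
  Sentiment:"""

  def build_prompt(review: str) -> str:
      """Fill the user's review into the few-shot template."""
      return FEW_SHOT_PROMPT.format(review=review)

  print(build_prompt("Setup was painless and support was friendly."))
  ```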
-## 👉 Training LLMs
+
+
+## Training LLMs
![Training LLMs](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-training.png)
+
+
+References
+
- [Efficient Deep Learning Optimization Libraries for Large Language Model Training](https://www.linkedin.com/posts/ashishpatel2604_datascience-machinelearning-artificialintelligence-activity-7082215572150587392-SPN-?utm_source=share&utm_medium=member_ios)
`DeepSpeed` `Megatron-DeepSpeed` `FairScale` `Megatron-LM` `Colossal-AI` `BMTrain` `Mesh TensorFlow` `max text` `Alpa` `GPT-NeoX`
@@ -125,10 +128,15 @@
`language models` `reinforcement learning` `direct preference optimization`
The paper "Direct Preference Optimization: Your Language Model is Secretly a Reward Model" introduces a novel algorithm that gets rid of the two stages of RL, namely - fitting a reward model, and training a policy to optimize the reward via sampling. This new algorithm, called Direct Preference Optimization (DPO), trains the LLM using a new loss function which encourages it to increase the likelihood of the better completion and decrease the likelihood of the worse completion. DPO has been shown to achieve comparable performance to RL-based methods, but is much simpler to implement and scale.
-
+
+
### Supervised Finetuning
+
+
+References
+
- [Fine-tuning Llama-2 on your own data](https://www.linkedin.com/posts/alphasignal_llama-2-can-now-be-fine-tuned-on-your-activity-7116422223191576576-ZPb5?utm_source=share&utm_medium=member_ios)
`LLM` `Fine-tuning` `Natural Language Processing`
@@ -195,11 +203,16 @@
This blog explores alternatives to Reinforcement Learning from Human Feedback (RLHF) for fine-tuning large language models. The alternatives discussed include supervised fine-tuning and direct preference optimization. The blog also provides a hands-on guide to preparing human preference data and using the Transformers Reinforcement Learning library to fine-tune a large language model using direct preference optimization.
+
-## 👉 Evaluating LLMs
+## Evaluating LLMs
![Evaluating LLMs](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-evaluating-llms.png)
+
+
+References
+
- [LMFlow Benchmark: An Automatic Evaluation Framework for Open-Source LLMs](https://optimalscale.github.io/LMFlow/blogs/benchmark.html)
`LLM Evaluation` `Chatbot Arena` `GPT-4` `LMFlow Benchmark`
@@ -242,11 +255,18 @@
Recommender systems output a ranking list of items. Hit ratio, MRR, Precision, Recall, MAP, NDCG are commonly used metrics to evaluate the performance of recommender systems.
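
  As a quick illustration, two of these ranking metrics computed from scratch on toy relevance labels (the labels are made up for the example):

  ```python
  import math

  def mrr(ranked_relevance):
      """Mean reciprocal rank: 1/position of the first relevant item, averaged over queries."""
      scores = []
      for rels in ranked_relevance:
          rank = next((i + 1 for i, r in enumerate(rels) if r > 0), None)
          scores.append(1.0 / rank if rank else 0.0)
      return sum(scores) / len(scores)

  def ndcg_at_k(rels, k):
      """NDCG@k: DCG of the ranking divided by the DCG of the ideal ranking."""
      dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
      ideal = sum(r / math.log2(i + 2)
                  for i, r in enumerate(sorted(rels, reverse=True)[:k]))
      return dcg / ideal if ideal > 0 else 0.0

  # Toy relevance labels for two queries (1 = relevant, 0 = not).
  print(mrr([[0, 1, 0], [1, 0, 0]]))   # 0.75
  print(ndcg_at_k([0, 1, 0, 1], k=3))  # ~0.39
  ```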
+
+
-## 👉 LLMs Deployment
+## LLMs Deployment
![LLMs Deployment](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-deployment.png)
+
+
+References
+
+
- [Model Serving Frameworks for 2023](https://www.linkedin.com/posts/aboniasojasingarayar_llm-llmops-mlops-activity-7117777649896210432-IA5B?utm_source=share&utm_medium=member_ios)
`Model Serving` `AI` `Machine Learning` `MLOps`
@@ -295,9 +315,14 @@
LoRAX is a new kind of LLM inference solution designed to make it cost effective and scalable to serve many fine-tuned models in production at once, conserving precious GPUs by dynamically exchanging in and out fine-tuned LoRA models within a single LLM deployment.
+
### Running LLMs Locally
+
+
+References
+
- [Run Large Language Models on Your CPU with Llama.cpp](https://pub.towardsai.net/high-speed-inference-with-llama-cpp-and-vicuna-on-cpu-136d28e7887b)
`LLM` `Inference` `CPU` `GPU` `ChatGPT` `Vicuna` `GPT4ALL` `Alpaca` `ggml`
@@ -345,23 +370,35 @@
`GPU` `CPU` `Linux` `Windows` `Mac` `Llama 2` `gradio UI` `Generative Agents/Apps`
This project enables users to run any Llama 2 model locally with a gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). It uses `llama2-wrapper` as the local llama2 backend for Generative Agents/Apps.
-
+
+
### Semantic Cache for LLMs
+
+
+References
+
+
- [GPTCache: Semantic Cache for LLMs](https://github.com/zilliztech/GPTCache/tree/main)
`LLM` `Semantic Caching` `LangChain` `Llama Index`
GPTCache is a semantic cache for LLMs that helps reduce the cost and latency of LLM API calls. It uses embedding algorithms to convert queries into embeddings and uses a vector store for similarity search on these embeddings. This allows GPTCache to identify and retrieve similar or related queries from the cache storage, thereby increasing cache hit probability and enhancing overall caching efficiency.
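
  The core mechanism can be sketched in a few lines; the class below is a toy illustration of the idea, not GPTCache's actual API:

  ```python
  import numpy as np

  class SemanticCache:
      """Toy semantic cache: serve a stored answer when a new query's
      embedding is close enough to a previously cached one."""

      def __init__(self, embed_fn, threshold=0.9):
          self.embed_fn = embed_fn    # maps text -> 1-D numpy vector
          self.threshold = threshold  # cosine-similarity cutoff (illustrative)
          self.entries = []           # list of (unit embedding, answer)

      def lookup(self, query):
          q = self.embed_fn(query)
          q = q / np.linalg.norm(q)
          for emb, answer in self.entries:
              if float(q @ emb) >= self.threshold:
                  return answer       # cache hit: skip the LLM call
          return None                 # cache miss: call the LLM, then store()

      def store(self, query, answer):
          e = self.embed_fn(query)
          self.entries.append((e / np.linalg.norm(e), answer))
  ```

  A production cache like GPTCache replaces the linear scan with a vector store, but the hit/miss logic is the same.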
+
-## 👉 LLMs Inference Optimisation
+## LLMs Inference Optimisation
![LLMs Inference Optimisation](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-quantisation.png)
### LLM Quantization
+
+
+References
+
+
- **[BitNet: Scaling 1-bit Transformers for Large Language Models](https://arxiv.org/abs/2310.11453)**
`Transformers` `Quantization` `LLM`
@@ -404,10 +441,16 @@
Cerebras Systems has released a cleaned and de-duplicated version of the RedPajama Dataset, reducing its size by 49%. Additionally, RedPajama has released a model family of 7B size, including chat, instruction fine-tuned, and base models. The instruction fine-tuned model shows promising performance on the HELM benchmark.
-## 👉 LLMs with Large Context Window
+
+
+## LLMs with Large Context Window
![LLMs with Large Context Window](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-large-context.png)
+
+
+References
+
- [How to use 100K context window in LLMs](https://blog.gopenai.com/how-to-speed-up-llms-and-use-100k-context-window-all-tricks-in-one-place-ffd40577b4c?gi=01371942e829)
`LLM Training` `Model Size` `Attention Mechanisms`
@@ -419,12 +462,17 @@
`LLM` `NLP` `Machine Learning` `Artificial Intelligence`
XGen is a new state-of-the-art 7B LLM with standard dense attention on up to 8K sequence length. It achieves comparable or better results than other open-source LLMs of similar model size on standard NLP benchmarks. XGen also shows benefits on long sequence modeling benchmarks and achieves great results on both text and code tasks.
-
-## 👉 Challanges with LLMs
+
+
+## Challenges with LLMs
![Challenges with LLMs](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-challanges.png)
+
+
+References
+
- [Challenges in Building LLM Applications for Production](https://home.mlops.community/public/videos/building-llm-applications-for-production?utm_campaign=LLM%20II%20%231&utm_content=LLM%20in%20production%20keynotes%20are%20out%21&utm_medium=email&utm_source=ActiveCampaign)
`Consistency` `Hallucinations` `Privacy` `Context Length` `Data Drift` `Model Updates and Compatibility` `LM on the Edge` `Model Size` `Non-English Languages` `Chat vs. Search as an Interface` `Data Bottleneck` `Hype Cycles and the Importance of Data`
@@ -443,9 +491,14 @@
    Reusing pre-trained language models without careful consideration can lead to negative impacts on downstream tasks due to issues such as over-training, under-training, or over-parameterization. WeightWatcher is an open-source diagnostic tool that can be used to analyze DNNs without access to training or test data, helping to identify potential issues before deployment.
+
### Large vs Small Language Models
+
+
+References
+
- [Small language models can outperform LLMs in specific domains](https://www.linkedin.com/posts/sebastien-bubeck-6b558a1a5_textbooks-are-all-you-need-activity-7077091292077330433-w2vZ?utm_source=share&utm_medium=member_ios)
`LLM` `NLP` `Machine Learning`
@@ -458,32 +511,46 @@
The author discusses the recent trend of focusing on model sizes in the field of LLMs and argues that data quality is often overlooked. They cite the example of phi-1, a 1.3B parameter Transformer-based model by Microsoft, which achieved surprisingly good results. The author concludes that we should pay more attention to data quality when developing LLMs.
+
-## 👉 LLM Applications
+## LLM Applications
![LLM Applications](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-applications.png)
-
### LLMs for Translation
+
+
+References
+
- [ParroT: Enhancing and Regulating Translation Abilities in Chatbots with Open-Source LLMs](https://www.linkedin.com/posts/ricky-costa-nlp_github-wxjiaoparrot-the-parrot-framework-activity-7095024621321662464-5HWn?utm_source=share&utm_medium=member_ios)
`LLM` `Translation` `Chatbots` `ParroT`
The ParroT framework enhances and regulates the translation abilities of chatbots by leveraging open-source LLMs and human-written translation and evaluation data.
+
### LLMs For Mobile App Developers
+
+
+References
+
- [Hugging Face releases tools for Swift developers to incorporate language models in their apps](https://www.linkedin.com/posts/sahar-mor_a-big-step-forward-for-on-device-llms-activity-7095782887811125248-H5f2?utm_source=share&utm_medium=member_ios)
`Hugging Face` `Swift` `transformers` `Core ML` `Llama` `Falcon`
Hugging Face has released a package and tools to help Swift developers incorporate language models in their apps, including swift-transformers, swift-chat, transformers-to-coreml, and ready-to-use LLMs such as Llama 2 7B and Falcon 7B.
+
### LLM Assistants
+
+
+References
+
- [Comparing coding assistants](https://www.linkedin.com/posts/kalyanksnlp_llms-opensource-nlproc-activity-7077128568891207680-umhc?utm_source=share&utm_medium=member_ios)
`Rust` `coding assistants` `best practices`
@@ -520,9 +587,14 @@
The article provides a comprehensive overview of LLM agents, including their capabilities, limitations, and potential applications. It also discusses the challenges involved in developing and deploying LLM agents, and the ethical considerations that need to be taken into account.
+
### Retrieval Augmented Generation
+
+
+References
+
- [RAG & Enterprise: A Match Made in Heaven](https://www.linkedin.com/posts/prithivirajdamodaran_usecase-retrieval-augmented-generation-for-activity-7076430925122691072-nYMj?utm_source=share&utm_medium=member_ios)
`RAG` `LLM` `Enterprise Search` `Information Retrieval`
@@ -709,9 +781,14 @@
This article discusses 8 key considerations for building production-grade LLM apps over your data. These considerations include using different chunks for retrieval and synthesis, using embeddings that live in a different latent space than the raw text, dynamically loading/updating the data, designing the pipeline for scalability, storing data in a hierarchical fashion, using robust data pipelines, and using hybrid search for entity lookup.
+
### Embeddings for Retrieval
+
+
+References
+
- [How to use Aleph Alpha's semantic embeddings](https://python.langchain.com/docs/modules/data_connection/text_embedding/integrations/aleph_alpha)
`embeddings` `semantic embeddings` `Aleph Alpha`
@@ -730,40 +807,61 @@
This article introduces a new way to generate sentence embeddings using LLM. The method is based on the HuggingFaceEmbeddings integration, which allows users to use SentenceTransformers embeddings directly. The article also provides an example of how to use the new method.
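
  For reference, the direct SentenceTransformers path the article refers to looks like this; the model checkpoint chosen here is a common default, not necessarily the article's:

  ```python
  from sentence_transformers import SentenceTransformer

  # all-MiniLM-L6-v2 is a small, widely used embedding model; any
  # SentenceTransformers checkpoint can be substituted here.
  model = SentenceTransformer("all-MiniLM-L6-v2")
  embeddings = model.encode([
      "LLMs can generate sentence embeddings.",
      "Vector stores index embeddings for retrieval.",
  ])
  print(embeddings.shape)  # (2, 384) for this model
  ```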
+
### Evaluating RAGs
+
+
+References
+
- [Ragas: A Framework for Evaluating Retrieval Augmented Generation Pipelines](https://github.com/explodinggradients/ragas)
`LLM` `RAG` `NLP` `Machine Learning`
Ragas is a framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines. It provides you with the tools based on the latest research for evaluating LLM-generated text to give you insights about your RAG pipeline. Ragas can be integrated with your CI/CD to provide continuous checks to ensure performance.
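
  A sketch of what a Ragas evaluation run can look like; the column names and metric imports follow the project's early releases and may differ across versions, and the metrics assume an LLM judge (e.g., an OpenAI key) is configured:

  ```python
  from datasets import Dataset
  from ragas import evaluate
  from ragas.metrics import faithfulness, answer_relevancy

  # One toy RAG trace: the question asked, the contexts retrieved,
  # and the answer the pipeline produced.
  data = Dataset.from_dict({
      "question": ["What does DPO stand for?"],
      "contexts": [["DPO stands for Direct Preference Optimization."]],
      "answer": ["DPO stands for Direct Preference Optimization."],
  })

  # Ragas metrics are themselves LLM-judged, so an LLM backend must be set up.
  scores = evaluate(data, metrics=[faithfulness, answer_relevancy])
  print(scores)
  ```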
-
+
+
### Integrating LLMs with Knowledge Graphs
+
+
+References
+
- [LLMs and Knowledge Graphs](https://www.linkedin.com/posts/llamaindex_google-colaboratory-activity-7101315253833011200-zEOX?utm_source=share&utm_medium=member_ios)
`Knowledge Graphs` `LLMs` `RAG` `Vector Databases` `ChromaDB`
This article discusses the advantages and disadvantages of using Knowledge Graphs (KGs) with LLMs. It also provides a link to a Colab notebook and a video tutorial on the topic.
+
### LLM Watermarking
+
+
+References
+
- [AI generated text? New research shows watermark removal is harder than one thinks!](https://www.linkedin.com/posts/srijankr_ai-activity-7076757082720403456-eVjm?utm_source=share&utm_medium=member_ios)
`LLM` `Watermarking` `Text Generation` `AI Ethics`
Researchers from the University of Maryland have found that it is much harder to remove watermarks from AI-generated text than previously thought. This has implications for the use of watermarks to detect machine-generated content, such as spam and harmful content.
-## 👉 LLM Application Development
+
+
+## LLM Application Development
![LLM Application Development](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-app-development.png)
### LLM SaaS Apps
+
+
+References
+
- [Introducing LLM Studio: A Powerful Platform for Building and Deploying Language Models](https://www.rungalileo.io/blog/announcing-llm-studio)
`LLM` `NLP` `Machine Learning`
@@ -775,14 +873,21 @@
`LLM` `Open Source` `Search Engine`
Verba is an open-source LLM-based search engine that supports a broad spectrum of open-source libraries and custom features. It is easy to install and use, and it does not require users to give away any of their data.
-
-## 👉 LLM Courses
+
+
+## LLM Courses
![LLM Courses](./../../images/session_1/part_3_landscape_of_llms/Large%20Language%20Models-courses.png)
+
+
+References
+
- [LLM course recommendations](https://www.linkedin.com/posts/manishsgupta_%3F%3F%3F%3F%3F-%3F%3F%3F%3F%3F%3F-%3F%3F-%3F%3F%3F%3F%3F%3F-activity-7085475843833016320-rvZ9?utm_source=share&utm_medium=member_ios)
`LLM` `NLP` `AI`
- The article recommends some short courses on LLM. The author also recommends some YouTube channels and videos on LLM.
\ No newline at end of file
+ The article recommends some short courses on LLMs. The author also recommends some YouTube channels and videos on LLMs.
+
+
\ No newline at end of file
diff --git a/stylesheets/extra.css b/stylesheets/extra.css
new file mode 100644
index 0000000..858b297
--- /dev/null
+++ b/stylesheets/extra.css
@@ -0,0 +1,3 @@
+.md-grid {
+ max-width: 1520px;
+}
\ No newline at end of file