From 1a6183cd65dd7c029da0930940708d65ed4858e4 Mon Sep 17 00:00:00 2001 From: Andreas Spanner Date: Mon, 1 Jul 2024 15:19:49 +1000 Subject: [PATCH 01/10] Update scenario6.mdx --- data/hackathon/scenario6.mdx | 32 +++++++++++++++++--------------- 1 file changed, 17 insertions(+), 15 deletions(-) diff --git a/data/hackathon/scenario6.mdx b/data/hackathon/scenario6.mdx index b2fdda9..0230be1 100644 --- a/data/hackathon/scenario6.mdx +++ b/data/hackathon/scenario6.mdx @@ -8,10 +8,10 @@ authors: ['default'] summary: "Demonstrate how Red Hat enables non-data-science users to `instruction-tune` AI models, using some simple RHEL-AI and Instruct Lab tooling" --- -The ACME Financial Services team is on the GenAI hype train and gaining mommentum. They did a lot of experimentation, finetuning, RAG, prompt engineering, but they just found that the hallucinations increased the more finetuning they do, and even the most well engineered prompts would not be a 100% guarantee of the GenAI model not hallucinating. +While the ACME Financial Services (ACME FS) team is evaluating OpenShift AI they admit that they are on the GenAI hype train and that train is gaining momentum in the ACME executive ranks. They did a lot of experimentation, finetuning, RAG, prompt engineering, but they just found that the hallucinations increased the more finetuning they do, and even the most well engineered prompts would not be a 100% guarantee of the GenAI model not hallucinating. So the announcement of Instruct LAB is pretty much what they've been looking for. -Your challenge is to +Your challenge is to help ACME by showcasing the following things: 1) Setup the InstructLab environment - use the showroom UI if you feel a need for speed. 
2) Chat with the model (student model) and see what it knows about itself (InstructLab) 3) Add new knowledge @@ -25,10 +25,10 @@ Your challenge is to Documentation you may find helpful is: - https://github.com/instructlab/instructlab -- https://shonpaz.medium.com/rewiring-the-way-we-think-on-ai-part-1-model-fine-tuning-using-instructlab-ebba7017e5d5 -- In case you want to build your RHEL AI image yourself later - still dev preview: https://github.com/RedHatOfficial/rhelai-dev-preview +- FYI - In case you want to build your RHEL AI image yourself later, have a look at the dev preview approach here: https://github.com/RedHatOfficial/rhelai-dev-preview ## 6.1 - Setting Up +The ACMEFS team wants to start with a true blue through-and-through open source approach. This means a model that's avaialable as open source and which was trained on open source data and hence they want to use the Granite model familiy. For that your job as the supporting Red Hatter is is to help the team show how to download a model and serve it. - Go to demo.redhat.com and order your teams' InstructLab RHEL VM (Nvidia/CUDA) environment - Use the virtual python environment if you want GPU acceleration - Install the instruct lab command line tooling @@ -36,19 +36,23 @@ Documentation you may find helpful is: - Serve the Model ## 6.2 - Chat and test knowledge +The team now wants to first try and test-drive the newly downloaded model by chatting with it to get a feel for it. - Chat with the model and test its knowledge about Instruct Lab If you find the answers somewhat peculiar, your mission is to fix that - should you accept it. And no, this message will not self-destruct. Should you be happy with the answer you can select a different knowledge area to improve. ## 6.3 - Add new knowledge +The real reason organisations want to use AI is because they can encode institutional knowledge that leads to either competitive advtange or reducing cost by supporting internal processes. 
With finetuning this is hard to accomplish as models seem to struggle with connecting new and unknown content with existing pre-trained content. That's why model alignment is such a gamechanger. It allows to encode new organisational knowledge into an AI model, the same way RDMBS databases allowed data or 'knowledge' to be connected and related to via foreign key constraints. Now you show how this is done with InstructLab by adding new knowledge: - Utilise the existing InstructLab taxonomy on your image - Add new knowledge (the InstructLab example knowledge file is at ~/files/qna.yaml) - Optional, but recommended: Verify that the taxonomy tree is A-OK. ## 6.4 - Generate synthetic data +Once you are confident that your taxonomy tree is ok (you might want to show that command to the ACMEFS team) you are now showing how synthetic data is generated to ACME: - Generate new synthetic data with a teacher model - Discuss: Does synthetic data generation need a model being served? Why/Why not? ## 6.5 - Verify expected outcomes +You now prove to the ACME FS team that there was indeed synthetic data generated: - Verify the synthetic data generation via the critic model output - Discuss: Does the critic model _need_ to be a different model compared to the student or teacher model? - Create a screenshot showing the files generated via the generate phase and the discarded data from the critic model and post it into the slack channel. (not so) Fun Fact: The upstream version of ilab doesn't actually use a critic model at the moment, it just checks for format errors. RHEL AI will use a critic model. ## 6.6 - Train the student model +Now as you have shown that new data is available you use this data to train (or align) the existing granite model with the new data - Does training require a model being served? Why or Why not?
-- Decide if you want hardware acceleration, let iLab know, then train the model +- Decide if you want hardware acceleration, let iLab know when kicking off to train the model - Discuss: When would you / should you use quantisation? -## 6.7 - Chat & verify newly added knowledge -- Chat with the newly trained model and verify if it has the additional knowledge you added. -- Create a screenshot and post it in the slack channel. +The ACME FS team has experimented with quantisation in the past and knows how it works, so there is no need to show that now; they are more interested in seeing the new data being baked into the new version of the granite model. +## 6.7 - Chat & verify newly added knowledge +- Chat with the newly trained model and check if it has the additional knowledge you added. +- Create a screenshot of the model served as well as the model prompt and output and post it in the slack channel. -## 1.5 - Hints! -The first hint is free: In scenario 6, you will need to provision 15 minutes time for synthetic data generation as well as 20 minutes for model training. You might want to make this part of your strategy to win. +## 6.8 - Hints! +The first hint is free: In scenario 6, you will need to provision 15 minutes time for synthetic data generation as well as 20 minutes for model training. You might want to make this part of your strategy while presenting to ACME FS by using a smart mix of slides and demo. +# HINTS If you get stuck on a question, fear not, perhaps try a different approach. If you have tried everything you can think of and are still stuck you can unlock a hint for `5` points by posting a message in the `#event-anz-ocp-ai-hackathon` channel with the message: > [team name] are stuck on [exercise] and are unlocking a hint. A hackathon organiser will join your breakout room to share the hint with you 🤫.
-TODO Tom - move this to a Google Doc -# HINTS -- [6.1]: If you get stuck, have a closer look at: https://shonpaz.medium.com/rewiring-the-way-we-think-on-ai-part-1-model-fine-tuning-using-instructlab-ebba7017e5d5 - - From 6f39e98535c7d727533fd2c00997183464f41981 Mon Sep 17 00:00:00 2001 From: Andreas Spanner Date: Mon, 1 Jul 2024 15:35:17 +1000 Subject: [PATCH 02/10] Update scenario6.mdx --- data/hackathon/scenario6.mdx | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/data/hackathon/scenario6.mdx b/data/hackathon/scenario6.mdx index 0230be1..de5bb8b 100644 --- a/data/hackathon/scenario6.mdx +++ b/data/hackathon/scenario6.mdx @@ -8,7 +8,7 @@ authors: ['default'] summary: "Demonstrate how Red Hat enables non-data-science users to `instruction-tune` AI models, using some simple RHEL-AI and Instruct Lab tooling" --- -While the ACME Financial Services (ACME FS) team is evaluating OpenShift AI they admit that they are on the GenAI hype train and that train is gaining momentum in the ACME executive ranks. They did a lot of experimentation, finetuning, RAG, prompt engineering, but they just found that the hallucinations increased the more finetuning they do, and even the most well engineered prompts would not be a 100% guarantee of the GenAI model not hallucinating. +While the ACME Financial Services (ACME FS) team is evaluating OpenShift AI, they admit that they are on the GenAI hype train and that the train is gaining momentum amongst the ACME executive ranks. They did a lot of experimentation with finetuning, RAG, and prompt engineering, but found that hallucinations increased the more finetuning they did, and that even the most well-engineered prompts were no guarantee against the GenAI model hallucinating. So the announcement of Instruct LAB is pretty much what they've been looking for.
Your challenge is to help ACME by showcasing the following things: @@ -28,26 +28,26 @@ Documentation you may find helpful is: - FYI - In case you want to build your RHEL AI image yourself later, have a look at the dev preview approach here: https://github.com/RedHatOfficial/rhelai-dev-preview ## 6.1 - Setting Up -The ACMEFS team wants to start with a true blue through-and-through open source approach. This means a model that's avaialable as open source and which was trained on open source data and hence they want to use the Granite model familiy. For that your job as the supporting Red Hatter is is to help the team show how to download a model and serve it. +The ACMEFS team wants to start with a true blue through-and-through open source approach. This means a model that's avaialable as open source and which was trained on open source data and hence they want to use the Granite model familiy. For that your job as the supporting Red Hatter is to help the team show how to download a model and serve it. - Go to demo.redhat.com and order your teams' InstructLab RHEL VM (Nvidia/CUDA) environment - Use the virtual python environment if you want GPU acceleration -- Install the instruct lab command line tooling +- Install the InstructLab command line tooling - make sure it's version v0.16.1 - Download the granite model from instructlab/granite-7b-lab-GGUF and name it 'granite-7b-lab-Q4_K_M.gguf' - Serve the Model ## 6.2 - Chat and test knowledge -The team now wants to first try and test-drive the newly downloaded model by chatting with it to get a feel for it. +The team now wants to test-drive the newly downloaded model first, chatting with it to get a feel for a Granite model. - Chat with the model and test its knowledge about Instruct Lab If you find the answers somewhat peculiar, your mission is to fix that - should you accept it. And no, this message will not self-destruct.
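To make the 6.1/6.2 steps above concrete for the team, the flow can be sketched on the command line. This is a hedged sketch assuming the upstream InstructLab 0.16.x CLI; exact commands and flags may differ slightly on the RHEL VM image, so verify each one with `ilab --help` before demoing.

```
# Sketch only -- assumes upstream instructlab 0.16.x; confirm flags on your VM.
python3 -m venv venv && source venv/bin/activate   # virtual python environment
pip install instructlab==0.16.1                    # pin the requested version

ilab init                                          # set up config and local taxonomy
ilab download --repository instructlab/granite-7b-lab-GGUF \
              --filename granite-7b-lab-Q4_K_M.gguf

ilab serve --model-path models/granite-7b-lab-Q4_K_M.gguf   # terminal 1
ilab chat --model models/granite-7b-lab-Q4_K_M.gguf         # terminal 2
```

Serving and chatting in two terminals makes it easy to show the team both the server log and the conversation at once.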
Should you be happy with the answer you can select a different knowledge area to improve. ## 6.3 - Add new knowledge -The real reason organisations want to use AI is because they can encode institutional knowledge that leads to either competitive advtange or reducing cost by supporting internal processes. With finetuning this is hard to accomplish as models seem to struggle with connecting new and unknown content with existing pre-trained content. That's why model alignment is such a gamechanger. It allows to encode new organisational knowledge into an AI model, the same way RDMBS databases allowed data or 'knowledge' to be connected and related to via foreign key constraints. Now you show how this is done with InstructLab by adding new knowledge: +The real reason organisations want to use AI is that they can encode institutional knowledge that leads either to a competitive advantage or to reduced cost by supporting internal processes. With finetuning this is hard to accomplish, as models seem to struggle with connecting new and unknown content to existing pre-trained content. That's why model alignment is such a gamechanger. It allows organisations to encode their knowledge into an AI model, much the way relational databases (RDBMS) allowed data or 'knowledge' to be connected and related via foreign keys back in the day. Now you show how this is done with InstructLab by adding new knowledge to their Granite foundation model: - Utilise the existing InstructLab taxonomy on your image -- Add new knowledge (the InstructLab example knowledge file is at ~/files/qna.yaml) -- Optional, but recommended: Verify that the taxonomy tree is A-OK. +- Add new knowledge (the example file containing InstructLab knowledge is at ~/files/qna.yaml) +- Show ACME how to verify that the taxonomy tree is A-OK before you proceed.
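For the 6.3 step, the shape of a knowledge qna.yaml looks roughly like the sketch below. Field names follow the upstream taxonomy schema of that era, and the `created_by`, repo and commit values are placeholders; the authoritative example for this exercise is the one shipped at ~/files/qna.yaml on your image.

```yaml
# Illustrative only -- compare against ~/files/qna.yaml on your image.
created_by: your-github-username          # placeholder
task_description: Teach the model about InstructLab itself.
seed_examples:
  - question: What is InstructLab?
    answer: |
      InstructLab is an open source project for enhancing large language
      models using the LAB (Large-scale Alignment for chatBots) method.
  - question: What does the ilab CLI do?
    answer: |
      It lets you download, serve, chat with, and retrain models from
      the command line.
document:
  repo: https://github.com/example-org/knowledge-docs   # placeholder
  commit: <commit-sha>                                   # placeholder
  patterns:
    - "*.md"
```

Running `ilab diff` afterwards is one way to show the team that the taxonomy tree still validates before moving on.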
## 6.4 - Generate synthetic data -Once you are confident that your taxonomy tree is ok (you might want to show that command to the ACMEFS team) you are now showing how synthetic data is generated to ACME: +Once you are confident that your taxonomy tree is ok, you now show how synthetic data is generated: - Generate new synthetic data with a teacher model - Discuss: Does synthetic data generation need a model being served? Why/Why not? @@ -60,19 +60,19 @@ You now prove to the ACME FS team that there was indeed synthetic data generated (not so) Fun Fact: The upstream version of ilab doesn't actually use a critic model at the moment, it just checks for format errors. RHEL AI will use a critic model. ## 6.6 - Train the student model -Now as you have shown that new data is available you use this data to train (or align) the existing granite model with the new data -- Does training require a model being served? Why or Why not? -- Decide if you want hardware acceleration, let iLab know when kicking off to train the model +Now that you have shown that new data is available, you use this data to instruction-tune / train (or align) the existing granite model with the new data. +- Discuss: Does training require a model being served? Why or Why not? +- Decide if you want hardware acceleration and let ilab know when kicking off training of the model - Discuss: When would you / should you use quantisation? The ACME FS team has experimented with quantisation in the past and knows how it works, so there is no need to show that now; they are more interested in seeing the new data being baked into the new version of the granite model. ## 6.7 - Chat & verify newly added knowledge - Chat with the newly trained model and check if it has the additional knowledge you added. -- Create a screenshot of the model served as well as the model prompt and output and post it in the slack channel. +- Create a screenshot of the model served as well as the model prompt and answer and post it in the slack channel.
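Under the same upstream 0.16.x assumption, the 6.6/6.7 training and verification steps can be sketched as below; flag spellings vary between releases, so check `ilab train --help` first, and note that the output path of the trained model differs per platform and release.

```
# Hedged sketch -- confirm flags with `ilab train --help` on your VM.

# Align the student model with the newly generated data; pass the device
# explicitly if you decided on hardware acceleration.
ilab train --device cuda

# Then chat with the newly trained model to verify the added knowledge.
ilab chat --model <path-to-newly-trained-model>
```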
## 6.8 - Hints! -The first hint is free: In scenario 6, you will need to provision 15 minutes time for synthetic data generation as well as 20 minutes for model training. You might want to make this part of your strategy while presenting to ACME FS by using a smart mix of slides and demo. +The first hint is free: in scenario 6 you will need to allow 15 minutes for synthetic data generation and 20 minutes for model training. You might want to make this part of your execution strategy. # HINTS If you get stuck on a question, fear not, perhaps try a different approach. If you have tried everything you can think of and are still stuck you can unlock a hint for `5` points by posting a message in the `#event-anz-ocp-ai-hackathon` channel with the message: From 710956a2d6d7661edcad097016afdd164c1700a5 Mon Sep 17 00:00:00 2001 From: Andreas Spanner Date: Mon, 1 Jul 2024 15:35:53 +1000 Subject: [PATCH 03/10] Update scenario6.mdx --- data/hackathon/scenario6.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/data/hackathon/scenario6.mdx b/data/hackathon/scenario6.mdx index de5bb8b..d1067b3 100644 --- a/data/hackathon/scenario6.mdx +++ b/data/hackathon/scenario6.mdx @@ -28,7 +28,7 @@ Documentation you may find helpful is: - FYI - In case you want to build your RHEL AI image yourself later, have a look at the dev preview approach here: https://github.com/RedHatOfficial/rhelai-dev-preview ## 6.1 - Setting Up -The ACMEFS team wants to start with a true blue through-and-through open source approach. 
This means a model that's available as open source and which was trained on open source data and hence they want to use the Granite model family. Your job as the supporting Red Hatter is to show the team how to download a model and serve it. - Go to demo.redhat.com and order your teams' InstructLab RHEL VM (Nvidia/CUDA) environment - Use the virtual python environment if you want GPU acceleration - Install the InstructLab command line tooling - make sure it's version v0.16.1 From 0bd09ee2b4962b66a501cf6aab0a34edb4779f05 Mon Sep 17 00:00:00 2001 From: Andreas Spanner Date: Mon, 1 Jul 2024 16:09:10 +1000 Subject: [PATCH 04/10] Update scenario6.mdx --- data/hackathon/scenario6.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/data/hackathon/scenario6.mdx b/data/hackathon/scenario6.mdx index d1067b3..8ce8368 100644 --- a/data/hackathon/scenario6.mdx +++ b/data/hackathon/scenario6.mdx @@ -53,7 +53,7 @@ Once you are confident that your taxonomy tree is ok you are now showing how syn ## 6.5 - Verify expected outcomes You now prove to the ACME FS team that there was indeed synthetic data generated: -- Verify the synthetic data generation via the critic model output +- Verify the synthetic data generation via the critic model by looking into the newly generated `/home/instruct/instructlab/generated` directory and checking that there are files in it - Discuss: Does the critic model _need_ to be a different model compared to the student or teacher model? - Create a screenshot showing the files generated via the generate phase and the discarded data from the critic model and post it into the slack channel.
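The 6.4/6.5 steps that this patch revolves around can be sketched as follows, again assuming the upstream 0.16.x CLI; the `--num-instructions` value is illustrative, and the generated/ path is the one on the demo VM image.

```
# Hedged sketch -- assumes upstream ilab 0.16.x on the demo VM image.

# Generate synthetic data; this needs the teacher model being served.
ilab generate --num-instructions 100

# Show that data landed on disk: accepted and discarded samples are
# written under the generated/ directory.
ls -l /home/instruct/instructlab/generated
```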
From 57d4c103ddb50b1af3b2f9f72f75e1815e1d3490 Mon Sep 17 00:00:00 2001 From: Andreas Spanner Date: Mon, 1 Jul 2024 22:48:22 +1000 Subject: [PATCH 05/10] Create scenario7.mdx --- data/hackathon/scenario7.mdx | 51 ++++++++++++++++++++++++++++++++++++ 1 file changed, 51 insertions(+) create mode 100644 data/hackathon/scenario7.mdx diff --git a/data/hackathon/scenario7.mdx b/data/hackathon/scenario7.mdx new file mode 100644 index 0000000..98e719d --- /dev/null +++ b/data/hackathon/scenario7.mdx @@ -0,0 +1,51 @@ +--- +title: Accelerate GenAI experimentation by using an open source Enterprise-grade tool and software supply chain to get started +exercise: 7 +date: '2024-06-25' +tags: ['podman desktop','ai lab','genai', 'granite','ai applications'] +draft: false +authors: ['default'] +summary: "Demonstrate Podman Desktops in-built" +--- + +ACME loved the enterprise-grade quality, scale, and flexibility of OpenShift AI, and also the game-changing approach to LLMs through InstructLab. +But all this excitement needs to be funneled and pointed to the right direction with the right tooling. +"Can you help us?" they want to know - "I am glad you asked" is what you are saying. + +In this scenario you are showcasing the best tooling to get started with Enterprise grade as the destination in mind. +1) Get Podman Desktop running on your laptop. +2) Install the AI Lab extension +3) Use the AI Lab extension to build your demo app and get a link to your code +4) Utilize the Recipie catalogue to deploy the Summarizer recipe +5) Change the application code so that it says 'Hello from Team !' +6) Take a PDF document and let the Summarizer app summarize it. +7) Take a screenshot showing that you changed the greeting and that the Summarizer app ran successfully. + +## 7.0 - Be in the Know... 
+Documentation you might find useful +- https://podman-desktop.io/extensions/ai-lab +- https://developers.redhat.com/articles/2024/05/07/podman-ai-lab-getting-started?source=sso#catalog_of_open_source_models + +## 7.1 - Setting Up +The ACMEFS team wants to see how you install Podman Desktop and the AI Lab extension on your laptop +- Demonstrate the installation process or upgrade process of Podman Desktop. + +## 7.2 - Introducing Podman +Explain what you see when going through podman desktop. +- The team wants to know what version of Podman is required to install the AILab extension. +- Show the ACME team how you can view and access containers, images and volumes +- Browse the current models installed and show how you can download a new model. + +## 7.3 - Text Summarizer +Demonstrate the ACME team how you have out of the box recipies you can leverage. +- Use the Summarizer recipe, change the code such that it shows a welcome message with your team name in it. +- Take a screenshot and post into the slack channel. + +# HINTS +If you get stuck on a question, fear not, perhaps try a different approach. If you have tried everything you can think of and are still stuck you can unlock a hint for `5` points by posting a message in the `#event-anz-ocp-ai-hackathon` channel with the message: + +> [team name] are stuck on [exercise] and are unlocking a hint. + +A hackathon organiser will join your breakout room to share the hint with you 🤫. 
From 13ac4db63df82c13976d24f15062213e3b3489e3 Mon Sep 17 00:00:00 2001 From: Andreas Spanner Date: Mon, 1 Jul 2024 22:54:16 +1000 Subject: [PATCH 06/10] Update scenario7.mdx --- data/hackathon/scenario7.mdx | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/data/hackathon/scenario7.mdx b/data/hackathon/scenario7.mdx index 98e719d..1b111b0 100644 --- a/data/hackathon/scenario7.mdx +++ b/data/hackathon/scenario7.mdx @@ -9,14 +9,14 @@ summary: "Demonstrate Podman Desktops in-built" --- ACME loved the enterprise-grade quality, scale, and flexibility of OpenShift AI, and also the game-changing approach to LLMs through InstructLab. -But all this excitement needs to be funneled and pointed to the right direction with the right tooling. +But all this excitement needs to be pointed in the right direction with the right tooling, right from when experimentation starts. "Can you help us?" they want to know - "I am glad you asked" is what you are saying. -In this scenario you are showcasing the best tooling to get started with Enterprise grade as the destination in mind. +In this scenario you showcase the Podman tooling to get started while having Enterprise grade deployments as the destination in mind. 1) Get Podman Desktop running on your laptop. 2) Install the AI Lab extension -3) Use the AI Lab extension to build your demo app and get a link to your code -4) Utilize the Recipie catalogue to deploy the Summarizer recipe +3) Use the AI Lab extension to build your demo app and to access your code +4) Utilize the Recipe catalogue to deploy the Summarizer recipe 5) Change the application code so that it says 'Hello from Team !' 6) Take a PDF document and let the Summarizer app summarize it. 7) Take a screenshot showing that you changed the greeting and that the Summarizer app ran successfully. @@ -37,7 +37,7 @@ Explain what you see when going through podman desktop. 
- Browse the current models installed and show how you can download a new model. ## 7.3 - Text Summarizer -Demonstrate the ACME team how you have out of the box recipies you can leverage. +Demonstrate the ACME team how you have out of the box recipes you can leverage. - Use the Summarizer recipe, change the code such that it shows a welcome message with your team name in it. - Take a screenshot and post into the slack channel. From 84b8191dae8c654586596de59f0940fb9d611309 Mon Sep 17 00:00:00 2001 From: Andreas Spanner Date: Tue, 2 Jul 2024 08:38:00 +1000 Subject: [PATCH 07/10] Update data/hackathon/scenario7.mdx Co-authored-by: James Blair --- data/hackathon/scenario7.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/data/hackathon/scenario7.mdx b/data/hackathon/scenario7.mdx index 1b111b0..27ffddf 100644 --- a/data/hackathon/scenario7.mdx +++ b/data/hackathon/scenario7.mdx @@ -1,7 +1,7 @@ --- title: Accelerate GenAI experimentation by using an open source Enterprise-grade tool and software supply chain to get started exercise: 7 -date: '2024-06-25' +date: '2024-07-02' tags: ['podman desktop','ai lab','genai', 'granite','ai applications'] draft: false authors: ['default'] From 73e87f1af3d8d85cbebb5d7d8b05ab40bc0876ec Mon Sep 17 00:00:00 2001 From: Andreas Spanner Date: Tue, 2 Jul 2024 08:38:20 +1000 Subject: [PATCH 08/10] Update data/hackathon/scenario7.mdx Co-authored-by: James Blair --- data/hackathon/scenario7.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/data/hackathon/scenario7.mdx b/data/hackathon/scenario7.mdx index 27ffddf..04988f8 100644 --- a/data/hackathon/scenario7.mdx +++ b/data/hackathon/scenario7.mdx @@ -17,7 +17,7 @@ In this scenario you showcase the Podman tooling to get started while having Ent 2) Install the AI Lab extension 3) Use the AI Lab extension to build your demo app and to access to your code 4) Utilize the Recipe catalogue to deploy the Summarizer recipe -5) Change the application code so that 
it says 'Hello from Team !' +5) Change the application code so that it says 'Hello from Team [Your team name]!' 6) Take a PDF document and let the Summarizer app summarize it. 7) Take a screenshot showing that you changed the greeting and that the Summarizer app ran successfully. From 8528ceca91411872f98f8a7ae19cdce5428e24ae Mon Sep 17 00:00:00 2001 From: Andreas Spanner Date: Tue, 2 Jul 2024 08:38:34 +1000 Subject: [PATCH 09/10] Update data/hackathon/scenario7.mdx Co-authored-by: James Blair --- data/hackathon/scenario7.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/data/hackathon/scenario7.mdx b/data/hackathon/scenario7.mdx index 04988f8..57ddb27 100644 --- a/data/hackathon/scenario7.mdx +++ b/data/hackathon/scenario7.mdx @@ -1,5 +1,5 @@ --- -title: Accelerate GenAI experimentation by using an open source Enterprise-grade tool and software supply chain to get started +title: Accelerate GenAI experimentation with Podman exercise: 7 date: '2024-07-02' tags: ['podman desktop','ai lab','genai', 'granite','ai applications'] draft: false authors: ['default'] From 83d4f6a4673fd6727f10cc52067b04d6cbfc7779 Mon Sep 17 00:00:00 2001 From: Andreas Spanner Date: Tue, 2 Jul 2024 08:40:51 +1000 Subject: [PATCH 10/10] Update data/hackathon/scenario7.mdx Co-authored-by: James Blair --- data/hackathon/scenario7.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/data/hackathon/scenario7.mdx b/data/hackathon/scenario7.mdx index 57ddb27..f724631 100644 --- a/data/hackathon/scenario7.mdx +++ b/data/hackathon/scenario7.mdx @@ -5,7 +5,7 @@ date: '2024-07-02' tags: ['podman desktop','ai lab','genai', 'granite','ai applications'] draft: false authors: ['default'] -summary: "Demonstrate Podman Desktops in-built" +summary: "Can you pull off a clutch live demo of Podman Desktop's new AI features?" --- ACME loved the enterprise-grade quality, scale, and flexibility of OpenShift AI, and also the game-changing approach to LLMs through InstructLab.
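As a side note for the scenario 7 patches above: the containers, images, and volumes views that Podman Desktop shows in exercise 7.2 map onto plain Podman CLI commands, which can be a handy fallback during a live demo (this assumes a local Podman installation; check the AI Lab extension docs for the exact minimum Podman version it requires).

```
podman version     # also shows whether your Podman is recent enough for AI Lab
podman ps --all    # containers
podman images      # images
podman volume ls   # volumes
```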