Updates mainly to Scenario 5 docs (#20)
* Updates to docs

* Updates to docs

* Updates to docs

* Updates to scenario 5 docs

* Updates to scenario docs
tnscorcoran authored Jun 24, 2024
1 parent 666c222 commit 8329995
Showing 2 changed files with 51 additions and 6 deletions.
4 changes: 2 additions & 2 deletions data/hackathon/scenario4.mdx
@@ -50,7 +50,7 @@ Your newly imported image should be visible in the `Image selection` dropdown wi
TODO Add screenshots

## 4.5 - Check your work
- If your ACME Financial Services custom image `acme-workbench-ai-custom` is available in the dropdownplease post a screenshot of it to #event-anz-ai-hackathon with the message:
+ If your ACME Financial Services custom image `acme-workbench-ai-custom` is available in the dropdown, please post a screenshot of it to #event-anz-ai-hackathon with the message:

Please review [team name] solution for exercise 4.

@@ -60,7 +60,7 @@ This exercise is worth 25 points. The event team will reply in slack to confirm
TODO PULL all Check your work sections from https://github.com/jmhbnz/workshops/blob/main/data/workshop/scenario1.mdx


- HINTS
+ # HINTS
[4.2.1] TODO - move the contents of this repo to the final github location:
Solution via Helm Chart is available here: https://github.com/butler54/rhoai-custom-image
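A hedged sketch of using that Helm chart (assuming the chart sits at the repository root; the release name `acme-custom-image` is a placeholder):

```shell
# Clone the repo containing the Helm chart (chart location is an assumption)
git clone https://github.com/butler54/rhoai-custom-image
cd rhoai-custom-image

# Install the chart into the current cluster context; release name is a placeholder
helm install acme-custom-image .
```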

53 changes: 49 additions & 4 deletions data/hackathon/scenario5.mdx
@@ -8,10 +8,55 @@ authors: ['default']
summary: "Let's show integration between AI and Intelligent Application workloads, and the efficiencies achieved by using one platform for both"
---

- The ACME Financial Services team are underway with the OpenShift AI Proof of Concept and are now in postition to integrate the AI model with a client applicaiton that uses the model for inference.
- The ACME team is using more traditional `Predictive AI` use case that uses an object detection model to determine what object appears in side a webpage that uses a webcam to detect certain objects.
+ The ACME Financial Services team are underway with the OpenShift AI Proof of Concept and are now in position to integrate the AI model with a client application that uses the model for inference.
+ The ACME team is using a more traditional `Predictive AI` use case: an object detection model that determines what objects appear inside a webpage that uses a webcam to detect certain objects.

Their Data Science team have already deployed such a model, in a similar fashion to the model server you deployed earlier.

Your challenge is to integrate their web application with the AI model. You will need to a) deploy it, and b) configure it to talk to the model server.

## 5.1 - Deploy Application
- Your first task
+ Your first task is to deploy the web application inside a new OpenShift project to house this web app.

## 5.1.1 - Deploy Application helper
Before your challenge proper, there is a helper deployment that you will require. The first thing you need to do is deploy it. Here it is:
https://raw.githubusercontent.com/rh-aiservices-bu/mad_m6_workshop/main/deployment/pre_post_processor_deployment.yaml
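A minimal sketch of deploying that helper with the `oc` CLI (the project name `acme-webapp` is a placeholder; adjust to your environment):

```shell
# Create a project to house the web app and its helper (name is a placeholder)
oc new-project acme-webapp

# Deploy the pre/post-processor helper straight from the raw manifest URL
oc apply -f https://raw.githubusercontent.com/rh-aiservices-bu/mad_m6_workshop/main/deployment/pre_post_processor_deployment.yaml

# Watch its pods until they are Running before deploying the web app
oc get pods -w
```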

## 5.1.2 - Deploy the application
Here are some data points you will require:
- the container image for the web app is here: quay.io/rh-aiservices-bu/mad-m6-workshop-intelligent-application:latest
- These Environment variables should be set:
- OBJECT_DETECTION_URL: The model server host with path `/predictions` appended
- DISPLAY_BOX: true
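As a sketch, the container spec of the web app's Deployment would carry those variables like this (the service hostname is an assumption; substitute the model server host from your own project):

```yaml
# Hypothetical env section of the web app container spec
env:
  - name: OBJECT_DETECTION_URL
    # model server host (placeholder) with /predictions appended
    value: "http://object-detection-service.acme-webapp.svc.cluster.local:8080/predictions"
  - name: DISPLAY_BOX
    value: "true"
```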


Documentation you may find helpful:
- https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/building_applications/creating-applications#odc-deploying-container-image_odc-creating-applications-using-developer-perspective
- https://docs.openshift.com/container-platform/3.11/dev_guide/environment_variables.html


## 5.2 - Test Application
Once you have deployed your application, open its Route in the Developer > Topology view.
You will need to allow webcam access to the application. Go ahead and take a photo. It should detect any of these:
- bottles
- caps/hats
- t-shirts

The app should draw a bounding box around any of these - or indeed around anything else you place in front of the webcam and snap a shot.


## 5.3 - Check your work
If your ACME Financial Services web app has successfully made an inference call to the Object Detection AI model server, it should have drawn a bounding box on the screen.
Please post a screenshot of it to #event-anz-ai-hackathon with the message:

Please review [team name] solution for exercise 5.

This exercise is worth 25 points. The event team will reply in Slack to confirm your updated team total score.

# HINTS
- [5.1.2]: The actual web app yaml (already with the configuration to talk to the model server) is available here: https://raw.githubusercontent.com/rh-aiservices-bu/mad_m6_workshop/main/deployment/intelligent_application_deployment.yaml
