Merge pull request #2 from redhat-ai-services/doc_updates
Doc updates
carlmes authored Sep 5, 2024
2 parents c1a0542 + 764cf7f commit 76ef1d2
Showing 10 changed files with 227 additions and 33 deletions.
103 changes: 103 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,103 @@
.idea/
venv/
dictionary.dic
tmp/
*.drawio.bkp

# Created by https://www.toptal.com/developers/gitignore/api/visualstudiocode,macos,windows,linux
# Edit at https://www.toptal.com/developers/gitignore?templates=visualstudiocode,macos,windows,linux

### Linux ###
*~

# temporary files which can be created if a process still has a handle open of a deleted file
.fuse_hidden*

# KDE directory preferences
.directory

# Linux trash folder which might appear on any partition or disk
.Trash-*

# .nfs files are created when an open file is removed but is still being accessed
.nfs*

### macOS ###
# General
.DS_Store
.AppleDouble
.LSOverride

# Icon must end with two \r
Icon


# Thumbnails
._*

# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd
.Spotlight-V100
.TemporaryItems
.Trashes
.VolumeIcon.icns
.com.apple.timemachine.donotpresent

# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk

### macOS Patch ###
# iCloud generated files
*.icloud

### VisualStudioCode ###
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
!.vscode/*.code-snippets

# Local History for Visual Studio Code
.history/

# Built Visual Studio Code Extensions
*.vsix

### VisualStudioCode Patch ###
# Ignore all local history of files
.history
.ionide

### Windows ###
# Windows thumbnail cache files
Thumbs.db
Thumbs.db:encryptable
ehthumbs.db
ehthumbs_vista.db

# Dump file
*.stackdump

# Folder config file
[Dd]esktop.ini

# Recycle Bin used on file shares
$RECYCLE.BIN/

# Windows Installer files
*.cab
*.msi
*.msix
*.msm
*.msp

# Windows shortcuts
*.lnk

# End of https://www.toptal.com/developers/gitignore/api/visualstudiocode,macos,windows,linux
4 changes: 2 additions & 2 deletions content/modules/ROOT/nav.adoc
@@ -1,4 +1,4 @@
* 1. xref:01_welcome.adoc[Welcome!]
* 1. xref:01_welcome.adoc[Welcome and Introduction]
* 2. xref:05_environment_provisioning.adoc[Environment Provisioning]
@@ -12,7 +12,7 @@
* 7. xref:32_model_training_car.adoc[Model Training]
* 8. xref:34_boto3.adoc[Boto3]
* 8. xref:34_using_s3_storage.adoc[Using S3 Storage]
* 9. xref:36_deploy_model.adoc[Deploy Model]
11 changes: 10 additions & 1 deletion content/modules/ROOT/pages/01_welcome.adoc
@@ -33,4 +33,13 @@ It's possible to install all the various components by hand, making notes such a

The source code for the AI Accelerator can be found at: https://github.com/redhat-ai-services/ai-accelerator

The accelerator was created and is currently maintained by the Red Hat AI Services organization; however, contributions from anywhere are always greatly appreciated!
The accelerator was created and is currently maintained by the Red Hat AI Services organization; however, contributions from anywhere are always greatly appreciated!

## Questions for Further Consideration

Additional questions that could be discussed for this topic:

. When would I want to manually install RHOAI and associated components, instead of using the AI Accelerator framework?
. Who maintains the AI Accelerator framework project?
. What if I find a bug, or have a new feature to contribute?
. Can I add my own additional components into the AI Accelerator?
24 changes: 18 additions & 6 deletions content/modules/ROOT/pages/05_environment_provisioning.adoc
@@ -12,7 +12,7 @@ In this section we will order all three clusters at demo.redhat.com. Note that t

The demo clusters typically take 1-2 hours to provision, although they may take a little longer on Monday mornings, as they use AWS spot instances and demand is usually high at the start of the work week. It's therefore suggested to provision them now, and continue with the subsequent sections that don't require cluster access yet.

## Provision a Demo Cluster
## Provision the Demo Cluster

The first cluster, where we will run our demonstration projects, requires a few more resources than the other two, so let's provision it first.

@@ -36,13 +36,25 @@ image::clustersettings_Dev_Prod.png[width=50%]

## While You Wait

The provisioning process will take a while to complete, so why not take some time to check out some of the documentation in the AI Accelerator project that we will be installing once the clusters are ready:
The provisioning process will take a while to complete, so why not take some time to check out some of the documentation in the AI Accelerator project that we will be bootstrapping once the new clusters are ready:

* https://github.com/redhat-ai-services/ai-accelerator[Project Introduction README]
* https://github.com/redhat-ai-services/ai-accelerator/blob/main/documentation/overview.md[AI Accelerator Overview].
* https://github.com/redhat-ai-services/ai-accelerator/blob/main/documentation/installation.md[AI Accelerator Installation Procedure].
* https://github.com/redhat-ai-services/ai-accelerator/tree/main/tenants[Tenants documentation].
* https://github.com/redhat-ai-services/ai-accelerator/blob/main/documentation/overview.md[AI Accelerator Overview]
* https://github.com/redhat-ai-services/ai-accelerator/blob/main/documentation/installation.md[AI Accelerator Installation Procedure]
* https://github.com/redhat-ai-services/ai-accelerator/tree/main/tenants[Tenants documentation]

## When the Cluster is Ready

Once the clusters have been provisioned, you should receive an email containing the cluster URLs as well as an administrative user (such as `kubeadmin`) and password. You can also obtain these from the status dashboard at https://demo.redhat.com[demo.redhat.com] as well as perform administrative functions on your clusters, such as starting/stopping or extending the lifespan if desired.
Once the clusters have been provisioned, you should receive an email containing the cluster URLs as well as an administrative user (such as `kubeadmin`) and password.

You can also obtain these URLs and credentials from your services dashboard at https://demo.redhat.com/[demo.redhat.com]. The dashboard also allows you to perform administrative functions on your clusters, such as starting/stopping or extending the lifespan if desired.

## Questions for Further Consideration

Additional questions that could be discussed for this topic:

. How long can we use the demo.redhat.com OpenShift cluster? When will it get deleted?
. I want to install a demonstration cluster that might last several months for a RHOAI evaluation period. What options are available?
. Can we use our own AWS-based OpenShift cluster, rather than one from demo.redhat.com?
. Could I install this on my own hardware, such as my desktop PC that is running a single node OpenShift cluster?
. The topic of easily repeating an installation, discussed in the following GitOps sections, may also be worth exploring, since it means that work done to configure an environment is not lost if the environment is destroyed.
19 changes: 15 additions & 4 deletions content/modules/ROOT/pages/07_installation.adoc
@@ -21,7 +21,7 @@ Clone (download) the Git repository containing the AI Accelerator, since we will

TIP: If you can't, or prefer not to, run the installation from your local machine (such as in a locked-down corporate environment), you can use the Bastion host instead. This is a Linux virtual machine running on AWS; the SSH login details are provided in the provisioning email you received from demo.redhat.com. Just be aware that the Bastion host is deprovisioned when the cluster is deleted, so be sure to commit and push any changes frequently.

[start=2]
[start=4]
. Git clone the following repository to your local machine. If you're using a fork, then change the repository URL in the command below to match yours:
[.console-input]
[source,adoc]
@@ -33,7 +33,7 @@ git clone https://github.com/redhat-ai-services/ai-accelerator.git

Carefully follow the instructions found in https://github.com/redhat-ai-services/ai-accelerator/blob/main/documentation/installation.md[`documentation/installation.md`], with the following specifics:

[start=3]
[start=5]
. Use the _**Demo**_ cluster credentials when logging into OpenShift
. Select number 3 when prompted:
[.bordershadow]
@@ -44,7 +44,7 @@ This will install all the applications in the bootstrap script and also provide
[.bordershadow]
image::Bootstrap_argo_url.png[]

[start=4]
[start=7]
. Log into the Argo CD link with the OpenShift credentials and wait until everything syncs successfully.
[.bordershadow]
image::Argo_home_screen.png[]
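Each tile on the Argo CD home screen corresponds to an Argo CD `Application` resource that points at a folder in the Git repository. As an illustrative sketch only (the application name, path, and sync options below are assumptions, not copied from the accelerator repo), such a resource looks roughly like this:

```yaml
# Illustrative Argo CD Application -- name and path are hypothetical
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-operator          # hypothetical application name
  namespace: openshift-gitops     # namespace used by OpenShift GitOps
spec:
  project: default
  source:
    repoURL: https://github.com/redhat-ai-services/ai-accelerator.git
    targetRevision: main
    path: components/operators/example-operator  # hypothetical path
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift on the cluster
```

When every such Application reports `Synced` and `Healthy`, the bootstrap has finished reconciling the cluster against Git.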
@@ -70,4 +70,15 @@ oc delete deployment granite-predictor-00001-deployment -n ai-example-single-mod
We will cover the ai-accelerator project overview in a later section.

---
Continue using the _**DEMO**_ cluster for the subsequent exercises.
Continue using the _**DEMO**_ cluster for the subsequent exercises.

## Questions for Further Consideration

Additional questions that could be discussed for this topic:

. What's the difference between "bootstrapping" and "installing" the new OpenShift cluster?
. Why is forking an open source project a good idea?
. How can a project fork be used to contribute back to the parent project with bug fixes, updates and new features?
. Could, or should, the bootstrap shell script be converted to Ansible?
. How does the bootstrap script provision GPU resources in the new OpenShift cluster? Hint: a quick walk through the logic in the source code should be a useful exercise, time permitting.
. Where can I get help if the bootstrap process breaks?
21 changes: 18 additions & 3 deletions content/modules/ROOT/pages/20_ai-accelerator_review.adoc
@@ -1,10 +1,10 @@
# AI-Accelerator Project
# AI Accelerator Project

The AI-accelerator project is an automated way to deploy RHOAI, relevent operators, and examples to a cluster.
The AI Accelerator project is an automated way to deploy RHOAI, relevant operators, and examples to a cluster.

Navigate to the ai-accelerator GitHub project: https://github.com/redhat-ai-services/ai-accelerator

5. The ai-accelerator project installs Open Shift GitOps (ArgoCD) first, then uses ArgoCD to install RHOAI and the related operators. It also deploys examples such as inference services, a workbench, and pipelines.
The ai-accelerator project installs OpenShift GitOps (ArgoCD) first, then uses ArgoCD to install RHOAI and the related operators. It also deploys examples such as inference services, a workbench, and pipelines.

* The list of operators it can install are in the https://github.com/redhat-ai-services/ai-accelerator/tree/main/components/operators[operators] folder.
* Examples of model serving, workbench, and pipelines can be found in the https://github.com/redhat-ai-services/ai-accelerator/tree/main/tenants[tenants] folder.
@@ -16,6 +16,7 @@ This project is set up with ArgoCD and Kustomize in mind. Meaning ArgoCD will ha
If you are unfamiliar with Kustomize, this is a very good tutorial: https://devopscube.com/kustomize-tutorial/[Learn more about Kustomize].

### Overview of Kustomize in AI-Accelerator Project

Let's try to understand how Kustomize is being used to deploy the different resources in the ai-accelerator.

1. When running the _**bootstrap.sh**_ script, it applies the OpenShift GitOps operator by running Kustomize on the https://github.com/redhat-ai-services/ai-accelerator/blob/b90f025691e14d8e8a8d5ff3452107f8a0c8f48d/scripts/bootstrap.sh#L11[GitOps_Overlay] https://github.com/redhat-ai-services/ai-accelerator/tree/b90f025691e14d8e8a8d5ff3452107f8a0c8f48d/components/operators/openshift-gitops/operator/overlays/latest[folder]:
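The overlay folder referenced above follows the standard Kustomize base/overlay layout. As a sketch (the file contents below are assumptions for illustration, not copied from the repo), an overlay's `kustomization.yaml` typically points at a shared base and layers small patches on top:

```yaml
# Illustrative overlays/latest/kustomization.yaml -- paths are hypothetical
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base                 # shared operator manifests (Subscription, OperatorGroup, ...)

patches:
  - path: patch-channel.yaml   # e.g. pin the operator update channel
```

Running `oc apply -k <overlay-path>` renders the base plus patches and applies the result in one step, which is the kind of call the bootstrap script makes.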
@@ -120,3 +121,17 @@ If you are using a disconnected environment, you will need to first setup:
- the registry for the images
- the git repositories, proxies and credentials
====

## References

* Red Hat Blog: https://www.redhat.com/en/blog/your-guide-to-continuous-delivery-with-openshift-gitops-and-kustomize[Your Guide to Continuous Delivery with OpenShift GitOps and Kustomize] - a good article explaining more GitOps concepts
* GitHub: https://github.com/gnunn-gitops/standards/blob/master/folders.md[GitOps Folder Structure] - the original inspiration for the folder structure in the AI Accelerator project
* Red Hat Blog: https://www.redhat.com/en/blog/enterprise-mlops-reference-design[Enterprise MLOps Reference Design] - a conceptual reference design for performing Machine Learning Operations (MLOps)
* Topic: https://www.redhat.com/en/topics/devops/what-is-gitops[What is GitOps?] - 7-minute read on the topic of GitOps

## Questions for Further Consideration

Additional questions that could be discussed for this topic:

. Where can I find a list of curated components that follow the GitOps pattern? Hint: see the https://github.com/redhat-cop/gitops-catalog[GitOps Catalog] GitHub page.
. Wow, this project structure is complicated! Is there a way to simplify the project folder structures? Hint: a good discussion could be had on where we came from and how we got here in terms of project design and layout.
9 changes: 9 additions & 0 deletions content/modules/ROOT/pages/30_gitops_env_setup_dev_prod.adoc
@@ -1,6 +1,7 @@
# Environment Install and Setup: DEV and PROD Cluster

## Parasol-insurance-dev cluster

Follow these steps to complete the install and setup:

* After the cluster is running and ready, log in as the admin
@@ -313,3 +314,11 @@ When running the bootstrap script, select `bootstrap/overlays/parasol-insurance-
====
To check your work, please refer to https://github.com/redhat-ai-services/ai-accelerator-qa/tree/30_gitops_env_setup_prod[This Prod Branch]
====

## Questions for Further Consideration

Additional questions that could be discussed for this topic:

. How familiar are your development teams with CI/CD concepts?
. How do you currently deploy projects to development, QA, and production environments?
. Is ArgoCD new to the team?
11 changes: 11 additions & 0 deletions content/modules/ROOT/pages/31_custom_notebook.adoc
@@ -304,3 +304,14 @@ image::01_custom_workbench.png[Custom workbench]
====
Verify your work against https://github.com/redhat-ai-services/ai-accelerator-qa/pull/new/31_custom_notebook[This custom-workbench branch]
====

## Questions for Further Consideration

Additional questions that could be discussed for this topic:

. How many Python packages are included in your typical data scientist development environment? Are there any packages that are unique to your team?
. How do you handle continuous updates in your development environment, given that AI/ML is an evolving landscape where new packages are released all the time and existing packages are updated very frequently?
. Can data scientists ask for new packages in a securely controlled development environment?
. Where do you store source code for model experimentation and training?
. Do you think that cluster storage (such as an OpenShift PVC) is a good permanent location for source code, so that in the event of failure the source is not lost?
. How do your teams of data scientists collaborate on notebooks when training models or performing other experiments?
4 changes: 2 additions & 2 deletions content/modules/ROOT/pages/32_model_training_car.adoc
@@ -1,8 +1,8 @@
# Model Training with Custom Notebook

## In this module you will sync a git repo and run through a the parasol-insurnace Jupyter notebooks.
In this module you will sync a git repo, and then execute the logic contained in the parasol-insurance Jupyter notebooks.

We will use the custom image we created and uploaded before and we will create a workbench with the customer image we uploaded in module 04.
To perform this task, we will use the custom data science notebook image we created and uploaded in the previous module.

## Steps to create workbench with a custom notebook

