
Upgrade to docs gen 0.2.0 + changes due to API changes
Signed-off-by: David Kornel <[email protected]>
kornys committed Jul 3, 2024
1 parent bf7636e commit 9dde6fb
Showing 13 changed files with 406 additions and 16 deletions.
43 changes: 43 additions & 0 deletions docs/io.odh.test.e2e.standard.DataScienceClusterST.md
@@ -0,0 +1,43 @@
# DataScienceClusterST

**Description:** Verifies a simple ODH setup by spinning up the operator and setting up a DSCI and a DSC.

**Before tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Deploy Pipelines Operator | Pipelines operator is available on the cluster |
| 2. | Deploy ServiceMesh Operator | ServiceMesh operator is available on the cluster |
| 3. | Deploy Serverless Operator | Serverless operator is available on the cluster |
| 4. | Install ODH operator | Operator is up and running and is able to serve its operands |

**After tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Delete ODH operator and all created resources | Operator and all other created resources are removed |

**Labels:**

* `smoke` (description file doesn't exist)

<hr style="border:1px solid">

## createDataScienceCluster

**Description:** Creates a default DSCI and DSC and verifies that the operator configures everything properly and sets the status of the resources correctly.

**Contact:** `David Kornel <[email protected]>`

**Steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Create default DSCI | DSCI is created and ready |
| 2. | Create default DSC | DSC is created and ready |
| 3. | Check that DSC has expected states for all components | DSC status is set properly based on configuration |
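For orientation, the default resources created in steps 1–2 can be sketched as minimal manifests like the following. Field values here are illustrative assumptions, not the exact defaults the test suite applies:

```yaml
# Hypothetical minimal DSCI + DSC; actual defaults come from the test suite.
apiVersion: dscinitialization.opendatahub.io/v1
kind: DSCInitialization
metadata:
  name: default-dsci
spec:
  applicationsNamespace: opendatahub
---
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    dashboard:
      managementState: Managed
    workbenches:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed
```

The test then asserts that each component listed in the DSC spec is reflected in the DSC status conditions.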

**Labels:**

* `smoke` (description file doesn't exist)

58 changes: 58 additions & 0 deletions docs/io.odh.test.e2e.standard.DistributedST.md
@@ -0,0 +1,58 @@
# DistributedST

**Description:** Verifies a simple ODH setup for distributed workloads by spinning up the operator and setting up a DSCI and a DSC.

**Before tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Deploy Pipelines Operator | Pipelines operator is available on the cluster |
| 2. | Deploy ServiceMesh Operator | ServiceMesh operator is available on the cluster |
| 3. | Deploy Serverless Operator | Serverless operator is available on the cluster |
| 4. | Install ODH operator | Operator is up and running and is able to serve its operands |
| 5. | Deploy DSCI | DSCI is created and ready |
| 6. | Deploy DSC | DSC is created and ready |

**After tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Delete ODH operator and all created resources | Operator and all other created resources are removed |

<hr style="border:1px solid">

## testDistributedWorkloadWithAppWrapper

**Description:** Check that a user can create, run, and delete a RayCluster through a Codeflare AppWrapper from a DataScience project

**Contact:** `Jiri Danek <[email protected]>`

**Steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Create namespace for AppWrapper with proper name, labels and annotations | Namespace is created |
| 2. | Create AppWrapper for RayCluster using Codeflare-generated yaml | AppWrapper instance has been created |
| 3. | Wait for Ray dashboard endpoint to come up | Ray dashboard service is backed by running pods |
| 4. | Deploy workload through the route | The workload execution has been successful |
| 5. | Delete the AppWrapper | The AppWrapper has been deleted |


## testDistributedWorkloadWithKueue

**Description:** Check that a user can create, run, and delete a Codeflare RayCluster backed by Kueue from a DataScience project

**Contact:** `Jiri Danek <[email protected]>`

**Steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Create OAuth token | OAuth token has been created |
| 2. | Create namespace for RayCluster with proper name, labels and annotations | Namespace is created |
| 3. | Create required Kueue custom resource instances | Kueue queues have been created |
| 4. | Create RayCluster using Codeflare-generated yaml | RayCluster instance has been created |
| 5. | Wait for Ray dashboard endpoint to come up | Ray dashboard service is backed by running pods |
| 6. | Deploy workload through the route | The workload execution has been successful |
| 7. | Delete the RayCluster | The RayCluster has been deleted |
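The Kueue custom resources from step 3 typically form a ResourceFlavor → ClusterQueue → LocalQueue chain. A rough sketch, with names and quotas as assumptions rather than the values used by the suite:

```yaml
# Illustrative Kueue queue setup; names and quotas are assumptions.
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: default-flavor
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue
spec:
  namespaceSelector: {}  # admit workloads from all namespaces
  resourceGroups:
  - coveredResources: ["cpu", "memory"]
    flavors:
    - name: default-flavor
      resources:
      - name: "cpu"
        nominalQuota: 4
      - name: "memory"
        nominalQuota: 8Gi
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: local-queue
  namespace: test-raycluster   # hypothetical test namespace
spec:
  clusterQueue: cluster-queue
```

The RayCluster created in step 4 is then admitted through the LocalQueue in its namespace.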

42 changes: 42 additions & 0 deletions docs/io.odh.test.e2e.standard.ModelServingST.md
@@ -0,0 +1,42 @@
# ModelServingST

**Description:** Verifies a simple ODH setup for model serving by spinning up the operator and setting up a DSCI and a DSC.

**Before tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Deploy Pipelines Operator | Pipelines operator is available on the cluster |
| 2. | Deploy ServiceMesh Operator | ServiceMesh operator is available on the cluster |
| 3. | Deploy Serverless Operator | Serverless operator is available on the cluster |
| 4. | Install ODH operator | Operator is up and running and is able to serve its operands |
| 5. | Deploy DSCI | DSCI is created and ready |
| 6. | Deploy DSC | DSC is created and ready |

**After tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Delete ODH operator and all created resources | Operator and all other created resources are removed |

<hr style="border:1px solid">

## testMultiModelServerInference

**Description:** Check that a user can create a multi-model serving server, run inference against it, and delete it from a DataScience project

**Contact:** `Jiri Danek <[email protected]>`

**Steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Create namespace for ServingRuntime application with proper name, labels and annotations | Namespace is created |
| 2. | Create a serving runtime using the processModelServerTemplate method | Serving runtime instance has been created |
| 3. | Create a placeholder secret (it must exist even though it contains no useful information) | Secret has been created |
| 4. | Create an inference service | Inference service has been created |
| 5. | Perform model inference through the route | The model inference execution has been successful |
| 6. | Delete the Inference Service | The Inference service has been deleted |
| 7. | Delete the secret | The secret has been deleted |
| 8. | Delete the serving runtime | The serving runtime has been deleted |
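The inference service from step 4 could look roughly like the ModelMesh-style sketch below. The model format, runtime name, and storage details are assumptions for illustration, not the manifest the test generates:

```yaml
# Hypothetical ModelMesh-mode InferenceService; model and storage details are assumptions.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-onnx-model
  namespace: test-model-serving      # hypothetical test namespace
  annotations:
    serving.kserve.io/deploymentMode: ModelMesh
spec:
  predictor:
    model:
      modelFormat:
        name: onnx
      runtime: example-serving-runtime   # created in step 2
      storage:
        key: example-storage-secret      # the placeholder secret from step 3
        path: models/example.onnx
```

Inference in step 5 is then exercised over the exposed route against this service's prediction endpoint.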

38 changes: 38 additions & 0 deletions docs/io.odh.test.e2e.standard.NotebookST.md
@@ -0,0 +1,38 @@
# NotebookST

**Description:** Verifies deployment of Notebooks via the GitOps approach

**Before tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Deploy Pipelines Operator | Pipelines operator is available on the cluster |
| 2. | Deploy ServiceMesh Operator | ServiceMesh operator is available on the cluster |
| 3. | Deploy Serverless Operator | Serverless operator is available on the cluster |
| 4. | Install ODH operator | Operator is up and running and is able to serve its operands |
| 5. | Deploy DSCI | DSCI is created and ready |
| 6. | Deploy DSC | DSC is created and ready |

**After tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Delete ODH operator and all created resources | Operator and all other created resources are removed |

<hr style="border:1px solid">

## testCreateSimpleNotebook

**Description:** Create a simple Notebook with all needed resources and verify that the Operator creates it properly

**Contact:** `Jakub Stejskal <[email protected]>`

**Steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Create namespace for Notebook resources with proper name, labels and annotations | Namespace is created |
| 2. | Create PVC with proper labels and data for Notebook | PVC is created |
| 3. | Create Notebook resource with Jupyter Minimal image in pre-defined namespace | Notebook resource is created |
| 4. | Wait for Notebook pods readiness | Notebook pods are up and running, Notebook is in ready state |
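Steps 2–3 amount to a PVC plus a Kubeflow `Notebook` resource referencing it. A minimal sketch, where the image tag, namespace, and names are assumptions rather than the exact values used by the suite:

```yaml
# Illustrative Notebook resource; image, names, and namespace are assumptions.
apiVersion: kubeflow.org/v1
kind: Notebook
metadata:
  name: minimal-notebook
  namespace: test-odh-notebook     # hypothetical test namespace
spec:
  template:
    spec:
      containers:
      - name: minimal-notebook
        image: quay.io/opendatahub/workbench-images:jupyter-minimal  # assumed Jupyter Minimal image ref
        volumeMounts:
        - name: workspace
          mountPath: /opt/app-root/src
      volumes:
      - name: workspace
        persistentVolumeClaim:
          claimName: minimal-notebook-pvc   # the PVC from step 2
```

Step 4 then waits until the pods backing this resource report ready.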

44 changes: 44 additions & 0 deletions docs/io.odh.test.e2e.standard.PipelineServerST.md
@@ -0,0 +1,44 @@
# PipelineServerST

**Description:** Verifies a simple ODH setup by spinning up the operator and setting up a DSCI and a DSC.

**Before tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Deploy Pipelines Operator | Pipelines operator is available on the cluster |
| 2. | Deploy ServiceMesh Operator | ServiceMesh operator is available on the cluster |
| 3. | Deploy Serverless Operator | Serverless operator is available on the cluster |
| 4. | Install ODH operator | Operator is up and running and is able to serve its operands |
| 5. | Deploy DSCI | DSCI is created and ready |
| 6. | Deploy DSC | DSC is created and ready |

**After tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Delete ODH operator and all created resources | Operator and all other created resources are removed |

<hr style="border:1px solid">

## testUserCanCreateRunAndDeleteADSPipelineFromDSProject

**Description:** Check that a user can create, run, and delete a DataSciencePipeline from a DataScience project

**Contact:** `Jiri Danek <[email protected]>`

**Steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Create namespace for DataSciencePipelines application with proper name, labels and annotations | Namespace is created |
| 2. | Create Minio secret with proper data for accessing S3 | Secret is created |
| 3. | Create DataSciencePipelinesApplication with configuration for new Minio instance and new MariaDB instance | DataSciencePipelinesApplication resource is created |
| 4. | Wait for DataSciencePipelines server readiness | DSP API endpoint is available and returns proper data |
| 5. | Import pipeline to a pipeline server via API | Pipeline is imported |
| 6. | List imported pipeline via API | Server returns a list with the imported pipeline info |
| 7. | Trigger pipeline run for imported pipeline | Pipeline is triggered |
| 8. | Wait for pipeline success | Pipeline succeeded |
| 9. | Delete pipeline run | Pipeline run is deleted |
| 10. | Delete pipeline | Pipeline is deleted |
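The DataSciencePipelinesApplication from step 3 can be sketched roughly as below; the field names follow the DSPA CRD, but the concrete values (names, secret keys, namespace) are assumptions for illustration:

```yaml
# Rough DSPA sketch with in-cluster Minio and MariaDB; values are assumptions.
apiVersion: datasciencepipelinesapplications.opendatahub.io/v1alpha1
kind: DataSciencePipelinesApplication
metadata:
  name: pipelines-definition
  namespace: test-pipelines          # hypothetical test namespace
spec:
  database:
    mariaDB:
      deploy: true                   # new MariaDB instance (step 3)
  objectStorage:
    minio:
      deploy: true                   # new Minio instance (step 3)
      image: quay.io/minio/minio     # assumed image reference
      s3CredentialsSecret:
        secretName: minio-secret     # the S3 secret from step 2
        accessKey: AWS_ACCESS_KEY_ID
        secretKey: AWS_SECRET_ACCESS_KEY
```

Once the DSPA reports ready, steps 5–10 drive the pipeline lifecycle purely through the DSP REST API behind the exposed route.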

44 changes: 44 additions & 0 deletions docs/io.odh.test.e2e.standard.PipelineV2ServerST.md
@@ -0,0 +1,44 @@
# PipelineV2ServerST

**Description:** Verifies a simple ODH setup by spinning up the operator and setting up a DSCI and a DSC.

**Before tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Deploy Pipelines Operator | Pipelines operator is available on the cluster |
| 2. | Deploy ServiceMesh Operator | ServiceMesh operator is available on the cluster |
| 3. | Deploy Serverless Operator | Serverless operator is available on the cluster |
| 4. | Install ODH operator | Operator is up and running and is able to serve its operands |
| 5. | Deploy DSCI | DSCI is created and ready |
| 6. | Deploy DSC | DSC is created and ready |

**After tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Delete ODH operator and all created resources | Operator and all other created resources are removed |

<hr style="border:1px solid">

## testUserCanOperateDSv2PipelineFromDSProject

**Description:** Check that a user can create, run, and delete a DataSciencePipeline from a DataScience project

**Contact:** `Jiri Danek <[email protected]>`

**Steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Create namespace for DataSciencePipelines application with proper name, labels and annotations | Namespace is created |
| 2. | Create Minio secret with proper data for accessing S3 | Secret is created |
| 3. | Create DataSciencePipelinesApplication with configuration for new Minio instance and new MariaDB instance | DataSciencePipelinesApplication resource is created |
| 4. | Wait for DataSciencePipelines server readiness | DSP API endpoint is available and returns proper data |
| 5. | Import pipeline to a pipeline server via API | Pipeline is imported |
| 6. | List imported pipeline via API | Server returns a list with the imported pipeline info |
| 7. | Trigger pipeline run for imported pipeline | Pipeline is triggered |
| 8. | Wait for pipeline success | Pipeline succeeded |
| 9. | Delete pipeline run | Pipeline run is deleted |
| 10. | Delete pipeline | Pipeline is deleted |

33 changes: 33 additions & 0 deletions docs/io.odh.test.e2e.standard.UninstallST.md
@@ -0,0 +1,33 @@
# UninstallST

**Description:** Verifies that the uninstall process removes all resources created by the ODH installation

**Before tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Deploy Pipelines Operator | Pipelines operator is available on the cluster |
| 2. | Deploy ServiceMesh Operator | ServiceMesh operator is available on the cluster |
| 3. | Deploy Serverless Operator | Serverless operator is available on the cluster |
| 4. | Install ODH operator | Operator is up and running and is able to serve its operands |
| 5. | Deploy DSCI | DSCI is created and ready |
| 6. | Deploy DSC | DSC is created and ready |

<hr style="border:1px solid">

## testUninstallSimpleScenario

**Description:** Check that the uninstall process removes all resources created by the ODH installation

**Contact:** `Jan Stourac <[email protected]>`

**Steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Create uninstall configmap | ConfigMap exists |
| 2. | Wait for controllers namespace deletion | Controllers namespace is deleted |
| 3. | Check that relevant resources are deleted (Subscription, InstallPlan, CSV) | All relevant resources are deleted |
| 4. | Check that all related namespaces are deleted (monitoring, notebooks, controllers) | All related namespaces are deleted |
| 5. | Remove Operator namespace | Operator namespace is deleted |
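The uninstall ConfigMap from step 1 is the trigger the operator watches for. A sketch of what it could look like; the label key and namespace here follow the operator's uninstall convention as an assumption, not a verified manifest from this suite:

```yaml
# Assumed uninstall-trigger ConfigMap; label key and namespace are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: delete-odh
  namespace: opendatahub-operator-system          # assumed operator namespace
  labels:
    api.openshift.com/addon-managed-odh-delete: "true"   # assumed trigger label
```

Steps 2–4 then observe the operator tearing down its namespaces and OLM resources in response.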

47 changes: 47 additions & 0 deletions docs/io.odh.test.e2e.upgrade.BundleUpgradeST.md
@@ -0,0 +1,47 @@
# BundleUpgradeST

**Description:** Verifies the upgrade path from the previously released version to the latest available build. Operator installation and upgrade are done via a bundle of YAML files.

**Before tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Deploy Pipelines Operator | Pipelines operator is available on the cluster |
| 2. | Deploy ServiceMesh Operator | ServiceMesh operator is available on the cluster |
| 3. | Deploy Serverless Operator | Serverless operator is available on the cluster |

**After tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Delete all ODH-related resources in the cluster | All ODH-related resources are gone |

**Labels:**

* `bundle-upgrade` (description file doesn't exist)

<hr style="border:1px solid">

## testUpgradeBundle

**Description:** Creates a default DSCI and DSC and verifies that the operator configures everything properly and sets the status of the resources correctly.

**Contact:** `David Kornel <[email protected]>`

**Steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Install operator via bundle of yaml files with specific version | Operator is up and running |
| 2. | Deploy DSC (see UpgradeAbstract for more info) | DSC is created and ready |
| 3. | Deploy Notebook to namespace test-odh-notebook-upgrade | All related pods are up and running. Notebook is in ready state. |
| 4. | Apply latest YAML files with the latest Operator version | YAML files are applied |
| 5. | Wait for RollingUpdate of Operator pod to a new version | Operator update is finished and pod is up and running |
| 6. | Verify that Dashboard pods are stable for 2 minutes | Dashboard pods are stable for 2 minutes after upgrade |
| 7. | Verify that Notebook pods are stable for 2 minutes | Notebook pods are stable for 2 minutes after upgrade |
| 8. | Check that ODH operator doesn't contain any error logs | ODH operator log is error free |

**Labels:**

* `bundle-upgrade` (description file doesn't exist)

41 changes: 41 additions & 0 deletions docs/io.odh.test.e2e.upgrade.OlmUpgradeST.md
@@ -0,0 +1,41 @@
# OlmUpgradeST

**Description:** Verifies the upgrade path from the previously released version to the latest available build. Operator installation and upgrade are done via OLM.

**Before tests execution steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Deploy Pipelines Operator | Pipelines operator is available on the cluster |
| 2. | Deploy ServiceMesh Operator | ServiceMesh operator is available on the cluster |
| 3. | Deploy Serverless Operator | Serverless operator is available on the cluster |

**Labels:**

* `olm-upgrade` (description file doesn't exist)

<hr style="border:1px solid">

## testUpgradeOlm

**Description:** Creates a default DSCI and DSC and verifies that the operator configures everything properly and sets the status of the resources correctly.

**Contact:** `Jakub Stejskal <[email protected]>`

**Steps:**

| Step | Action | Result |
| - | - | - |
| 1. | Install operator via OLM with manual approval and specific version | Operator is up and running |
| 2. | Deploy DSC (see UpgradeAbstract for more info) | DSC is created and ready |
| 3. | Deploy Notebook to namespace test-odh-notebook-upgrade | All related pods are up and running. Notebook is in ready state. |
| 4. | Approve install plan for new version | Install plan is approved |
| 5. | Wait for RollingUpdate of Operator pod to a new version | Operator update is finished and pod is up and running |
| 6. | Verify that Dashboard pods are stable for 2 minutes | Dashboard pods are stable for 2 minutes after upgrade |
| 7. | Verify that Notebook pods are stable for 2 minutes | Notebook pods are stable for 2 minutes after upgrade |
| 8. | Check that ODH operator doesn't contain any error logs | ODH operator log is error free |
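The OLM installation with manual approval in step 1 corresponds to a Subscription along the lines of the sketch below; channel, catalog source, and the starting CSV are placeholder assumptions, not the values pinned by the test:

```yaml
# Illustrative OLM Subscription with manual install-plan approval; values are assumptions.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: opendatahub-operator
  namespace: openshift-operators
spec:
  channel: fast                        # assumed channel
  name: opendatahub-operator
  source: community-operators          # assumed catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual          # step 4 approves the generated InstallPlan
  startingCSV: opendatahub-operator.v2.x.y   # placeholder for the pinned older version
```

With `installPlanApproval: Manual`, OLM generates but does not execute the upgrade InstallPlan, which is what lets step 4 control exactly when the rolling update in step 5 begins.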

**Labels:**

* `olm-upgrade` (description file doesn't exist)

