diff --git a/.gitbook.yaml b/.gitbook.yaml
index 8a1dc252feb..24efea93fb6 100644
--- a/.gitbook.yaml
+++ b/.gitbook.yaml
@@ -202,3 +202,66 @@ redirects:
   docs/reference/how-do-i: reference/how-do-i.md
   docs/reference/community-and-content: reference/community-and-content.md
   docs/reference/faq: reference/faq.md
+
+  # The new Manage ZenML Server redirects
+  how-to/advanced-topics/manage-zenml-server/: how-to/manage-zenml-server/README.md
+  how-to/project-setup-and-management/connecting-to-zenml/: how-to/manage-zenml-server/connecting-to-zenml/README.md
+  how-to/project-setup-and-management/connecting-to-zenml/connect-in-with-your-user-interactive: how-to/manage-zenml-server/connecting-to-zenml/connect-in-with-your-user-interactive.md
+  how-to/project-setup-and-management/connecting-to-zenml/connect-with-a-service-account: how-to/manage-zenml-server/connecting-to-zenml/connect-with-a-service-account.md
+  how-to/advanced-topics/manage-zenml-server/upgrade-zenml-server: how-to/manage-zenml-server/upgrade-zenml-server.md
+  how-to/advanced-topics/manage-zenml-server/best-practices-upgrading-zenml: how-to/manage-zenml-server/best-practices-upgrading-zenml.md
+  how-to/advanced-topics/manage-zenml-server/using-zenml-server-in-prod: how-to/manage-zenml-server/using-zenml-server-in-prod.md
+  how-to/advanced-topics/manage-zenml-server/troubleshoot-your-deployed-server: how-to/manage-zenml-server/troubleshoot-your-deployed-server.md
+  how-to/advanced-topics/manage-zenml-server/migration-guide/migration-guide: how-to/manage-zenml-server/migration-guide/migration-guide.md
+  how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-twenty: how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md
+  how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-thirty: how-to/manage-zenml-server/migration-guide/migration-zero-thirty.md
+  how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-forty: how-to/manage-zenml-server/migration-guide/migration-zero-forty.md
+  how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-sixty: how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md
+
+  how-to/project-setup-and-management/setting-up-a-project-repository/using-project-templates: how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md
+  how-to/project-setup-and-management/setting-up-a-project-repository/create-your-own-template: how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md
+  how-to/project-setup-and-management/setting-up-a-project-repository/shared-components-for-teams: how-to/project-setup-and-management/collaborate-with-team/shared-components-for-teams.md
+  how-to/project-setup-and-management/setting-up-a-project-repository/stacks-pipelines-models: how-to/project-setup-and-management/collaborate-with-team/stacks-pipelines-models.md
+  how-to/project-setup-and-management/setting-up-a-project-repository/access-management: how-to/project-setup-and-management/collaborate-with-team/access-management.md
+  how-to/interact-with-secrets: how-to/project-setup-and-management/interact-with-secrets.md
+
+  how-to/project-setup-and-management/develop-locally/: how-to/pipeline-development/develop-locally/README.md
+  how-to/project-setup-and-management/develop-locally/local-prod-pipeline-variants: how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md
+  how-to/project-setup-and-management/develop-locally/keep-your-dashboard-server-clean: how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md
+
+  how-to/advanced-topics/training-with-gpus/: how-to/pipeline-development/training-with-gpus/README.md
+  how-to/advanced-topics/training-with-gpus/accelerate-distributed-training: how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md
+
+  how-to/advanced-topics/run-remote-notebooks/: how-to/pipeline-development/run-remote-notebooks/README.md
+  how-to/advanced-topics/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells: how-to/pipeline-development/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md
+  how-to/advanced-topics/run-remote-notebooks/run-a-single-step-from-a-notebook: how-to/pipeline-development/run-remote-notebooks/run-a-single-step-from-a-notebook.md
+
+  how-to/infrastructure-deployment/configure-python-environments/: how-to/pipeline-development/configure-python-environments/README.md
+  how-to/infrastructure-deployment/configure-python-environments/handling-dependencies: how-to/pipeline-development/configure-python-environments/handling-dependencies.md
+  how-to/infrastructure-deployment/configure-python-environments/configure-the-server-environment: how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md
+
+  how-to/infrastructure-deployment/customize-docker-builds/: how-to/customize-docker-builds/README.md
+  how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-pipeline: how-to/customize-docker-builds/docker-settings-on-a-pipeline.md
+  how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-step: how-to/customize-docker-builds/docker-settings-on-a-step.md
+  how-to/infrastructure-deployment/customize-docker-builds/use-a-prebuilt-image: how-to/customize-docker-builds/use-a-prebuilt-image.md
+  how-to/infrastructure-deployment/customize-docker-builds/specify-pip-dependencies-and-apt-packages: how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md
+  how-to/infrastructure-deployment/customize-docker-builds/how-to-use-a-private-pypi-repository: how-to/customize-docker-builds/how-to-use-a-private-pypi-repository.md
+  how-to/infrastructure-deployment/customize-docker-builds/use-your-own-docker-files: how-to/customize-docker-builds/use-your-own-docker-files.md
+  how-to/infrastructure-deployment/customize-docker-builds/which-files-are-built-into-the-image: how-to/customize-docker-builds/which-files-are-built-into-the-image.md
+  how-to/infrastructure-deployment/customize-docker-builds/how-to-reuse-builds: how-to/customize-docker-builds/how-to-reuse-builds.md
+  how-to/infrastructure-deployment/customize-docker-builds/define-where-an-image-is-built: how-to/customize-docker-builds/define-where-an-image-is-built.md
+
+  how-to/data-artifact-management/handle-data-artifacts/datasets: how-to/data-artifact-management/complex-usecases/datasets.md
+  how-to/data-artifact-management/handle-data-artifacts/manage-big-data: how-to/data-artifact-management/complex-usecases/manage-big-data.md
+  how-to/data-artifact-management/handle-data-artifacts/unmaterialized-artifacts: how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md
+  how-to/data-artifact-management/handle-data-artifacts/passing-artifacts-between-pipelines: how-to/data-artifact-management/complex-usecases/passing-artifacts-between-pipelines.md
+  how-to/data-artifact-management/handle-data-artifacts/registering-existing-data: how-to/data-artifact-management/complex-usecases/registering-existing-data.md
+
+  how-to/advanced-topics/control-logging/: how-to/control-logging/README.md
+  how-to/advanced-topics/control-logging/view-logs-on-the-dasbhoard: how-to/control-logging/view-logs-on-the-dasbhoard.md
+  how-to/advanced-topics/control-logging/enable-or-disable-logs-storing: how-to/control-logging/enable-or-disable-logs-storing.md
+  how-to/advanced-topics/control-logging/set-logging-verbosity: how-to/control-logging/set-logging-verbosity.md
+  how-to/advanced-topics/control-logging/disable-rich-traceback: how-to/control-logging/disable-rich-traceback.md
+  how-to/advanced-topics/control-logging/disable-colorful-logging: how-to/control-logging/disable-colorful-logging.md
+
+
\ No newline at end of file
diff --git a/docs/book/component-guide/data-validators/deepchecks.md b/docs/book/component-guide/data-validators/deepchecks.md
index b24d827f0b5..cab1d0c2663 100644
--- a/docs/book/component-guide/data-validators/deepchecks.md
+++ b/docs/book/component-guide/data-validators/deepchecks.md
@@ -78,7 +78,7 @@ RUN apt-get update
 RUN apt-get install ffmpeg libsm6 libxext6 -y
 ```
 
-Then, place the following snippet above your pipeline definition. Note that the path of the `dockerfile` are relative to where the pipeline definition file is. Read [the containerization guide](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) for more details:
+Then, place the following snippet above your pipeline definition. Note that the path of the `dockerfile` is relative to where the pipeline definition file is. Read [the containerization guide](../../how-to/customize-docker-builds/README.md) for more details:
 
 ```python
 import zenml
diff --git a/docs/book/component-guide/experiment-trackers/mlflow.md b/docs/book/component-guide/experiment-trackers/mlflow.md
index b41cffe90c5..9f480648a56 100644
--- a/docs/book/component-guide/experiment-trackers/mlflow.md
+++ b/docs/book/component-guide/experiment-trackers/mlflow.md
@@ -82,7 +82,7 @@ zenml stack register custom_stack -e mlflow_experiment_tracker ... --set
 {% endtab %}
 
 {% tab title="ZenML Secret (Recommended)" %}
-This method requires you to [configure a ZenML secret](../../how-to/interact-with-secrets.md) to store the MLflow tracking service credentials securely.
+This method requires you to [configure a ZenML secret](../../how-to/project-setup-and-management/interact-with-secrets.md) to store the MLflow tracking service credentials securely.
 
 You can create the secret using the `zenml secret create` command:
 
@@ -106,7 +106,7 @@ zenml experiment-tracker register mlflow \
 ```
 
 {% hint style="info" %}
-Read more about [ZenML Secrets](../../how-to/interact-with-secrets.md) in the ZenML documentation.
+Read more about [ZenML Secrets](../../how-to/project-setup-and-management/interact-with-secrets.md) in the ZenML documentation.
 {% endhint %}
 {% endtab %}
 {% endtabs %}
diff --git a/docs/book/component-guide/experiment-trackers/neptune.md b/docs/book/component-guide/experiment-trackers/neptune.md
index 68cf15eb097..c999ccabe16 100644
--- a/docs/book/component-guide/experiment-trackers/neptune.md
+++ b/docs/book/component-guide/experiment-trackers/neptune.md
@@ -37,7 +37,7 @@ You need to configure the following credentials for authentication to Neptune:
 
 {% tabs %}
 {% tab title="ZenML Secret (Recommended)" %}
-This method requires you to [configure a ZenML secret](../../how-to/interact-with-secrets.md) to store the Neptune tracking service credentials securely.
+This method requires you to [configure a ZenML secret](../../how-to/project-setup-and-management/interact-with-secrets.md) to store the Neptune tracking service credentials securely.
 
 You can create the secret using the `zenml secret create` command:
 
@@ -61,7 +61,7 @@ zenml stack register neptune_stack -e neptune_experiment_tracker ... --set
 ```
 
 {% hint style="info" %}
-Read more about [ZenML Secrets](../../how-to/interact-with-secrets.md) in the ZenML documentation.
+Read more about [ZenML Secrets](../../how-to/project-setup-and-management/interact-with-secrets.md) in the ZenML documentation.
 {% endhint %}
 
 {% endtab %}
diff --git a/docs/book/component-guide/experiment-trackers/wandb.md b/docs/book/component-guide/experiment-trackers/wandb.md
index ee19b7c0492..1f0bbbfd32e 100644
--- a/docs/book/component-guide/experiment-trackers/wandb.md
+++ b/docs/book/component-guide/experiment-trackers/wandb.md
@@ -55,7 +55,7 @@ zenml stack register custom_stack -e wandb_experiment_tracker ... --set
 {% endtab %}
 
 {% tab title="ZenML Secret (Recommended)" %}
-This method requires you to [configure a ZenML secret](../../how-to/interact-with-secrets.md) to store the Weights & Biases tracking service credentials securely.
+This method requires you to [configure a ZenML secret](../../how-to/project-setup-and-management/interact-with-secrets.md) to store the Weights & Biases tracking service credentials securely.
 
 You can create the secret using the `zenml secret create` command:
 
@@ -79,7 +79,7 @@ zenml experiment-tracker register wandb_tracker \
 ```
 
 {% hint style="info" %}
-Read more about [ZenML Secrets](../../how-to/interact-with-secrets.md) in the ZenML documentation.
+Read more about [ZenML Secrets](../../how-to/project-setup-and-management/interact-with-secrets.md) in the ZenML documentation.
 {% endhint %}
 {% endtab %}
 {% endtabs %}
diff --git a/docs/book/component-guide/image-builders/gcp.md b/docs/book/component-guide/image-builders/gcp.md
index 32b87042893..00d9ec937a3 100644
--- a/docs/book/component-guide/image-builders/gcp.md
+++ b/docs/book/component-guide/image-builders/gcp.md
@@ -185,7 +185,7 @@ zenml stack register -i ... --set
 
 As described in this [Google Cloud Build documentation page](https://cloud.google.com/build/docs/build-config-file-schema#network), Google Cloud Build uses containers to execute the build steps which are automatically attached to a network called `cloudbuild` that provides some Application Default Credentials (ADC), that allow the container to be authenticated and therefore use other GCP services.
 
-By default, the GCP Image Builder is executing the build command of the ZenML Pipeline Docker image with the option `--network=cloudbuild`, so the ADC provided by the `cloudbuild` network can also be used in the build. This is useful if you want to install a private dependency from a GCP Artifact Registry, but you will also need to use a [custom base parent image](../../how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-pipeline.md) with the [`keyrings.google-artifactregistry-auth`](https://pypi.org/project/keyrings.google-artifactregistry-auth/) installed, so `pip` can connect and authenticate in the private artifact registry to download the dependency.
+By default, the GCP Image Builder executes the build command of the ZenML Pipeline Docker image with the option `--network=cloudbuild`, so the ADC provided by the `cloudbuild` network can also be used in the build. This is useful if you want to install a private dependency from a GCP Artifact Registry, but you will also need to use a [custom base parent image](../../how-to/customize-docker-builds/docker-settings-on-a-pipeline.md) with [`keyrings.google-artifactregistry-auth`](https://pypi.org/project/keyrings.google-artifactregistry-auth/) installed, so `pip` can connect and authenticate to the private artifact registry to download the dependency.
 
 ```dockerfile
 FROM zenmldocker/zenml:latest
diff --git a/docs/book/component-guide/image-builders/kaniko.md b/docs/book/component-guide/image-builders/kaniko.md
index 20f0227370e..c9c15553b7c 100644
--- a/docs/book/component-guide/image-builders/kaniko.md
+++ b/docs/book/component-guide/image-builders/kaniko.md
@@ -50,7 +50,7 @@ For more information and a full list of configurable attributes of the Kaniko im
 The Kaniko image builder will create a Kubernetes pod that is running the build. This build pod needs to be able to pull from/push to certain container registries, and depending on the stack component configuration also needs to be able to read from the artifact store:
 
 * The pod needs to be authenticated to push to the container registry in your active stack.
-* In case the [parent image](../../how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-pipeline.md#using-a-custom-parent-image) you use in your `DockerSettings` is stored in a private registry, the pod needs to be authenticated to pull from this registry.
+* In case the [parent image](../../how-to/customize-docker-builds/docker-settings-on-a-pipeline.md#using-a-custom-parent-image) you use in your `DockerSettings` is stored in a private registry, the pod needs to be authenticated to pull from this registry.
 * If you configured your image builder to store the build context in the artifact store, the pod needs to be authenticated to read files from the artifact store storage.
 
 ZenML is not yet able to handle setting all of the credentials of the various combinations of container registries and artifact stores on the Kaniko build pod, which is why you're required to set this up yourself for now. The following section outlines how to handle it in the most straightforward (and probably also most common) scenario, when the Kubernetes cluster you're using for the Kaniko build is hosted on the same cloud provider as your container registry (and potentially the artifact store). For all other cases, check out the [official Kaniko repository](https://github.com/GoogleContainerTools/kaniko) for more information.
diff --git a/docs/book/component-guide/model-deployers/seldon.md b/docs/book/component-guide/model-deployers/seldon.md
index 152337bbae4..7c2ed3cf015 100644
--- a/docs/book/component-guide/model-deployers/seldon.md
+++ b/docs/book/component-guide/model-deployers/seldon.md
@@ -239,7 +239,7 @@ If you want to use a custom persistent storage with Seldon Core, or if you prefe
 
 **Advanced: Configuring a Custom Seldon Core Secret**
 
-The Seldon Core model deployer stack component allows configuring an additional `secret` attribute that can be used to specify custom credentials that Seldon Core should use to authenticate to the persistent storage service where models are located. This is useful if you want to connect Seldon Core to a persistent storage service that is not supported as a ZenML Artifact Store, or if you don't want to configure or use the same credentials configured for your Artifact Store. The `secret` attribute must be set to the name of [a ZenML secret](../../how-to/interact-with-secrets.md) containing credentials configured in the format supported by Seldon Core.
+The Seldon Core model deployer stack component allows configuring an additional `secret` attribute that can be used to specify custom credentials that Seldon Core should use to authenticate to the persistent storage service where models are located. This is useful if you want to connect Seldon Core to a persistent storage service that is not supported as a ZenML Artifact Store, or if you don't want to configure or use the same credentials configured for your Artifact Store. The `secret` attribute must be set to the name of [a ZenML secret](../../how-to/project-setup-and-management/interact-with-secrets.md) containing credentials configured in the format supported by Seldon Core.
 
 {% hint style="info" %}
 This method is not recommended, because it limits the Seldon Core model deployer to a single persistent storage service, whereas using the Artifact Store credentials gives you more flexibility in combining the Seldon Core model deployer with any Artifact Store in the same ZenML stack.
diff --git a/docs/book/component-guide/orchestrators/airflow.md b/docs/book/component-guide/orchestrators/airflow.md
index a5e9de12dda..7fd0fcb8ea6 100644
--- a/docs/book/component-guide/orchestrators/airflow.md
+++ b/docs/book/component-guide/orchestrators/airflow.md
@@ -159,7 +159,7 @@ of your Airflow deployment.
 
 {% hint style="info" %}
 ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your pipeline steps in Airflow. Check
-out [this page](/docs/book/how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to learn
+out [this page](/docs/book/how-to/customize-docker-builds/README.md) if you want to learn
 more about how ZenML builds these images and how you can customize them.
 {% endhint %}
@@ -210,7 +210,7 @@ more information on how to specify settings.
 
 #### Enabling CUDA for GPU-backed hardware
 
 Note that if you wish to use this orchestrator to run steps on a GPU, you will need to
-follow [the instructions on this page](/docs/book/how-to/advanced-topics/training-with-gpus/README.md) to ensure that it
+follow [the instructions on this page](/docs/book/how-to/pipeline-development/training-with-gpus/README.md) to ensure that it
 works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/orchestrators/azureml.md b/docs/book/component-guide/orchestrators/azureml.md
index e0c32f5adb8..e47b4d8e9f2 100644
--- a/docs/book/component-guide/orchestrators/azureml.md
+++ b/docs/book/component-guide/orchestrators/azureml.md
@@ -80,7 +80,7 @@ assign it the correct permissions and use it to [register a ZenML Azure Service
 
 For each pipeline run, ZenML will build a Docker image called `/zenml:`
 which includes your code and use it to run your pipeline steps in AzureML. Check out
-[this page](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to
+[this page](../../how-to/customize-docker-builds/README.md) if you want to
 learn more about how ZenML builds these images and how you can customize them.
 
 ## AzureML UI
diff --git a/docs/book/component-guide/orchestrators/custom.md b/docs/book/component-guide/orchestrators/custom.md
index 14f18744839..539aecdd6bf 100644
--- a/docs/book/component-guide/orchestrators/custom.md
+++ b/docs/book/component-guide/orchestrators/custom.md
@@ -215,6 +215,6 @@ To see a full end-to-end worked example of a custom orchestrator, [see here](htt
 
 ### Enabling CUDA for GPU-backed hardware
 
-Note that if you wish to use your custom orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
+Note that if you wish to use your custom orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/orchestrators/databricks.md b/docs/book/component-guide/orchestrators/databricks.md
index 9f57b5d95e8..b87aec68111 100644
--- a/docs/book/component-guide/orchestrators/databricks.md
+++ b/docs/book/component-guide/orchestrators/databricks.md
@@ -182,7 +182,7 @@ With these settings, the orchestrator will use a GPU-enabled Spark version and a
 
 #### Enabling CUDA for GPU-backed hardware
 
-Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
+Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/orchestrators/hyperai.md b/docs/book/component-guide/orchestrators/hyperai.md
index 5093d296e58..3baa8ae9098 100644
--- a/docs/book/component-guide/orchestrators/hyperai.md
+++ b/docs/book/component-guide/orchestrators/hyperai.md
@@ -78,6 +78,6 @@ python file_that_runs_a_zenml_pipeline.py
 
 #### Enabling CUDA for GPU-backed hardware
 
-Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
+Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/orchestrators/kubeflow.md b/docs/book/component-guide/orchestrators/kubeflow.md
index 174cb56e82e..505bee559fb 100644
--- a/docs/book/component-guide/orchestrators/kubeflow.md
+++ b/docs/book/component-guide/orchestrators/kubeflow.md
@@ -181,7 +181,7 @@ We can then register the orchestrator and use it in our active stack. This can b
 {% endtabs %}
 
 {% hint style="info" %}
-ZenML will build a Docker image called `/zenml:` which includes all required software dependencies and use it to run your pipeline steps in Kubeflow. Check out [this page](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them.
+ZenML will build a Docker image called `/zenml:` which includes all required software dependencies and use it to run your pipeline steps in Kubeflow. Check out [this page](../../how-to/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them.
 {% endhint %}
 
 You can now run any ZenML pipeline using the Kubeflow orchestrator:
@@ -260,7 +260,7 @@ Check out the [SDK docs](https://sdkdocs.zenml.io/latest/integration\_code\_docs
 
 #### Enabling CUDA for GPU-backed hardware
 
-Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
+Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
 
 ### Important Note for Multi-Tenancy Deployments
@@ -346,7 +346,7 @@ kubeflow_settings = KubeflowOrchestratorSettings(
 )
 ```
 
-See full documentation of using ZenML secrets [here](../../how-to/interact-with-secrets.md).
+See full documentation of using ZenML secrets [here](../../how-to/project-setup-and-management/interact-with-secrets.md).
 
 For more information and a full list of configurable attributes of the Kubeflow orchestrator, check out the [SDK Docs](https://sdkdocs.zenml.io/latest/integration\_code\_docs/integrations-kubeflow/#zenml.integrations.kubeflow.orchestrators.kubeflow\_orchestrator.KubeflowOrchestrator) .
diff --git a/docs/book/component-guide/orchestrators/kubernetes.md b/docs/book/component-guide/orchestrators/kubernetes.md
index 65b38fc936f..2a6ca6ea60e 100644
--- a/docs/book/component-guide/orchestrators/kubernetes.md
+++ b/docs/book/component-guide/orchestrators/kubernetes.md
@@ -98,7 +98,7 @@ We can then register the orchestrator and use it in our active stack. This can b
 ```
 
 {% hint style="info" %}
-ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your pipeline steps in Kubernetes. Check out [this page](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them.
+ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your pipeline steps in Kubernetes. Check out [this page](../../how-to/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them.
 {% endhint %}
 
 You can now run any ZenML pipeline using the Kubernetes orchestrator:
@@ -296,6 +296,6 @@ For more information and a full list of configurable attributes of the Kubernete
 
 #### Enabling CUDA for GPU-backed hardware
 
-Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
+Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/orchestrators/local-docker.md b/docs/book/component-guide/orchestrators/local-docker.md
index 076f9e0fb4e..52dfcfa1ab5 100644
--- a/docs/book/component-guide/orchestrators/local-docker.md
+++ b/docs/book/component-guide/orchestrators/local-docker.md
@@ -68,6 +68,6 @@ def simple_pipeline():
 
 #### Enabling CUDA for GPU-backed hardware
 
-Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
+Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/orchestrators/orchestrators.md b/docs/book/component-guide/orchestrators/orchestrators.md
index f75e915f842..d5e34cec84b 100644
--- a/docs/book/component-guide/orchestrators/orchestrators.md
+++ b/docs/book/component-guide/orchestrators/orchestrators.md
@@ -13,7 +13,7 @@ steps of your pipeline) are available.
 
 {% hint style="info" %}
 Many of ZenML's remote orchestrators build [Docker](https://www.docker.com/) images in order to transport and execute your pipeline code. If you want to learn more about how Docker images are built by ZenML, check
-out [this guide](../../how-to/infrastructure-deployment/customize-docker-builds/README.md).
+out [this guide](../../how-to/customize-docker-builds/README.md).
 {% endhint %}
 
 ### When to use it
diff --git a/docs/book/component-guide/orchestrators/sagemaker.md b/docs/book/component-guide/orchestrators/sagemaker.md
index 1e287af471e..64643339347 100644
--- a/docs/book/component-guide/orchestrators/sagemaker.md
+++ b/docs/book/component-guide/orchestrators/sagemaker.md
@@ -101,7 +101,7 @@ python run.py # Authenticates with `default` profile in `~/.aws/config`
 {% endtabs %}
 
 {% hint style="info" %}
-ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your pipeline steps in Sagemaker. Check out [this page](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them.
+ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your pipeline steps in Sagemaker. Check out [this page](../../how-to/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them.
 {% endhint %}
 
 You can now run any ZenML pipeline using the Sagemaker orchestrator:
@@ -337,6 +337,6 @@ This approach allows for more granular tagging, giving you flexibility in how yo
 
 ### Enabling CUDA for GPU-backed hardware
 
-Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
+Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/orchestrators/tekton.md b/docs/book/component-guide/orchestrators/tekton.md index 507c29ae007..562aeeb912c 100644 --- a/docs/book/component-guide/orchestrators/tekton.md +++ b/docs/book/component-guide/orchestrators/tekton.md @@ -135,7 +135,7 @@ We can then register the orchestrator and use it in our active stack. This can b ``` {% hint style="info" %} -ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your pipeline steps in Tekton. Check out [this page](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. +ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your pipeline steps in Tekton. Check out [this page](../../how-to/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. {% endhint %} You can now run any ZenML pipeline using the Tekton orchestrator: @@ -231,6 +231,6 @@ For more information and a full list of configurable attributes of the Tekton or #### Enabling CUDA for GPU-backed hardware -Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration. +Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/orchestrators/vertex.md b/docs/book/component-guide/orchestrators/vertex.md index 35e52b786da..210d34f931c 100644 --- a/docs/book/component-guide/orchestrators/vertex.md +++ b/docs/book/component-guide/orchestrators/vertex.md @@ -163,7 +163,7 @@ zenml stack register -o ... --set ``` {% hint style="info" %} -ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your pipeline steps in Vertex AI. Check out [this page](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. +ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your pipeline steps in Vertex AI. Check out [this page](../../how-to/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. {% endhint %} You can now run any ZenML pipeline using the Vertex orchestrator: @@ -291,6 +291,6 @@ For more information and a full list of configurable attributes of the Vertex or ### Enabling CUDA for GPU-backed hardware -Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration. +Note that if you wish to use this orchestrator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/step-operators/azureml.md b/docs/book/component-guide/step-operators/azureml.md index 93bc7d06117..55681f151c4 100644 --- a/docs/book/component-guide/step-operators/azureml.md +++ b/docs/book/component-guide/step-operators/azureml.md @@ -93,7 +93,7 @@ def trainer(...) -> ...: ``` {% hint style="info" %} -ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your steps in AzureML. Check out [this page](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. +ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your steps in AzureML. Check out [this page](../../how-to/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. {% endhint %} #### Additional configuration @@ -152,6 +152,6 @@ You can check out the [AzureMLStepOperatorSettings SDK docs](https://sdkdocs.zen #### Enabling CUDA for GPU-backed hardware -Note that if you wish to use this step operator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration. +Note that if you wish to use this step operator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/step-operators/custom.md b/docs/book/component-guide/step-operators/custom.md index a5ad065b23e..7328d9314a5 100644 --- a/docs/book/component-guide/step-operators/custom.md +++ b/docs/book/component-guide/step-operators/custom.md @@ -120,6 +120,6 @@ The design behind this interaction lets us separate the configuration of the fla #### Enabling CUDA for GPU-backed hardware -Note that if you wish to use your custom step operator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration. +Note that if you wish to use your custom step operator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/step-operators/kubernetes.md b/docs/book/component-guide/step-operators/kubernetes.md index c3859829879..4ecfe9af27f 100644 --- a/docs/book/component-guide/step-operators/kubernetes.md +++ b/docs/book/component-guide/step-operators/kubernetes.md @@ -93,7 +93,7 @@ def trainer(...) -> ...: ``` {% hint style="info" %} -ZenML will build a Docker images which includes your code and use it to run your steps in Kubernetes. Check out [this page](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. +ZenML will build a Docker image which includes your code and use it to run your steps in Kubernetes. Check out [this page](../../how-to/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. {% endhint %} @@ -225,6 +225,6 @@ For more information and a full list of configurable attributes of the Kubernete #### Enabling CUDA for GPU-backed hardware -Note that if you wish to use this step operator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration. +Note that if you wish to use this step operator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/step-operators/sagemaker.md b/docs/book/component-guide/step-operators/sagemaker.md index 3bd02eba90f..28e285aeb4b 100644 --- a/docs/book/component-guide/step-operators/sagemaker.md +++ b/docs/book/component-guide/step-operators/sagemaker.md @@ -84,7 +84,7 @@ def trainer(...) -> ...: ``` {% hint style="info" %} -ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your steps in SageMaker. Check out [this page](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. +ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your steps in SageMaker. Check out [this page](../../how-to/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. {% endhint %} #### Additional configuration @@ -95,6 +95,6 @@ For more information and a full list of configurable attributes of the SageMaker #### Enabling CUDA for GPU-backed hardware -Note that if you wish to use this step operator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration. +Note that if you wish to use this step operator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/component-guide/step-operators/step-operators.md b/docs/book/component-guide/step-operators/step-operators.md index b96b8488522..146e91eb91b 100644 --- a/docs/book/component-guide/step-operators/step-operators.md +++ b/docs/book/component-guide/step-operators/step-operators.md @@ -63,12 +63,12 @@ def my_step(...) -> ...: #### Specifying per-step resources If your steps require additional hardware resources, you can specify them on your steps as -described [here](../../how-to/advanced-topics/training-with-gpus/README.md). +described [here](../../how-to/pipeline-development/training-with-gpus/README.md). #### Enabling CUDA for GPU-backed hardware Note that if you wish to use step operators to run steps on a GPU, you will need to -follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure +follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration. diff --git a/docs/book/component-guide/step-operators/vertex.md b/docs/book/component-guide/step-operators/vertex.md index aecfef49441..697f771876b 100644 --- a/docs/book/component-guide/step-operators/vertex.md +++ b/docs/book/component-guide/step-operators/vertex.md @@ -92,7 +92,7 @@ def trainer(...) -> ...: ``` {% hint style="info" %} -ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your steps in Vertex AI. Check out [this page](../../how-to/infrastructure-deployment/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. +ZenML will build a Docker image called `/zenml:` which includes your code and use it to run your steps in Vertex AI. 
Check out [this page](../../how-to/customize-docker-builds/README.md) if you want to learn more about how ZenML builds these images and how you can customize them. {% endhint %} #### Additional configuration @@ -133,6 +133,6 @@ For more information and a full list of configurable attributes of the Vertex st #### Enabling CUDA for GPU-backed hardware -Note that if you wish to use this step operator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/advanced-topics/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration. +Note that if you wish to use this step operator to run steps on a GPU, you will need to follow [the instructions on this page](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure that it works. It requires adding some extra settings customization and is essential to enable CUDA for the GPU to give its full acceleration.
diff --git a/docs/book/getting-started/system-architectures.md b/docs/book/getting-started/system-architectures.md index 369fe2dbcf6..79fec7edea2 100644 --- a/docs/book/getting-started/system-architectures.md +++ b/docs/book/getting-started/system-architectures.md @@ -122,7 +122,7 @@ secret store directly to the ZenML server that is managed by us. All ZenML secrets used by running pipelines to access infrastructure services and resources are stored in the customer secret store. This allows users to use [service connectors](../how-to/infrastructure-deployment/auth-management/service-connectors-guide.md) -and the [secrets API](../how-to/interact-with-secrets.md) to authenticate +and the [secrets API](../how-to/project-setup-and-management/interact-with-secrets.md) to authenticate ZenML pipelines and the ZenML Pro to third-party services and infrastructure while ensuring that credentials are always stored on the customer side. {% endhint %} diff --git a/docs/book/how-to/advanced-topics/control-logging/README.md b/docs/book/how-to/advanced-topics/control-logging/README.md deleted file mode 100644 index 64b775efe28..00000000000 --- a/docs/book/how-to/advanced-topics/control-logging/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -icon: memo-circle-info -description: Configuring ZenML's default logging behavior ---- - -# Control logging - -ZenML produces various kinds of logs: - -* The [ZenML Server](../../../getting-started/deploying-zenml/README.md) produces server logs (like any FastAPI server). -* The [Client or Runner](../../infrastructure-deployment/configure-python-environments/README.md#client-environment-or-the-runner-environment) environment produces logs, for example after running a pipeline. These are steps that are typically before, after, and during the creation of a pipeline run. 
-* The [Execution environment](../../infrastructure-deployment/configure-python-environments/README.md#execution-environments) (on the orchestrator level) produces logs when it executes each step of a pipeline. These are logs that are typically written in your steps using the python `logging` module. - -This section talks about how users can control logging behavior in these various environments. - -
diff --git a/docs/book/how-to/control-logging/README.md b/docs/book/how-to/control-logging/README.md new file mode 100644 index 00000000000..ef2d55e352f --- /dev/null +++ b/docs/book/how-to/control-logging/README.md @@ -0,0 +1,16 @@ +--- +icon: memo-circle-info +description: Configuring ZenML's default logging behavior +--- + +# Control logging + +ZenML produces various kinds of logs: + +* The [ZenML Server](../../getting-started/deploying-zenml/README.md) produces server logs (like any FastAPI server). +* The [Client or Runner](../pipeline-development/configure-python-environments/README.md#client-environment-or-the-runner-environment) environment produces logs, for example after running a pipeline. These are logs produced before, during, and after the creation of a pipeline run. +* The [Execution environment](../pipeline-development/configure-python-environments/README.md#execution-environments) (on the orchestrator level) produces logs when it executes each step of a pipeline. These are logs that are typically written in your steps using the Python `logging` module. + +This section talks about how users can control logging behavior in these various environments. + +
diff --git a/docs/book/how-to/advanced-topics/control-logging/disable-colorful-logging.md b/docs/book/how-to/control-logging/disable-colorful-logging.md similarity index 63% rename from docs/book/how-to/advanced-topics/control-logging/disable-colorful-logging.md rename to docs/book/how-to/control-logging/disable-colorful-logging.md index e536fa989be..20adaabe1f2 100644 --- a/docs/book/how-to/advanced-topics/control-logging/disable-colorful-logging.md +++ b/docs/book/how-to/control-logging/disable-colorful-logging.md @@ -10,7 +10,7 @@ By default, ZenML uses colorful logging to make it easier to read logs. However, ZENML_LOGGING_COLORS_DISABLED=true ``` -Note that setting this on the [client environment](../../infrastructure-deployment/configure-python-environments/README.md#client-environment-or-the-runner-environment) (e.g. your local machine which runs the pipeline) will automatically disable colorful logging on remote pipeline runs. If you wish to only disable it locally, but turn on for remote pipeline runs, you can set the `ZENML_LOGGING_COLORS_DISABLED` environment variable in your pipeline runs environment as follows: +Note that setting this on the [client environment](../pipeline-development/configure-python-environments/README.md#client-environment-or-the-runner-environment) (e.g. your local machine which runs the pipeline) will automatically disable colorful logging on remote pipeline runs. 
If you wish to only disable it locally but turn it on for remote pipeline runs, you can set the `ZENML_LOGGING_COLORS_DISABLED` environment variable in your pipeline runs environment as follows: ```python docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"}) diff --git a/docs/book/how-to/advanced-topics/control-logging/disable-rich-traceback.md b/docs/book/how-to/control-logging/disable-rich-traceback.md similarity index 67% rename from docs/book/how-to/advanced-topics/control-logging/disable-rich-traceback.md rename to docs/book/how-to/control-logging/disable-rich-traceback.md index c19cf36257f..a47f37c388f 100644 --- a/docs/book/how-to/advanced-topics/control-logging/disable-rich-traceback.md +++ b/docs/book/how-to/control-logging/disable-rich-traceback.md @@ -12,9 +12,9 @@ export ZENML_ENABLE_RICH_TRACEBACK=false This will ensure that you see only the plain text traceback output. -Note that setting this on the [client environment](../../infrastructure-deployment/configure-python-environments/README.md#client-environment-or-the-runner-environment) (e.g. your local machine which runs the pipeline) will **not automatically disable rich tracebacks on remote pipeline runs**. That means setting this variable locally with only effect pipelines that run locally. +Note that setting this on the [client environment](../pipeline-development/configure-python-environments/README.md#client-environment-or-the-runner-environment) (e.g. your local machine which runs the pipeline) will **not automatically disable rich tracebacks on remote pipeline runs**. That means setting this variable locally will only affect pipelines that run locally.
-If you wish to disable it also for [remote pipeline runs](../../../user-guide/production-guide/cloud-orchestration.md), you can set the `ZENML_ENABLE_RICH_TRACEBACK` environment variable in your pipeline runs environment as follows: +If you wish to disable it also for [remote pipeline runs](../../user-guide/production-guide/cloud-orchestration.md), you can set the `ZENML_ENABLE_RICH_TRACEBACK` environment variable in your pipeline runs environment as follows: ```python docker_settings = DockerSettings(environment={"ZENML_ENABLE_RICH_TRACEBACK": "false"}) diff --git a/docs/book/how-to/advanced-topics/control-logging/enable-or-disable-logs-storing.md b/docs/book/how-to/control-logging/enable-or-disable-logs-storing.md similarity index 90% rename from docs/book/how-to/advanced-topics/control-logging/enable-or-disable-logs-storing.md rename to docs/book/how-to/control-logging/enable-or-disable-logs-storing.md index 6e6e45015f5..13965f93819 100644 --- a/docs/book/how-to/advanced-topics/control-logging/enable-or-disable-logs-storing.md +++ b/docs/book/how-to/control-logging/enable-or-disable-logs-storing.md @@ -15,7 +15,7 @@ def my_step() -> None: These logs are stored within the respective artifact store of your stack. You can display the logs in the dashboard as follows: -![Displaying step logs on the dashboard](../../../.gitbook/assets/zenml_step_logs.png) +![Displaying step logs on the dashboard](../../.gitbook/assets/zenml_step_logs.png) {% hint style="warning" %} Note that if you are not connected to a cloud artifact store with a service connector configured then you will not @@ -37,7 +37,7 @@ If you do not want to store the logs in your artifact store, you can: def my_pipeline(): ... ``` -2. Disable it by using the environmental variable `ZENML_DISABLE_STEP_LOGS_STORAGE` and setting it to `true`. This environmental variable takes precedence over the parameters mentioned above. 
Note this environmental variable needs to be set on the [execution environment](../../infrastructure-deployment/configure-python-environments/README.md#execution-environments), i.e., on the orchestrator level: +2. Disable it by using the environment variable `ZENML_DISABLE_STEP_LOGS_STORAGE` and setting it to `true`. This environment variable takes precedence over the parameters mentioned above. Note that this environment variable needs to be set on the [execution environment](../pipeline-development/configure-python-environments/README.md#execution-environments), i.e., on the orchestrator level: ```python docker_settings = DockerSettings(environment={"ZENML_DISABLE_STEP_LOGS_STORAGE": "true"}) diff --git a/docs/book/how-to/advanced-topics/control-logging/set-logging-verbosity.md b/docs/book/how-to/control-logging/set-logging-verbosity.md similarity index 60% rename from docs/book/how-to/advanced-topics/control-logging/set-logging-verbosity.md rename to docs/book/how-to/control-logging/set-logging-verbosity.md index b1839346695..fa21a318ade 100644 --- a/docs/book/how-to/advanced-topics/control-logging/set-logging-verbosity.md +++ b/docs/book/how-to/control-logging/set-logging-verbosity.md @@ -13,9 +13,9 @@ export ZENML_LOGGING_VERBOSITY=INFO Choose from `INFO`, `WARN`, `ERROR`, `CRITICAL`, `DEBUG`. This will set the logs to whichever level you suggest. -Note that setting this on the [client environment](../../infrastructure-deployment/configure-python-environments/README.md#client-environment-or-the-runner-environment) (e.g. your local machine which runs the pipeline) will **not automatically set the same logging verbosity for remote pipeline runs**. That means setting this variable locally with only effect pipelines that run locally. +Note that setting this on the [client environment](../pipeline-development/configure-python-environments/README.md#client-environment-or-the-runner-environment) (e.g. 
your local machine which runs the pipeline) will **not automatically set the same logging verbosity for remote pipeline runs**. That means setting this variable locally will only affect pipelines that run locally. -If you wish to control for [remote pipeline runs](../../../user-guide/production-guide/cloud-orchestration.md), you can set the `ZENML_LOGGING_VERBOSITY` environment variable in your pipeline runs environment as follows: +If you wish to control the verbosity for [remote pipeline runs](../../user-guide/production-guide/cloud-orchestration.md), you can set the `ZENML_LOGGING_VERBOSITY` environment variable in your pipeline runs environment as follows: ```python docker_settings = DockerSettings(environment={"ZENML_LOGGING_VERBOSITY": "DEBUG"}) diff --git a/docs/book/how-to/advanced-topics/control-logging/view-logs-on-the-dasbhoard.md b/docs/book/how-to/control-logging/view-logs-on-the-dasbhoard.md similarity index 80% rename from docs/book/how-to/advanced-topics/control-logging/view-logs-on-the-dasbhoard.md rename to docs/book/how-to/control-logging/view-logs-on-the-dasbhoard.md index b202fb8c9c5..2b803a6d4f5 100644 --- a/docs/book/how-to/advanced-topics/control-logging/view-logs-on-the-dasbhoard.md +++ b/docs/book/how-to/control-logging/view-logs-on-the-dasbhoard.md @@ -17,14 +17,14 @@ These logs are stored within the respective artifact store of your stack. This m *if the deployed ZenML server has direct access to the underlying artifact store*. There are two cases in which this will be true: * In case of a local ZenML server (via `zenml login --local`), both local and remote artifact stores may be accessible, depending on configuration of the client. -* In case of a deployed ZenML server, logs for runs on a [local artifact store](../../../component-guide/artifact-stores/local.md) will not be accessible.
Logs -for runs using a [remote artifact store](../../../user-guide/production-guide/remote-storage.md) **may be** accessible, if the artifact store has been configured -with a [service connector](../../infrastructure-deployment/auth-management/service-connectors-guide.md). Please read [this chapter](../../../user-guide/production-guide/remote-storage.md) of +* In case of a deployed ZenML server, logs for runs on a [local artifact store](../../component-guide/artifact-stores/local.md) will not be accessible. Logs +for runs using a [remote artifact store](../../user-guide/production-guide/remote-storage.md) **may be** accessible, if the artifact store has been configured +with a [service connector](../../infrastructure-deployment/auth-management/service-connectors-guide.md). Please read [this chapter](../../user-guide/production-guide/remote-storage.md) of the production guide to learn how to configure a remote artifact store with a service connector. If configured correctly, the logs are displayed in the dashboard as follows: -![Displaying step logs on the dashboard](../../../.gitbook/assets/zenml_step_logs.png) +![Displaying step logs on the dashboard](../../.gitbook/assets/zenml_step_logs.png) {% hint style="warning" %} If you do not want to store the logs for your pipeline (for example due to performance reduction or storage limits), diff --git a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/README.md b/docs/book/how-to/customize-docker-builds/README.md similarity index 62% rename from docs/book/how-to/infrastructure-deployment/customize-docker-builds/README.md rename to docs/book/how-to/customize-docker-builds/README.md index da604618a95..746c09af3ea 100644 --- a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/README.md +++ b/docs/book/how-to/customize-docker-builds/README.md @@ -5,7 +5,7 @@ description: Using Docker images to run your pipeline. 
# Customize Docker Builds -ZenML executes pipeline steps sequentially in the active Python environment when running locally. However, with remote [orchestrators](../../../user-guide/production-guide/cloud-orchestration.md) or [step operators](../../../component-guide/step-operators/step-operators.md), ZenML builds [Docker](https://www.docker.com/) images to run your pipeline in an isolated, well-defined environment. +ZenML executes pipeline steps sequentially in the active Python environment when running locally. However, with remote [orchestrators](../../user-guide/production-guide/cloud-orchestration.md) or [step operators](../../component-guide/step-operators/step-operators.md), ZenML builds [Docker](https://www.docker.com/) images to run your pipeline in an isolated, well-defined environment. This section discusses how to control this dockerization process. diff --git a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/define-where-an-image-is-built.md b/docs/book/how-to/customize-docker-builds/define-where-an-image-is-built.md similarity index 63% rename from docs/book/how-to/infrastructure-deployment/customize-docker-builds/define-where-an-image-is-built.md rename to docs/book/how-to/customize-docker-builds/define-where-an-image-is-built.md index 6c373705351..552af1fc612 100644 --- a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/define-where-an-image-is-built.md +++ b/docs/book/how-to/customize-docker-builds/define-where-an-image-is-built.md @@ -4,11 +4,11 @@ description: Defining the image builder. # 🐳 Define where an image is built -ZenML executes pipeline steps sequentially in the active Python environment when running locally. However, with remote [orchestrators](../../../component-guide/orchestrators/orchestrators.md) or [step operators](../../../component-guide/step-operators/step-operators.md), ZenML builds [Docker](https://www.docker.com/) images to run your pipeline in an isolated, well-defined environment. 
+ZenML executes pipeline steps sequentially in the active Python environment when running locally. However, with remote [orchestrators](../../component-guide/orchestrators/orchestrators.md) or [step operators](../../component-guide/step-operators/step-operators.md), ZenML builds [Docker](https://www.docker.com/) images to run your pipeline in an isolated, well-defined environment. -By default, execution environments are created locally in the client environment using the local Docker client. However, this requires Docker installation and permissions. ZenML offers [image builders](../../../component-guide/image-builders/image-builders.md), a special [stack component](../../../component-guide/README.md), allowing users to build and push Docker images in a different specialized _image builder environment_. +By default, execution environments are created locally in the client environment using the local Docker client. However, this requires Docker installation and permissions. ZenML offers [image builders](../../component-guide/image-builders/image-builders.md), a special [stack component](../../component-guide/README.md), allowing users to build and push Docker images in a different specialized _image builder environment_. -Note that even if you don't configure an image builder in your stack, ZenML still uses the [local image builder](../../../component-guide/image-builders/local.md) to retain consistency across all builds. In this case, the image builder environment is the same as the [client environment](../../infrastructure-deployment/configure-python-environments/README.md#client-environment-or-the-runner-environment). +Note that even if you don't configure an image builder in your stack, ZenML still uses the [local image builder](../../../component-guide/image-builders/local.md) to retain consistency across all builds. 
In this case, the image builder environment is the same as the [client environment](../pipeline-development/configure-python-environments/README.md#client-environment-or-the-runner-environment). You don't need to directly interact with any image builder in your code. As long as the image builder that you want to use is part of your active [ZenML stack](/docs/book/user-guide/production-guide/understand-stacks.md), it will be used diff --git a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-pipeline.md b/docs/book/how-to/customize-docker-builds/docker-settings-on-a-pipeline.md similarity index 83% rename from docs/book/how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-pipeline.md rename to docs/book/how-to/customize-docker-builds/docker-settings-on-a-pipeline.md index 872cd691249..db342c4c8ea 100644 --- a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-pipeline.md +++ b/docs/book/how-to/customize-docker-builds/docker-settings-on-a-pipeline.md @@ -4,7 +4,7 @@ description: Using Docker images to run your pipeline. # Specify Docker settings for a pipeline -When a [pipeline is run with a remote orchestrator](../configure-python-environments/README.md) a [Dockerfile](https://docs.docker.com/engine/reference/builder/) is dynamically generated at runtime. It is then used to build the Docker image using the [image builder](../../infrastructure-deployment/configure-python-environments/README.md#image-builder-environment) component of your stack. The Dockerfile consists of the following steps: +When a [pipeline is run with a remote orchestrator](../pipeline-development/configure-python-environments/README.md) a [Dockerfile](https://docs.docker.com/engine/reference/builder/) is dynamically generated at runtime. 
It is then used to build the Docker image using the [image builder](../pipeline-development/configure-python-environments/README.md#image-builder-environment) component of your stack. The Dockerfile consists of the following steps: * **Starts from a parent image** that has **ZenML installed**. By default, this will use the [official ZenML image](https://hub.docker.com/r/zenmldocker/zenml/) for the Python and ZenML version that you're using in the active Python environment. If you want to use a different image as the base for the following steps, check out [this guide](./docker-settings-on-a-pipeline.md#using-a-custom-parent-image). * **Installs additional pip dependencies**. ZenML will automatically detect which integrations are used in your stack and install the required dependencies. If your pipeline needs any additional requirements, check out our [guide on including custom dependencies](specify-pip-dependencies-and-apt-packages.md). @@ -58,7 +58,7 @@ my_step = my_step.with_options( ) ``` -* Using a YAML configuration file as described [here](../../pipeline-development/use-configuration-files/README.md): +* Using a YAML configuration file as described [here](../pipeline-development/use-configuration-files/README.md): ```yaml settings: @@ -72,11 +72,11 @@ steps: ... ``` -Check out [this page](../../pipeline-development/use-configuration-files/configuration-hierarchy.md) for more information on the hierarchy and precedence of the various ways in which you can supply the settings. +Check out [this page](../pipeline-development/use-configuration-files/configuration-hierarchy.md) for more information on the hierarchy and precedence of the various ways in which you can supply the settings. ### Specifying Docker build options -If you want to specify build options that get passed to the build method of the [image builder](../../infrastructure-deployment/configure-python-environments/README.md#image-builder-environment). 
For the default local image builder, these options get passed to the [`docker build` command](https://docker-py.readthedocs.io/en/stable/images.html#docker.models.images.ImageCollection.build). +You can specify build options that get passed to the build method of the [image builder](../pipeline-development/configure-python-environments/README.md#image-builder-environment). For the default local image builder, these options get passed to the [`docker build` command](https://docker-py.readthedocs.io/en/stable/images.html#docker.models.images.ImageCollection.build). ```python docker_settings = DockerSettings(build_config={"build_options": {...}}) diff --git a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-step.md b/docs/book/how-to/customize-docker-builds/docker-settings-on-a-step.md similarity index 100% rename from docs/book/how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-step.md rename to docs/book/how-to/customize-docker-builds/docker-settings-on-a-step.md diff --git a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/how-to-reuse-builds.md b/docs/book/how-to/customize-docker-builds/how-to-reuse-builds.md similarity index 89% rename from docs/book/how-to/infrastructure-deployment/customize-docker-builds/how-to-reuse-builds.md rename to docs/book/how-to/customize-docker-builds/how-to-reuse-builds.md index 17bfe22fc75..20ebe7f4d69 100644 --- a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/how-to-reuse-builds.md +++ b/docs/book/how-to/customize-docker-builds/how-to-reuse-builds.md @@ -37,9 +37,9 @@ You can also let ZenML use the artifact store to upload your code. This is the d ## Use code repositories to speed up Docker build times -One way to speed up Docker builds is to connect a git repository.
Registering a [code repository](../../../user-guide/production-guide/connect-code-repository.md) lets you avoid building images each time you run a pipeline **and** quickly iterate on your code. When running a pipeline that is part of a local code repository checkout, ZenML can instead build the Docker images without including any of your source files, and download the files inside the container before running your code. This greatly speeds up the building process and also allows you to reuse images that one of your colleagues might have built for the same stack. +One way to speed up Docker builds is to connect a git repository. Registering a [code repository](../../user-guide/production-guide/connect-code-repository.md) lets you avoid building images each time you run a pipeline **and** quickly iterate on your code. When running a pipeline that is part of a local code repository checkout, ZenML can instead build the Docker images without including any of your source files, and download the files inside the container before running your code. This greatly speeds up the building process and also allows you to reuse images that one of your colleagues might have built for the same stack. -ZenML will **automatically figure out which builds match your pipeline and reuse the appropriate build id**. Therefore, you **do not** need to explicitly pass in the build id when you have a clean repository state and a connected git repository. This approach is **highly recommended**. See an end to end example [here](../../../user-guide/production-guide/connect-code-repository.md). +ZenML will **automatically figure out which builds match your pipeline and reuse the appropriate build id**. Therefore, you **do not** need to explicitly pass in the build id when you have a clean repository state and a connected git repository. This approach is **highly recommended**. See an end-to-end example [here](../../user-guide/production-guide/connect-code-repository.md).
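The build-matching rule described above can be sketched as follows. This is an illustrative model, not ZenML's actual implementation: the `Build` record and the `find_reusable_build` helper are hypothetical names introduced only to show why a clean checkout of a registered code repository lets a cached image be reused.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Build:
    """Hypothetical record of a cached Docker build."""

    pipeline_name: str
    stack_id: str
    contains_code: bool  # True if source files were baked into the image


def find_reusable_build(
    builds: List[Build], pipeline_name: str, stack_id: str, repo_is_clean: bool
) -> Optional[Build]:
    """Return a cached build for the same pipeline and stack, if usable.

    Builds without baked-in code are only reusable when the source can be
    fetched at runtime, i.e. the local checkout of the registered code
    repository has no uncommitted changes.
    """
    for build in builds:
        if build.pipeline_name != pipeline_name or build.stack_id != stack_id:
            continue
        if build.contains_code or repo_is_clean:
            return build
    return None


builds = [Build("training", "aws-stack", contains_code=False)]
print(find_reusable_build(builds, "training", "aws-stack", repo_is_clean=True))
```

Note how a dirty repository state disqualifies code-free builds, which is why the clean-repository condition appears in the recommendation above.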
{% hint style="warning" %} In order to benefit from the advantages of having a code repository in a project, you need to make sure that **the relevant integrations are installed for your ZenML installation**. For instance, let's assume you are working on a project with ZenML and one of your team members has already registered a corresponding code repository of type `github` for it. If you do `zenml code-repository list`, you would also be able to see this repository. However, in order to fully use this repository, you still need to install the corresponding integration for it, in this example the `github` integration. diff --git a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/how-to-use-a-private-pypi-repository.md b/docs/book/how-to/customize-docker-builds/how-to-use-a-private-pypi-repository.md similarity index 100% rename from docs/book/how-to/infrastructure-deployment/customize-docker-builds/how-to-use-a-private-pypi-repository.md rename to docs/book/how-to/customize-docker-builds/how-to-use-a-private-pypi-repository.md diff --git a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md b/docs/book/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md similarity index 90% rename from docs/book/how-to/infrastructure-deployment/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md rename to docs/book/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md index b86bfc8f44a..5c8794c4242 100644 --- a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md +++ b/docs/book/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md @@ -4,7 +4,7 @@ The configuration for specifying pip and apt dependencies only works in the remote pipeline case, and is disregarded for local pipelines (i.e. pipelines that run locally without having to build a Docker image).
{% endhint %} -When a [pipeline is run with a remote orchestrator](../../infrastructure-deployment/configure-python-environments/README.md) a [Dockerfile](https://docs.docker.com/engine/reference/builder/) is dynamically generated at runtime. It is then used to build the Docker image using the [image builder](../../infrastructure-deployment/configure-python-environments/README.md#-configure-python-environments) component of your stack. +When a [pipeline is run with a remote orchestrator](../pipeline-development/configure-python-environments/README.md), a [Dockerfile](https://docs.docker.com/engine/reference/builder/) is dynamically generated at runtime. It is then used to build the Docker image using the [image builder](../pipeline-development/configure-python-environments/README.md#-configure-python-environments) component of your stack. For all of the examples on this page, note that `DockerSettings` can be imported using `from zenml.config import DockerSettings`. @@ -58,7 +58,7 @@ def my_pipeline(...): def my_pipeline(...): ...
``` -* Specify a list of [ZenML integrations](../../../component-guide/README.md) that you're using in your pipeline: +* Specify a list of [ZenML integrations](../../component-guide/README.md) that you're using in your pipeline: ```python from zenml.integrations.constants import PYTORCH, EVIDENTLY diff --git a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/use-a-prebuilt-image.md b/docs/book/how-to/customize-docker-builds/use-a-prebuilt-image.md similarity index 96% rename from docs/book/how-to/infrastructure-deployment/customize-docker-builds/use-a-prebuilt-image.md rename to docs/book/how-to/customize-docker-builds/use-a-prebuilt-image.md index 77abf4f29a6..052c5dea2a6 100644 --- a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/use-a-prebuilt-image.md +++ b/docs/book/how-to/customize-docker-builds/use-a-prebuilt-image.md @@ -106,7 +106,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends YOUR_APT_PACKAG The files containing your pipeline and step code and all other necessary functions should be available in your execution environment. -- If you have a [code repository](../../../user-guide/production-guide/connect-code-repository.md) registered, you don't need to include your code files in the image yourself. ZenML will download them from the repository to the appropriate location in the image. +- If you have a [code repository](../../user-guide/production-guide/connect-code-repository.md) registered, you don't need to include your code files in the image yourself. ZenML will download them from the repository to the appropriate location in the image. - If you don't have a code repository but `allow_download_from_artifact_store` is set to `True` in your `DockerSettings` (`True` by default), ZenML will upload your code to the artifact store and make it available to the image. 
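The bullets above describe a fallback order for making your code available in the execution environment: a registered code repository first, then an upload to the artifact store, and only as a last resort baking the files into the image itself. A minimal sketch of that precedence (the `resolve_code_source` helper is hypothetical, not ZenML's real resolution logic):

```python
def resolve_code_source(
    has_clean_code_repository: bool,
    allow_download_from_artifact_store: bool = True,
    allow_including_files_in_images: bool = True,
) -> str:
    """Pick how source files reach the execution environment.

    Mirrors the documented order: a registered code repository with no
    local changes wins; otherwise the code is uploaded to the artifact
    store; only as a last resort are the files baked into the image.
    """
    if has_clean_code_repository:
        return "code_repository"
    if allow_download_from_artifact_store:
        return "artifact_store"
    if allow_including_files_in_images:
        return "include_in_image"
    raise RuntimeError("no way to make code available in the execution environment")


# Without a code repository, the artifact store is the default fallback.
print(resolve_code_source(has_clean_code_repository=False))
```

Disabling an earlier option simply pushes resolution down to the next one, which is why `allow_download_from_artifact_store` defaults to `True`.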
diff --git a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/use-your-own-docker-files.md b/docs/book/how-to/customize-docker-builds/use-your-own-docker-files.md similarity index 100% rename from docs/book/how-to/infrastructure-deployment/customize-docker-builds/use-your-own-docker-files.md rename to docs/book/how-to/customize-docker-builds/use-your-own-docker-files.md diff --git a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/which-files-are-built-into-the-image.md b/docs/book/how-to/customize-docker-builds/which-files-are-built-into-the-image.md similarity index 92% rename from docs/book/how-to/infrastructure-deployment/customize-docker-builds/which-files-are-built-into-the-image.md rename to docs/book/how-to/customize-docker-builds/which-files-are-built-into-the-image.md index c0b90ba006a..52b8a478f3c 100644 --- a/docs/book/how-to/infrastructure-deployment/customize-docker-builds/which-files-are-built-into-the-image.md +++ b/docs/book/how-to/customize-docker-builds/which-files-are-built-into-the-image.md @@ -6,7 +6,7 @@ ZenML determines the root directory of your source files in the following order: * Otherwise, the parent directory of the Python file you're executing will be the source root. For example, running `python /path/to/file.py`, the source root would be `/path/to`. You can specify how the files inside this root directory are handled using the following three attributes on the [DockerSettings](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.docker_settings.DockerSettings): -* `allow_download_from_code_repository`: If this is set to `True` and your files are inside a registered [code repository](../../project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md) and the repository has no local changes, the files will be downloaded from the code repository and not included in the image. 
+* `allow_download_from_code_repository`: If this is set to `True` and your files are inside a registered [code repository](../../user-guide/production-guide/connect-code-repository.md) and the repository has no local changes, the files will be downloaded from the code repository and not included in the image. * `allow_download_from_artifact_store`: If the previous option is disabled or no code repository without local changes exists for the root directory, ZenML will archive and upload your code to the artifact store if this is set to `True`. * `allow_including_files_in_images`: If both previous options were disabled or not possible, ZenML will include your files in the Docker image if this option is enabled. This means a new Docker image has to be built each time you modify one of your code files. diff --git a/docs/book/how-to/data-artifact-management/complex-usecases/README.md b/docs/book/how-to/data-artifact-management/complex-usecases/README.md new file mode 100644 index 00000000000..75fd292ef6c --- /dev/null +++ b/docs/book/how-to/data-artifact-management/complex-usecases/README.md @@ -0,0 +1,3 @@ +--- +icon: sitemap +--- \ No newline at end of file diff --git a/docs/book/how-to/data-artifact-management/handle-data-artifacts/datasets.md b/docs/book/how-to/data-artifact-management/complex-usecases/datasets.md similarity index 100% rename from docs/book/how-to/data-artifact-management/handle-data-artifacts/datasets.md rename to docs/book/how-to/data-artifact-management/complex-usecases/datasets.md diff --git a/docs/book/how-to/data-artifact-management/handle-data-artifacts/manage-big-data.md b/docs/book/how-to/data-artifact-management/complex-usecases/manage-big-data.md similarity index 100% rename from docs/book/how-to/data-artifact-management/handle-data-artifacts/manage-big-data.md rename to docs/book/how-to/data-artifact-management/complex-usecases/manage-big-data.md diff --git 
a/docs/book/how-to/data-artifact-management/handle-data-artifacts/passing-artifacts-between-pipelines.md b/docs/book/how-to/data-artifact-management/complex-usecases/passing-artifacts-between-pipelines.md similarity index 100% rename from docs/book/how-to/data-artifact-management/handle-data-artifacts/passing-artifacts-between-pipelines.md rename to docs/book/how-to/data-artifact-management/complex-usecases/passing-artifacts-between-pipelines.md diff --git a/docs/book/how-to/data-artifact-management/handle-data-artifacts/registering-existing-data.md b/docs/book/how-to/data-artifact-management/complex-usecases/registering-existing-data.md similarity index 100% rename from docs/book/how-to/data-artifact-management/handle-data-artifacts/registering-existing-data.md rename to docs/book/how-to/data-artifact-management/complex-usecases/registering-existing-data.md diff --git a/docs/book/how-to/data-artifact-management/handle-data-artifacts/unmaterialized-artifacts.md b/docs/book/how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md similarity index 100% rename from docs/book/how-to/data-artifact-management/handle-data-artifacts/unmaterialized-artifacts.md rename to docs/book/how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md diff --git a/docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md b/docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md index 463438eb885..0c32700cf30 100644 --- a/docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md +++ b/docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md @@ -310,7 +310,7 @@ If you would like to disable artifact metadata extraction altogether, you can se ## Skipping materialization -You can learn more about skipping materialization [here](unmaterialized-artifacts.md). 
+You can learn more about skipping materialization [here](../complex-usecases/unmaterialized-artifacts.md). ## Interaction with custom artifact stores diff --git a/docs/book/how-to/advanced-topics/manage-zenml-server/README.md b/docs/book/how-to/manage-zenml-server/README.md similarity index 100% rename from docs/book/how-to/advanced-topics/manage-zenml-server/README.md rename to docs/book/how-to/manage-zenml-server/README.md diff --git a/docs/book/how-to/advanced-topics/manage-zenml-server/best-practices-upgrading-zenml.md b/docs/book/how-to/manage-zenml-server/best-practices-upgrading-zenml.md similarity index 85% rename from docs/book/how-to/advanced-topics/manage-zenml-server/best-practices-upgrading-zenml.md rename to docs/book/how-to/manage-zenml-server/best-practices-upgrading-zenml.md index ca7e4b6ae1a..3688c49f5fd 100644 --- a/docs/book/how-to/advanced-topics/manage-zenml-server/best-practices-upgrading-zenml.md +++ b/docs/book/how-to/manage-zenml-server/best-practices-upgrading-zenml.md @@ -16,16 +16,16 @@ Follow the tips below while upgrading your server to mitigate data losses, downt - **Database Backup**: Before upgrading, create a backup of your MySQL database. This allows you to rollback if necessary. - **Automated Backups**: Consider setting up automatic daily backups of your database for added security. Most managed services like AWS RDS, Google Cloud SQL, and Azure Database for MySQL offer automated backup options. -![Screenshot of backups in AWS RDS](../../../.gitbook/assets/aws-rds-backups.png) +![Screenshot of backups in AWS RDS](../../.gitbook/assets/aws-rds-backups.png) ### Upgrade Strategies - **Staged Upgrade**: For large organizations or critical systems, consider using two ZenML server instances (old and new) and migrating services one by one to the new version. 
-![Server Migration Step 1](../../../.gitbook/assets/server_migration_1.png) +![Server Migration Step 1](../../.gitbook/assets/server_migration_1.png) -![Server Migration Step 2](../../../.gitbook/assets/server_migration_2.png) +![Server Migration Step 2](../../.gitbook/assets/server_migration_2.png) - **Team Coordination**: If multiple teams share a ZenML server instance, coordinate the upgrade timing to minimize disruption. - **Separate ZenML Servers**: Coordination between teams might be difficult if one team requires new features but the other can't upgrade yet. In such cases, it is recommended to use dedicated ZenML server instances per team or product to allow for more flexible upgrade schedules. @@ -48,7 +48,7 @@ Sometimes, you might have to upgrade your code to work with a new version of Zen - **Local Testing**: It's a good idea to test it locally first after you upgrade (`pip install zenml --upgrade`) and run some old pipelines to check for compatibility issues between the old and new versions. - **End-to-End Testing**: You can also develop simple end-to-end tests to ensure that the new version works with your pipeline code and your stack. ZenML already has an [extensive test suite](https://github.com/zenml-io/zenml/tree/main/tests) that we use for releases and you can use it as an example. -- **Artifact Compatibility**: Be cautious with pickle-based [materializers](../../../how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md), as they can be sensitive to changes in Python versions or libraries. Consider using version-agnostic materialization methods for critical artifacts. You can try to load older artifacts with the new version of ZenML to see if they are compatible. 
Every artifact has an ID which you can use to load it in the following way: +- **Artifact Compatibility**: Be cautious with pickle-based [materializers](../../how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md), as they can be sensitive to changes in Python versions or libraries. Consider using version-agnostic materialization methods for critical artifacts. You can try to load older artifacts with the new version of ZenML to see if they are compatible. Every artifact has an ID which you can use to load it in the following way: ```python from zenml.client import Client @@ -59,7 +59,7 @@ loaded_artifact = artifact.load() ### Dependency Management -- **Python Version**: Make sure that the Python version you are using is compatible with the ZenML version you are upgrading to. Check out the [installation guide](../../../getting-started/installation.md) to find out which Python version is supported. +- **Python Version**: Make sure that the Python version you are using is compatible with the ZenML version you are upgrading to. Check out the [installation guide](../../getting-started/installation.md) to find out which Python version is supported. - **External Dependencies**: Be mindful of external dependencies (e.g. from integrations) that might be incompatible with the new version of ZenML. This could be the case when some older versions are no longer supported or maintained and the ZenML integration is updated to use a newer version. You can find this information in the [release notes](https://github.com/zenml-io/zenml/releases) for the new version of ZenML. 
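The artifact-compatibility tip above boils down to a smoke test: try to load every critical artifact under the upgraded environment and record what fails. Here is a runnable sketch of that pattern with a stand-in loader; the in-memory `store` and the helper names are hypothetical, whereas real code would use `zenml.client.Client` and `artifact.load()` as in the snippet above.

```python
import pickle

# Hypothetical in-memory stand-in for an artifact store holding
# pickle-serialized payloads, as a pickle-based materializer would.
store = {"abc-123": pickle.dumps({"accuracy": 0.93})}


def load_artifact(artifact_id: str) -> object:
    """Stand-in for artifact.load(): unpickle the stored payload."""
    return pickle.loads(store[artifact_id])


def check_compatibility(artifact_ids):
    """Try to load each artifact; return (id, error) pairs for failures."""
    failures = []
    for artifact_id in artifact_ids:
        try:
            load_artifact(artifact_id)
        except Exception as exc:  # e.g. UnpicklingError after a version bump
            failures.append((artifact_id, repr(exc)))
    return failures


print(check_compatibility(["abc-123"]))  # [] when all artifacts load cleanly
```

Running such a check before switching production to the new version turns silent deserialization breakage into an explicit list of artifact IDs to re-materialize.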
### Handling API Changes diff --git a/docs/book/how-to/project-setup-and-management/connecting-to-zenml/README.md b/docs/book/how-to/manage-zenml-server/connecting-to-zenml/README.md similarity index 100% rename from docs/book/how-to/project-setup-and-management/connecting-to-zenml/README.md rename to docs/book/how-to/manage-zenml-server/connecting-to-zenml/README.md diff --git a/docs/book/how-to/project-setup-and-management/connecting-to-zenml/connect-in-with-your-user-interactive.md b/docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-in-with-your-user-interactive.md similarity index 100% rename from docs/book/how-to/project-setup-and-management/connecting-to-zenml/connect-in-with-your-user-interactive.md rename to docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-in-with-your-user-interactive.md diff --git a/docs/book/how-to/project-setup-and-management/connecting-to-zenml/connect-with-a-service-account.md b/docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-with-a-service-account.md similarity index 100% rename from docs/book/how-to/project-setup-and-management/connecting-to-zenml/connect-with-a-service-account.md rename to docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-with-a-service-account.md diff --git a/docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-guide.md b/docs/book/how-to/manage-zenml-server/migration-guide/migration-guide.md similarity index 100% rename from docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-guide.md rename to docs/book/how-to/manage-zenml-server/migration-guide/migration-guide.md diff --git a/docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-forty.md b/docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-forty.md similarity index 91% rename from docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-forty.md rename to 
docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-forty.md index a8614bc02f9..6fb472182bd 100644 --- a/docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-forty.md +++ b/docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-forty.md @@ -135,7 +135,7 @@ def my_pipeline(): {% endtab %} {% endtabs %} -Check out [this page](../../how-to/pipeline-development/build-pipelines/use-pipeline-step-parameters.md) for more information on how to parameterize your steps. +Check out [this page](../../pipeline-development/build-pipelines/use-pipeline-step-parameters.md) for more information on how to parameterize your steps. ## Calling a step outside of a pipeline @@ -353,7 +353,7 @@ loaded_model = model.load() {% endtab %} {% endtabs %} -Check out [this page](../../../model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md) for more information on how to programmatically fetch information about previous pipeline runs. +Check out [this page](../../model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md) for more information on how to programmatically fetch information about previous pipeline runs. ## Controlling the step execution order @@ -385,7 +385,7 @@ def my_pipeline(): {% endtab %} {% endtabs %} -Check out [this page](../../../pipeline-development/build-pipelines/control-execution-order-of-steps.md) for more information on how to control the step execution order. +Check out [this page](../../pipeline-development/build-pipelines/control-execution-order-of-steps.md) for more information on how to control the step execution order. ## Defining steps with multiple outputs @@ -424,7 +424,7 @@ def my_step() -> Tuple[ {% endtab %} {% endtabs %} -Check out [this page](../../../pipeline-development/build-pipelines/step-output-typing-and-annotation.md) for more information on how to annotate your step outputs. 
+Check out [this page](../../pipeline-development/build-pipelines/step-output-typing-and-annotation.md) for more information on how to annotate your step outputs. ## Accessing run information inside steps @@ -457,6 +457,6 @@ def my_step() -> Any: # New: StepContext is no longer an argument of the step {% endtab %} {% endtabs %} -Check out [this page](../../../model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md) for more information on how to fetch run information inside your steps using `get_step_context()`. +Check out [this page](../../model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md) for more information on how to fetch run information inside your steps using `get_step_context()`.
diff --git a/docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-sixty.md b/docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md similarity index 99% rename from docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-sixty.md rename to docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md index a66b8480b02..60b5fc3cb91 100644 --- a/docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-sixty.md +++ b/docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md @@ -56,7 +56,7 @@ is still using `sqlalchemy` v1 and is incompatible with pydantic v2. As a solution, we have removed the dependencies of the `airflow` integration. Now, you can use ZenML to create your Airflow pipelines and use a separate environment to run them with Airflow. You can check the updated docs -[right here](../../../../component-guide/orchestrators/airflow.md). +[right here](../../../component-guide/orchestrators/airflow.md). 
### AWS diff --git a/docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-thirty.md b/docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-thirty.md similarity index 100% rename from docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-thirty.md rename to docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-thirty.md diff --git a/docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-twenty.md b/docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md similarity index 99% rename from docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-twenty.md rename to docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md index d0334358d1f..e44d4a54a6c 100644 --- a/docs/book/how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-twenty.md +++ b/docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md @@ -16,7 +16,7 @@ If you have updated to ZenML 0.20.0 by mistake or are experiencing issues with t High-level overview of the changes: -* [ZenML takes over the Metadata Store](migration-zero-twenty.md#zenml-takes-over-the-metadata-store-role) role. All information about your ZenML Stacks, pipelines, and artifacts is tracked by ZenML itself directly. If you are currently using remote Metadata Stores (e.g. deployed in cloud) in your stacks, you will probably need to replace them with a [ZenML server deployment](../../../../getting-started/deploying-zenml/README.md). +* [ZenML takes over the Metadata Store](migration-zero-twenty.md#zenml-takes-over-the-metadata-store-role) role. All information about your ZenML Stacks, pipelines, and artifacts is tracked by ZenML itself directly. If you are currently using remote Metadata Stores (e.g. 
deployed in cloud) in your stacks, you will probably need to replace them with a [ZenML server deployment](../../../getting-started/deploying-zenml/README.md). * the [new ZenML Dashboard](migration-zero-twenty.md#the-zenml-dashboard-is-now-available) is now available with all ZenML deployments. * [ZenML Profiles have been removed](migration-zero-twenty.md#removal-of-profiles-and-the-local-yaml-database) in favor of ZenML Projects. You need to [manually migrate your existing ZenML Profiles](migration-zero-twenty.md#-how-to-migrate-your-profiles) after the update. * the [configuration of Stack Components is now decoupled from their implementation](migration-zero-twenty.md#decoupling-stack-component-configuration-from-implementation). If you extended ZenML with custom stack component implementations, you may need to update the way they are registered in ZenML. @@ -24,7 +24,7 @@ High-level overview of the changes: ## ZenML takes over the Metadata Store role -ZenML can now run [as a server](../../../../getting-started/core-concepts.md#zenml-server-and-dashboard) that can be accessed via a REST API and also comes with a visual user interface (called the ZenML Dashboard). This server can be deployed in arbitrary environments (local, on-prem, via Docker, on AWS, GCP, Azure etc.) and supports user management, workspace scoping, and more. +ZenML can now run [as a server](../../../getting-started/core-concepts.md#zenml-server-and-dashboard) that can be accessed via a REST API and also comes with a visual user interface (called the ZenML Dashboard). This server can be deployed in arbitrary environments (local, on-prem, via Docker, on AWS, GCP, Azure etc.) and supports user management, workspace scoping, and more. 
The release introduces a series of commands to facilitate managing the lifecycle of the ZenML server and to access the pipeline and pipeline run information: diff --git a/docs/book/how-to/advanced-topics/manage-zenml-server/troubleshoot-your-deployed-server.md b/docs/book/how-to/manage-zenml-server/troubleshoot-your-deployed-server.md similarity index 100% rename from docs/book/how-to/advanced-topics/manage-zenml-server/troubleshoot-your-deployed-server.md rename to docs/book/how-to/manage-zenml-server/troubleshoot-your-deployed-server.md diff --git a/docs/book/how-to/advanced-topics/manage-zenml-server/upgrade-zenml-server.md b/docs/book/how-to/manage-zenml-server/upgrade-zenml-server.md similarity index 100% rename from docs/book/how-to/advanced-topics/manage-zenml-server/upgrade-zenml-server.md rename to docs/book/how-to/manage-zenml-server/upgrade-zenml-server.md diff --git a/docs/book/how-to/advanced-topics/manage-zenml-server/using-zenml-server-in-prod.md b/docs/book/how-to/manage-zenml-server/using-zenml-server-in-prod.md similarity index 95% rename from docs/book/how-to/advanced-topics/manage-zenml-server/using-zenml-server-in-prod.md rename to docs/book/how-to/manage-zenml-server/using-zenml-server-in-prod.md index 6ffadb6496a..82bd3265d27 100644 --- a/docs/book/how-to/advanced-topics/manage-zenml-server/using-zenml-server-in-prod.md +++ b/docs/book/how-to/manage-zenml-server/using-zenml-server-in-prod.md @@ -44,7 +44,7 @@ To scale your ZenML server deployed as a service on ECS, you can follow the step - If you scroll down, you will see the "Service auto scaling - optional" section. - Here you can enable autoscaling and set the minimum and maximum number of tasks to run for your service and also the ECS service metric to use for scaling. 
-![Image showing autoscaling settings for a service](../../../.gitbook/assets/ecs_autoscaling.png) +![Image showing autoscaling settings for a service](../../.gitbook/assets/ecs_autoscaling.png) {% endtab %} @@ -60,7 +60,7 @@ To scale your ZenML server deployed on Cloud Run, you can follow the steps below - Scroll down to the "Revision auto-scaling" section. - Here you can set the minimum and maximum number of instances to run for your service. -![Image showing autoscaling settings for a service](../../../.gitbook/assets/cloudrun_autoscaling.png) +![Image showing autoscaling settings for a service](../../.gitbook/assets/cloudrun_autoscaling.png) {% endtab %} {% tab title="Docker Compose" %} @@ -159,7 +159,7 @@ sum by(namespace) (rate(container_cpu_usage_seconds_total{namespace=~"zenml.*"}[ This query would give you the CPU utilization of your server pods in all namespaces that start with `zenml`. The image below shows how this query looks in Grafana. -![Image showing CPU utilization of ZenML server pods](../../../.gitbook/assets/grafana_dashboard.png) +![Image showing CPU utilization of ZenML server pods](../../.gitbook/assets/grafana_dashboard.png) {% endtab %} @@ -168,7 +168,7 @@ On ECS, you can utilize the [CloudWatch integration](https://docs.aws.amazon.com In the "Health and metrics" section of your ECS console, you should see metrics pertaining to your ZenML service like CPU utilization and Memory utilization. -![Image showing CPU utilization ECS](../../../.gitbook/assets/ecs_cpu_utilization.png) +![Image showing CPU utilization ECS](../../.gitbook/assets/ecs_cpu_utilization.png) {% endtab %} {% tab title="Cloud Run" %} @@ -176,7 +176,7 @@ In Cloud Run, you can utilize the [Cloud Monitoring integration](https://cloud.g The "Metrics" tab in the Cloud Run console will show you metrics like Container CPU utilization, Container memory utilization, and more. 
-![Image showing metrics in Cloud Run](../../../.gitbook/assets/cloudrun_metrics.png) +![Image showing metrics in Cloud Run](../../.gitbook/assets/cloudrun_metrics.png) {% endtab %} {% endtabs %} diff --git a/docs/book/how-to/infrastructure-deployment/configure-python-environments/README.md b/docs/book/how-to/pipeline-development/configure-python-environments/README.md similarity index 100% rename from docs/book/how-to/infrastructure-deployment/configure-python-environments/README.md rename to docs/book/how-to/pipeline-development/configure-python-environments/README.md diff --git a/docs/book/how-to/infrastructure-deployment/configure-python-environments/configure-the-server-environment.md b/docs/book/how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md similarity index 100% rename from docs/book/how-to/infrastructure-deployment/configure-python-environments/configure-the-server-environment.md rename to docs/book/how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md diff --git a/docs/book/how-to/infrastructure-deployment/configure-python-environments/handling-dependencies.md b/docs/book/how-to/pipeline-development/configure-python-environments/handling-dependencies.md similarity index 100% rename from docs/book/how-to/infrastructure-deployment/configure-python-environments/handling-dependencies.md rename to docs/book/how-to/pipeline-development/configure-python-environments/handling-dependencies.md diff --git a/docs/book/how-to/project-setup-and-management/develop-locally/README.md b/docs/book/how-to/pipeline-development/develop-locally/README.md similarity index 100% rename from docs/book/how-to/project-setup-and-management/develop-locally/README.md rename to docs/book/how-to/pipeline-development/develop-locally/README.md diff --git a/docs/book/how-to/project-setup-and-management/develop-locally/keep-your-dashboard-server-clean.md 
b/docs/book/how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md similarity index 100% rename from docs/book/how-to/project-setup-and-management/develop-locally/keep-your-dashboard-server-clean.md rename to docs/book/how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md diff --git a/docs/book/how-to/project-setup-and-management/develop-locally/local-prod-pipeline-variants.md b/docs/book/how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md similarity index 100% rename from docs/book/how-to/project-setup-and-management/develop-locally/local-prod-pipeline-variants.md rename to docs/book/how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md diff --git a/docs/book/how-to/advanced-topics/run-remote-notebooks/README.md b/docs/book/how-to/pipeline-development/run-remote-notebooks/README.md similarity index 100% rename from docs/book/how-to/advanced-topics/run-remote-notebooks/README.md rename to docs/book/how-to/pipeline-development/run-remote-notebooks/README.md diff --git a/docs/book/how-to/advanced-topics/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md b/docs/book/how-to/pipeline-development/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md similarity index 100% rename from docs/book/how-to/advanced-topics/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md rename to docs/book/how-to/pipeline-development/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md diff --git a/docs/book/how-to/advanced-topics/run-remote-notebooks/run-a-single-step-from-a-notebook.md b/docs/book/how-to/pipeline-development/run-remote-notebooks/run-a-single-step-from-a-notebook.md similarity index 100% rename from docs/book/how-to/advanced-topics/run-remote-notebooks/run-a-single-step-from-a-notebook.md rename to docs/book/how-to/pipeline-development/run-remote-notebooks/run-a-single-step-from-a-notebook.md diff 
--git a/docs/book/how-to/advanced-topics/training-with-gpus/README.md b/docs/book/how-to/pipeline-development/training-with-gpus/README.md similarity index 100% rename from docs/book/how-to/advanced-topics/training-with-gpus/README.md rename to docs/book/how-to/pipeline-development/training-with-gpus/README.md diff --git a/docs/book/how-to/advanced-topics/training-with-gpus/accelerate-distributed-training.md b/docs/book/how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md similarity index 100% rename from docs/book/how-to/advanced-topics/training-with-gpus/accelerate-distributed-training.md rename to docs/book/how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md diff --git a/docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-python.md b/docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-python.md index 61e3f459f6d..a6275ad86a1 100644 --- a/docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-python.md +++ b/docs/book/how-to/pipeline-development/trigger-pipelines/use-templates-python.md @@ -110,7 +110,7 @@ def loads_data_and_triggers_training(): Read more about the [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) and [`trigger_pipeline`](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client) function object in the [SDK Docs](https://sdkdocs.zenml.io/). -Read more about Unmaterialized Artifacts [here](../../data-artifact-management/handle-data-artifacts/unmaterialized-artifacts.md). +Read more about Unmaterialized Artifacts [here](../../data-artifact-management/complex-usecases/unmaterialized-artifacts.md).
diff --git a/docs/book/how-to/pipeline-development/use-configuration-files/what-can-be-configured.md b/docs/book/how-to/pipeline-development/use-configuration-files/what-can-be-configured.md index 5816d6c7679..5ec7c57f782 100644 --- a/docs/book/how-to/pipeline-development/use-configuration-files/what-can-be-configured.md +++ b/docs/book/how-to/pipeline-development/use-configuration-files/what-can-be-configured.md @@ -107,10 +107,10 @@ steps: These are boolean flags for various configurations: -* `enable_artifact_metadata`: Whether to [associate metadata with artifacts or not](../handle-data-artifacts/handle-custom-data-types.md#optional-which-metadata-to-extract-for-the-artifact). -* `enable_artifact_visualization`: Whether to [attach visualizations of artifacts](../visualize-artifacts/README.md). +* `enable_artifact_metadata`: Whether to [associate metadata with artifacts or not](../../data-artifact-management/handle-data-artifacts/handle-custom-data-types.md#optional-which-metadata-to-extract-for-the-artifact). +* `enable_artifact_visualization`: Whether to [attach visualizations of artifacts](../../data-artifact-management/visualize-artifacts/README.md). * `enable_cache`: Utilize [caching](../build-pipelines/control-caching-behavior.md) or not. -* `enable_step_logs`: Enable tracking [step logs](../control-logging/enable-or-disable-logs-storing.md). +* `enable_step_logs`: Enable tracking [step logs](../../control-logging/enable-or-disable-logs-storing.md). 
```yaml enable_artifact_metadata: True diff --git a/docs/book/how-to/project-setup-and-management/collaborate-with-team/README.md b/docs/book/how-to/project-setup-and-management/collaborate-with-team/README.md new file mode 100644 index 00000000000..3ee43e702fe --- /dev/null +++ b/docs/book/how-to/project-setup-and-management/collaborate-with-team/README.md @@ -0,0 +1,3 @@ +--- +icon: people-group +--- \ No newline at end of file diff --git a/docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/access-management.md b/docs/book/how-to/project-setup-and-management/collaborate-with-team/access-management.md similarity index 100% rename from docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/access-management.md rename to docs/book/how-to/project-setup-and-management/collaborate-with-team/access-management.md diff --git a/docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/using-project-templates.md b/docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md similarity index 100% rename from docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/using-project-templates.md rename to docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md diff --git a/docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/create-your-own-template.md b/docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md similarity index 86% rename from docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/create-your-own-template.md rename to docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md index 3f653544027..491b850d1ac 100644 --- a/docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/create-your-own-template.md +++ 
b/docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md @@ -37,7 +37,7 @@ Replace `v1.0.0` with the git tag of the version you want to use. That's it! Now you have your own ZenML project template that you can use to quickly set up new ML projects. Remember to keep your template up-to-date with the latest best practices and changes in your ML workflows. -Our [Production Guide](../../../user-guide/production-guide/README.md) documentation is built around the `E2E Batch` project template codes. Most examples will be based on it, so we highly recommend you to install the `e2e_batch` template with `--template-with-defaults` flag before diving deeper into this documentation section, so you can follow this guide along using your own local environment. +Our [Production Guide](../../../../user-guide/production-guide/README.md) documentation is built around the `E2E Batch` project template code. Most examples are based on it, so we highly recommend you install the `e2e_batch` template with the `--template-with-defaults` flag before diving deeper into this documentation section, so you can follow along with this guide using your own local environment. 
```bash mkdir e2e_batch diff --git a/docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/shared-components-for-teams.md b/docs/book/how-to/project-setup-and-management/collaborate-with-team/shared-components-for-teams.md similarity index 100% rename from docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/shared-components-for-teams.md rename to docs/book/how-to/project-setup-and-management/collaborate-with-team/shared-components-for-teams.md diff --git a/docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/stacks-pipelines-models.md b/docs/book/how-to/project-setup-and-management/collaborate-with-team/stacks-pipelines-models.md similarity index 100% rename from docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/stacks-pipelines-models.md rename to docs/book/how-to/project-setup-and-management/collaborate-with-team/stacks-pipelines-models.md diff --git a/docs/book/how-to/interact-with-secrets.md b/docs/book/how-to/project-setup-and-management/interact-with-secrets.md similarity index 100% rename from docs/book/how-to/interact-with-secrets.md rename to docs/book/how-to/project-setup-and-management/interact-with-secrets.md diff --git a/docs/book/reference/environment-variables.md b/docs/book/reference/environment-variables.md index a3f14338a3b..c6452c26e40 100644 --- a/docs/book/reference/environment-variables.md +++ b/docs/book/reference/environment-variables.md @@ -17,7 +17,7 @@ Choose from `INFO`, `WARN`, `ERROR`, `CRITICAL`, `DEBUG`. ## Disable step logs -Usually, ZenML [stores step logs in the artifact store](../how-to/advanced-topics/control-logging/enable-or-disable-logs-storing.md), but this can sometimes cause performance bottlenecks, especially if the code utilizes progress bars. 
+Usually, ZenML [stores step logs in the artifact store](../how-to/control-logging/enable-or-disable-logs-storing.md), but this can sometimes cause performance bottlenecks, especially if the code utilizes progress bars. If you want to configure whether logged output from steps is stored or not, set the `ZENML_DISABLE_STEP_LOGS_STORAGE` environment variable to `true`. Note that this will mean that logs from your steps will no longer be stored and thus won't be visible on the dashboard anymore. @@ -81,7 +81,7 @@ If you wish to disable colorful logging, set the following environment variable: ZENML_LOGGING_COLORS_DISABLED=true ``` -Note that setting this on the [client environment](../how-to/infrastructure-deployment/configure-python-environments/README.md#client-environment-or-the-runner-environment) (e.g. your local machine which runs the pipeline) will automatically disable colorful logging on remote orchestrators. If you wish to disable it locally, but turn on for remote orchestrators, you can set the `ZENML_LOGGING_COLORS_DISABLED` environment variable in your orchestrator's environment as follows: +Note that setting this on the [client environment](../how-to/pipeline-development/configure-python-environments/README.md#client-environment-or-the-runner-environment) (e.g. your local machine which runs the pipeline) will automatically disable colorful logging on remote orchestrators. 
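Both flags discussed here are plain environment variables read by the ZenML client. A minimal shell sketch of setting them locally (variable names are taken from the text above; the effect on a given ZenML version is best confirmed against its release notes):

```shell
# Stop persisting step logs to the artifact store; logs from these runs
# will no longer be visible on the dashboard.
export ZENML_DISABLE_STEP_LOGS_STORAGE=true

# Switch local client logging to plain, uncolored output.
export ZENML_LOGGING_COLORS_DISABLED=true

# Sanity-check what the client process will see:
echo "step log storage disabled: $ZENML_DISABLE_STEP_LOGS_STORAGE"
echo "colorful logging disabled: $ZENML_LOGGING_COLORS_DISABLED"
```

Because these are ordinary environment variables, they must be set in the environment that actually runs the pipeline code, which for remote orchestrators is the container environment rather than your local shell.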
If you wish to disable it locally, but turn on for remote orchestrators, you can set the `ZENML_LOGGING_COLORS_DISABLED` environment variable in your orchestrator's environment as follows: ```python docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"}) diff --git a/docs/book/reference/how-do-i.md b/docs/book/reference/how-do-i.md index d6cef2f9a0f..4ac076dd435 100644 --- a/docs/book/reference/how-do-i.md +++ b/docs/book/reference/how-do-i.md @@ -21,7 +21,7 @@ From there, each of the custom stack component types has a dedicated section abo * **dependency clashes** mitigation with ZenML? -Check out [our dedicated documentation page](../how-to/infrastructure-deployment/configure-python-environments/handling-dependencies.md) on some ways you can try to solve these dependency and versioning issues. +Check out [our dedicated documentation page](../how-to/pipeline-development/configure-python-environments/handling-dependencies.md) on some ways you can try to solve these dependency and versioning issues. * **deploy cloud infrastructure** and/or MLOps stacks? diff --git a/docs/book/reference/python-client.md b/docs/book/reference/python-client.md index fad315545bf..441f17d1125 100644 --- a/docs/book/reference/python-client.md +++ b/docs/book/reference/python-client.md @@ -43,7 +43,7 @@ These are the main ZenML resources that you can interact with via the ZenML Clie * **Step Runs**: The steps of all pipeline runs. Mainly useful for directly fetching a specific step of a run by its ID. * **Artifacts**: Information about all artifacts that were written to your artifact stores as part of pipeline runs. * **Schedules**: Metadata about the schedules that you have used to [schedule pipeline runs](../how-to/pipeline-development/build-pipelines/schedule-a-pipeline.md). -* **Builds**: The pipeline-specific Docker images that were created when [containerizing your pipeline](../how-to/infrastructure-deployment/customize-docker-builds/README.md). 
+* **Builds**: The pipeline-specific Docker images that were created when [containerizing your pipeline](../how-to/customize-docker-builds/README.md). * **Code Repositories**: The git code repositories that you have connected with your ZenML instance. See [here](../user-guide/production-guide/connect-code-repository.md) for more information. {% hint style="info" %} @@ -59,7 +59,7 @@ Checkout the [documentation on fetching runs](../how-to/pipeline-development/bui * Integration-enabled flavors like the [Kubeflow orchestrator](../component-guide/orchestrators/kubeflow.md), * Custom flavors that you have [created yourself](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). * **User**: The users registered in your ZenML instance. If you are running locally, there will only be a single `default` user. -* **Secrets**: The infrastructure authentication secrets that you have registered in the [ZenML Secret Store](../how-to/interact-with-secrets.md). +* **Secrets**: The infrastructure authentication secrets that you have registered in the [ZenML Secret Store](../how-to/project-setup-and-management/interact-with-secrets.md). * **Service Connectors**: The service connectors that you have set up to [connect ZenML to your infrastructure](../how-to/infrastructure-deployment/auth-management/README.md). 
### Client Methods diff --git a/docs/book/toc.md b/docs/book/toc.md index aff3ce0c7b9..193547242a4 100644 --- a/docs/book/toc.md +++ b/docs/book/toc.md @@ -67,23 +67,33 @@ * [Evaluation for finetuning](user-guide/llmops-guide/finetuning-llms/evaluation-for-finetuning.md) * [Deploying finetuned models](user-guide/llmops-guide/finetuning-llms/deploying-finetuned-models.md) * [Next steps](user-guide/llmops-guide/finetuning-llms/next-steps.md) + ## How-To +* [Manage your ZenML server](how-to/manage-zenml-server/README.md) + * [Connect to a server](how-to/manage-zenml-server/connecting-to-zenml/README.md) + * [Connect in with your User (interactive)](how-to/manage-zenml-server/connecting-to-zenml/connect-in-with-your-user-interactive.md) + * [Connect with a Service Account](how-to/manage-zenml-server/connecting-to-zenml/connect-with-a-service-account.md) + * [Upgrade your ZenML server](how-to/manage-zenml-server/upgrade-zenml-server.md) + * [Best practices for upgrading ZenML](how-to/manage-zenml-server/best-practices-upgrading-zenml.md) + * [Using ZenML server in production](how-to/manage-zenml-server/using-zenml-server-in-prod.md) + * [Troubleshoot your ZenML server](how-to/manage-zenml-server/troubleshoot-your-deployed-server.md) + * [Migration guide](how-to/manage-zenml-server/migration-guide/migration-guide.md) + * [Migration guide 0.13.2 → 0.20.0](how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md) + * [Migration guide 0.23.0 → 0.30.0](how-to/manage-zenml-server/migration-guide/migration-zero-thirty.md) + * [Migration guide 0.39.1 → 0.41.0](how-to/manage-zenml-server/migration-guide/migration-zero-forty.md) + * [Migration guide 0.58.2 → 0.60.0](how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md) * [Project Setup and Management](how-to/project-setup-and-management/README.md) * [Set up a ZenML project](how-to/project-setup-and-management/setting-up-a-project-repository/README.md) * [Set up a 
repository](how-to/project-setup-and-management/setting-up-a-project-repository/set-up-repository.md) * [Connect your git repository](how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md) - * [Project templates](how-to/project-setup-and-management/setting-up-a-project-repository/using-project-templates.md) - * [Create your own template](how-to/project-setup-and-management/setting-up-a-project-repository/create-your-own-template.md) - * [Shared components for teams](how-to/project-setup-and-management/setting-up-a-project-repository/shared-components-for-teams.md) - * [Stacks, pipelines and models](how-to/project-setup-and-management/setting-up-a-project-repository/stacks-pipelines-models.md) - * [Access management](how-to/project-setup-and-management/setting-up-a-project-repository/access-management.md) - * [Develop locally](how-to/project-setup-and-management/develop-locally/README.md) - * [Use config files to develop locally](how-to/project-setup-and-management/develop-locally/local-prod-pipeline-variants.md) - * [Keep your pipelines and dashboard clean](how-to/project-setup-and-management/develop-locally/keep-your-dashboard-server-clean.md) - * [Connect to a server](how-to/project-setup-and-management/connecting-to-zenml/README.md) - * [Connect in with your User (interactive)](how-to/project-setup-and-management/connecting-to-zenml/connect-in-with-your-user-interactive.md) - * [Connect with a Service Account](how-to/project-setup-and-management/connecting-to-zenml/connect-with-a-service-account.md) + * [Collaborate with your team](how-to/project-setup-and-management/collaborate-with-team/README.md) + * [Project templates](how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md) + * [Create your own template](how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md) + * [Shared components for 
teams](how-to/project-setup-and-management/collaborate-with-team/shared-components-for-teams.md) + * [Setting up Stacks, pipelines and models](how-to/project-setup-and-management/collaborate-with-team/stacks-pipelines-models.md) + * [Access management](how-to/project-setup-and-management/collaborate-with-team/access-management.md) + * [Interact with secrets](how-to/project-setup-and-management/interact-with-secrets.md) * [Pipeline Development](how-to/pipeline-development/README.md) * [Build a pipeline](how-to/pipeline-development/build-pipelines/README.md) * [Use pipeline/step parameters](how-to/pipeline-development/build-pipelines/use-pipeline-step-parameters.md) @@ -106,6 +116,9 @@ * [Run an individual step](how-to/pipeline-development/build-pipelines/run-an-individual-step.md) * [Fetching pipelines](how-to/pipeline-development/build-pipelines/fetching-pipelines.md) * [Get past pipeline/step runs](how-to/pipeline-development/build-pipelines/get-past-pipeline-step-runs.md) + * [Develop locally](how-to/pipeline-development/develop-locally/README.md) + * [Use config files to develop locally](how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md) + * [Keep your pipelines and dashboard clean](how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md) * [Trigger a pipeline](how-to/pipeline-development/trigger-pipelines/README.md) * [Use templates: Python SDK](how-to/pipeline-development/trigger-pipelines/use-templates-python.md) * [Use templates: CLI](how-to/pipeline-development/trigger-pipelines/use-templates-cli.md) @@ -118,8 +131,26 @@ * [Configuration hierarchy](how-to/pipeline-development/use-configuration-files/configuration-hierarchy.md) * [Find out which configuration was used for a run](how-to/pipeline-development/use-configuration-files/retrieve-used-configuration-of-a-run.md) * [Autogenerate a template yaml file](how-to/pipeline-development/use-configuration-files/autogenerate-a-template-yaml-file.md) + * [Train 
with GPUs](how-to/pipeline-development/training-with-gpus/README.md) + * [Distributed Training with 🤗 Accelerate](how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md) + * [Run remote pipelines from notebooks](how-to/pipeline-development/run-remote-notebooks/README.md) + * [Limitations of defining steps in notebook cells](how-to/pipeline-development/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md) + * [Run a single step from a notebook](how-to/pipeline-development/run-remote-notebooks/run-a-single-step-from-a-notebook.md) + * [Configure Python environments](how-to/pipeline-development/configure-python-environments/README.md) + * [Handling dependencies](how-to/pipeline-development/configure-python-environments/handling-dependencies.md) + * [Configure the server environment](how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md) +* [Customize Docker builds](how-to/customize-docker-builds/README.md) + * [Docker settings on a pipeline](how-to/customize-docker-builds/docker-settings-on-a-pipeline.md) + * [Docker settings on a step](how-to/customize-docker-builds/docker-settings-on-a-step.md) + * [Use a prebuilt image for pipeline execution](how-to/customize-docker-builds/use-a-prebuilt-image.md) + * [Specify pip dependencies and apt packages](how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md) + * [How to use a private PyPI repository](how-to/customize-docker-builds/how-to-use-a-private-pypi-repository.md) + * [Use your own Dockerfiles](how-to/customize-docker-builds/use-your-own-docker-files.md) + * [Which files are built into the image](how-to/customize-docker-builds/which-files-are-built-into-the-image.md) + * [How to reuse builds](how-to/customize-docker-builds/how-to-reuse-builds.md) + * [Define where an image is built](how-to/customize-docker-builds/define-where-an-image-is-built.md) * [Data and Artifact 
Management](how-to/data-artifact-management/README.md) - * [Handle Data/Artifacts](how-to/data-artifact-management/handle-data-artifacts/README.md) + * [Understand ZenML artifacts](how-to/data-artifact-management/handle-data-artifacts/README.md) * [How ZenML stores data](how-to/data-artifact-management/handle-data-artifacts/artifact-versioning.md) * [Return multiple outputs from a step](how-to/data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md) * [Delete an artifact](how-to/data-artifact-management/handle-data-artifacts/delete-an-artifact.md) @@ -128,11 +159,12 @@ * [Get arbitrary artifacts in a step](how-to/data-artifact-management/handle-data-artifacts/get-arbitrary-artifacts-in-a-step.md) * [Handle custom data types](how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md) * [Load artifacts into memory](how-to/data-artifact-management/handle-data-artifacts/load-artifacts-into-memory.md) - * [Datasets in ZenML](how-to/data-artifact-management/handle-data-artifacts/datasets.md) - * [Manage big data](how-to/data-artifact-management/handle-data-artifacts/manage-big-data.md) - * [Skipping materialization](how-to/data-artifact-management/handle-data-artifacts/unmaterialized-artifacts.md) - * [Passing artifacts between pipelines](how-to/data-artifact-management/handle-data-artifacts/passing-artifacts-between-pipelines.md) - * [Register Existing Data as a ZenML Artifact](how-to/data-artifact-management/handle-data-artifacts/registering-existing-data.md) + * [Complex use-cases](how-to/data-artifact-management/complex-usecases/README.md) + * [Datasets in ZenML](how-to/data-artifact-management/complex-usecases/datasets.md) + * [Manage big data](how-to/data-artifact-management/complex-usecases/manage-big-data.md) + * [Skipping materialization](how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md) + * [Passing artifacts between 
pipelines](how-to/data-artifact-management/complex-usecases/passing-artifacts-between-pipelines.md) + * [Register Existing Data as a ZenML Artifact](how-to/data-artifact-management/complex-usecases/registering-existing-data.md) * [Visualizing artifacts](how-to/data-artifact-management/visualize-artifacts/README.md) * [Default visualizations](how-to/data-artifact-management/visualize-artifacts/types-of-visualizations.md) * [Creating custom visualizations](how-to/data-artifact-management/visualize-artifacts/creating-custom-visualizations.md) @@ -158,7 +190,7 @@ * [Special Metadata Types](how-to/model-management-metrics/track-metrics-metadata/logging-metadata.md) * [Fetch metadata within steps](how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md) * [Fetch metadata during pipeline composition](how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-pipeline.md) -* [Infrastructure and Deployment](how-to/infrastructure-deployment/README.md) +* [Stack infrastructure and deployment](how-to/infrastructure-deployment/README.md) * [Manage stacks & components](how-to/infrastructure-deployment/stack-deployment/README.md) * [Deploy a cloud stack with ZenML](how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md) * [Deploy a cloud stack with Terraform](how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md) @@ -169,17 +201,7 @@ * [Infrastructure as code](how-to/infrastructure-deployment/infrastructure-as-code/README.md) * [Manage your stacks with Terraform](how-to/infrastructure-deployment/infrastructure-as-code/terraform-stack-management.md) * [ZenML & Terraform Best Practices](how-to/infrastructure-deployment/infrastructure-as-code/best-practices.md) - * [Customize Docker builds](how-to/infrastructure-deployment/customize-docker-builds/README.md) - * [Docker settings on a pipeline](how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-pipeline.md) 
-    * [Docker settings on a step](how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-step.md)
-    * [Use a prebuilt image for pipeline execution](how-to/infrastructure-deployment/customize-docker-builds/use-a-prebuilt-image.md)
-    * [Specify pip dependencies and apt packages](how-to/infrastructure-deployment/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md)
-    * [How to use a private PyPI repository](how-to/infrastructure-deployment/customize-docker-builds/how-to-use-a-private-pypi-repository.md)
-    * [Use your own Dockerfiles](how-to/infrastructure-deployment/customize-docker-builds/use-your-own-docker-files.md)
-    * [Which files are built into the image](how-to/infrastructure-deployment/customize-docker-builds/which-files-are-built-into-the-image.md)
-    * [How to reuse builds](how-to/infrastructure-deployment/customize-docker-builds/how-to-reuse-builds.md)
-    * [Define where an image is built](how-to/infrastructure-deployment/customize-docker-builds/define-where-an-image-is-built.md)
-  * [Connect services](how-to/infrastructure-deployment/auth-management/README.md)
+  * [Connect services via connectors](how-to/infrastructure-deployment/auth-management/README.md)
     * [Service Connectors guide](how-to/infrastructure-deployment/auth-management/service-connectors-guide.md)
     * [Security best practices](how-to/infrastructure-deployment/auth-management/best-security-practices.md)
     * [Docker Service Connector](how-to/infrastructure-deployment/auth-management/docker-service-connector.md)
@@ -188,31 +210,12 @@
     * [GCP Service Connector](how-to/infrastructure-deployment/auth-management/gcp-service-connector.md)
     * [Azure Service Connector](how-to/infrastructure-deployment/auth-management/azure-service-connector.md)
     * [HyperAI Service Connector](how-to/infrastructure-deployment/auth-management/hyperai-service-connector.md)
-  * [Configure Python environments](how-to/infrastructure-deployment/configure-python-environments/README.md)
-    * [Handling dependencies](how-to/infrastructure-deployment/configure-python-environments/handling-dependencies.md)
-    * [Configure the server environment](how-to/infrastructure-deployment/configure-python-environments/configure-the-server-environment.md)
-* [Advanced Topics](how-to/advanced-topics/README.md)
-  * [Train with GPUs](how-to/advanced-topics/training-with-gpus/README.md)
-    * [Distributed Training with 🤗 Accelerate](how-to/advanced-topics/training-with-gpus/accelerate-distributed-training.md)
-  * [Run remote pipelines from notebooks](how-to/advanced-topics/run-remote-notebooks/README.md)
-    * [Limitations of defining steps in notebook cells](how-to/advanced-topics/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md)
-    * [Run a single step from a notebook](how-to/advanced-topics/run-remote-notebooks/run-a-single-step-from-a-notebook.md)
-  * [Manage your ZenML server](how-to/advanced-topics/manage-zenml-server/README.md)
-    * [Best practices for upgrading ZenML](how-to/advanced-topics/manage-zenml-server/best-practices-upgrading-zenml.md)
-    * [Upgrade your ZenML server](how-to/advanced-topics/manage-zenml-server/upgrade-zenml-server.md)
-    * [Using ZenML server in production](how-to/advanced-topics/manage-zenml-server/using-zenml-server-in-prod.md)
-    * [Troubleshoot your ZenML server](how-to/advanced-topics/manage-zenml-server/troubleshoot-your-deployed-server.md)
-    * [Migration guide](how-to/advanced-topics/manage-zenml-server/migration-guide/migration-guide.md)
-      * [Migration guide 0.13.2 → 0.20.0](how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-twenty.md)
-      * [Migration guide 0.23.0 → 0.30.0](how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-thirty.md)
-      * [Migration guide 0.39.1 → 0.41.0](how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-forty.md)
-      * [Migration guide 0.58.2 → 0.60.0](how-to/advanced-topics/manage-zenml-server/migration-guide/migration-zero-sixty.md)
-  * [Control logging](how-to/advanced-topics/control-logging/README.md)
-    * [View logs on the dashboard](how-to/advanced-topics/control-logging/view-logs-on-the-dasbhoard.md)
-    * [Enable or disable logs storage](how-to/advanced-topics/control-logging/enable-or-disable-logs-storing.md)
-    * [Set logging verbosity](how-to/advanced-topics/control-logging/set-logging-verbosity.md)
-    * [Disable `rich` traceback output](how-to/advanced-topics/control-logging/disable-rich-traceback.md)
-    * [Disable colorful logging](how-to/advanced-topics/control-logging/disable-colorful-logging.md)
+* [Control logging](how-to/control-logging/README.md)
+  * [View logs on the dashboard](how-to/control-logging/view-logs-on-the-dasbhoard.md)
+  * [Enable or disable logs storage](how-to/control-logging/enable-or-disable-logs-storing.md)
+  * [Set logging verbosity](how-to/control-logging/set-logging-verbosity.md)
+  * [Disable `rich` traceback output](how-to/control-logging/disable-rich-traceback.md)
+  * [Disable colorful logging](how-to/control-logging/disable-colorful-logging.md)
 * [Popular integrations](how-to/popular-integrations/README.md)
   * [Run on AWS](how-to/popular-integrations/aws-guide.md)
   * [Run on GCP](how-to/popular-integrations/gcp-guide.md)
@@ -221,10 +224,9 @@
   * [Kubernetes](how-to/popular-integrations/kubernetes.md)
   * [MLflow](how-to/popular-integrations/mlflow.md)
   * [Skypilot](how-to/popular-integrations/skypilot.md)
-* [Interact with secrets](how-to/interact-with-secrets.md)
-* [Debug and solve issues](how-to/debug-and-solve-issues.md)
-* [Contribute to ZenML](how-to/contribute-to-zenml/README.md)
+* [Contribute to/Extend ZenML](how-to/contribute-to-zenml/README.md)
   * [Implement a custom integration](how-to/contribute-to-zenml/implement-a-custom-integration.md)
+* [Debug and solve issues](how-to/debug-and-solve-issues.md)
 
 ## Stack Components
 
diff --git a/docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md b/docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md
index 6f995f7439d..def093ac5ae 100644
--- a/docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md
+++ b/docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md
@@ -186,7 +186,7 @@ def finetuning_pipeline(...):
 ```
 
 This configuration ensures that your training environment has all the necessary
-components for distributed training. For more details, see the [Accelerate documentation](../../../how-to/advanced-topics/training-with-gpus/accelerate-distributed-training.md).
+components for distributed training. For more details, see the [Accelerate documentation](../../../how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md).
 
 ## Dataset iteration
 
diff --git a/docs/book/user-guide/production-guide/ci-cd.md b/docs/book/user-guide/production-guide/ci-cd.md
index 7470bf9554c..eee740d49a9 100644
--- a/docs/book/user-guide/production-guide/ci-cd.md
+++ b/docs/book/user-guide/production-guide/ci-cd.md
@@ -69,8 +69,8 @@ This step is optional, all you'll need for certain is a stack that runs remotely
 storage). The rest is up to you. You might for example want to parametrize your pipeline to use different data sources for the respective environments. You can also use different [configuration files](../../how-to/configuring-zenml/configuring-zenml.md) for the different environments to configure the [Model](../../how-to/model-management-metrics/model-control-plane/README.md), the
-[DockerSettings](../../how-to/infrastructure-deployment/customize-docker-builds/docker-settings-on-a-pipeline.md), the [ResourceSettings like
-accelerators](../../how-to/advanced-topics/training-with-gpus/README.md) differently for the different environments.
+[DockerSettings](../../how-to/customize-docker-builds/docker-settings-on-a-pipeline.md), the [ResourceSettings like
+accelerators](../../how-to/pipeline-development/training-with-gpus/README.md) differently for the different environments.
 
 ### Trigger a pipeline on a Pull Request (Merge Request)
 
diff --git a/docs/book/user-guide/production-guide/cloud-orchestration.md b/docs/book/user-guide/production-guide/cloud-orchestration.md
index fae93eae613..107d5e9b625 100644
--- a/docs/book/user-guide/production-guide/cloud-orchestration.md
+++ b/docs/book/user-guide/production-guide/cloud-orchestration.md
@@ -27,7 +27,7 @@ for a shortcut on how to deploy & register a cloud stack.
 The easiest cloud orchestrator to start with is the [Skypilot](https://skypilot.readthedocs.io/) orchestrator running on a public cloud. The advantage of Skypilot is that it simply provisions a VM to execute the pipeline on your cloud provider.
 
-Coupled with Skypilot, we need a mechanism to package your code and ship it to the cloud for Skypilot to do its thing. ZenML uses [Docker](https://www.docker.com/) to achieve this. Every time you run a pipeline with a remote orchestrator, [ZenML builds an image](../../how-to/setting-up-a-project-repository/connect-your-git-repository.md) for the entire pipeline (and optionally each step of a pipeline depending on your [configuration](../../how-to/infrastructure-deployment/customize-docker-builds/README.md)). This image contains the code, requirements, and everything else needed to run the steps of the pipeline in any environment. ZenML then pushes this image to the container registry configured in your stack, and the orchestrator pulls the image when it's ready to execute a step.
+Coupled with Skypilot, we need a mechanism to package your code and ship it to the cloud for Skypilot to do its thing. ZenML uses [Docker](https://www.docker.com/) to achieve this. Every time you run a pipeline with a remote orchestrator, [ZenML builds an image](../../how-to/setting-up-a-project-repository/connect-your-git-repository.md) for the entire pipeline (and optionally each step of a pipeline depending on your [configuration](../../how-to/customize-docker-builds/README.md)). This image contains the code, requirements, and everything else needed to run the steps of the pipeline in any environment. ZenML then pushes this image to the container registry configured in your stack, and the orchestrator pulls the image when it's ready to execute a step.
 
 To summarize, here is the broad sequence of events that happen when you run a pipeline with such a cloud stack:
 
diff --git a/docs/book/user-guide/production-guide/configure-pipeline.md b/docs/book/user-guide/production-guide/configure-pipeline.md
index ea1b3d375fd..cdfd95a2618 100644
--- a/docs/book/user-guide/production-guide/configure-pipeline.md
+++ b/docs/book/user-guide/production-guide/configure-pipeline.md
@@ -148,7 +148,7 @@ steps:
 
 {% hint style="info" %}
 Read more about settings in ZenML [here](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) and
-[here](../../how-to/advanced-topics/training-with-gpus/README.md)
+[here](../../how-to/pipeline-development/training-with-gpus/README.md)
 {% endhint %}
 
 Now let's run the pipeline again:
@@ -159,6 +159,6 @@ python run.py --training-pipeline
 
 Now you should notice the machine that gets provisioned on your cloud provider would have a different configuration as compared to last time. As easy as that!
 
-Bear in mind that not every orchestrator supports `ResourceSettings` directly. To learn more, you can read about [`ResourceSettings` here](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md), including the ability to [attach a GPU](../../how-to/advanced-topics/training-with-gpus/README.md#1-specify-a-cuda-enabled-parent-image-in-your-dockersettings).
+Bear in mind that not every orchestrator supports `ResourceSettings` directly. To learn more, you can read about [`ResourceSettings` here](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md), including the ability to [attach a GPU](../../how-to/pipeline-development/training-with-gpus/README.md#1-specify-a-cuda-enabled-parent-image-in-your-dockersettings).
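Editor's note: the relocated `training-with-gpus` guide that the updated link points to covers attaching a GPU via `DockerSettings` and `ResourceSettings`. As a hedged illustration of the kind of configuration that guide describes (not part of this diff — the parent image tag, requirements, and resource values below are placeholder assumptions, and the exact schema should be checked against the guide itself):

```yaml
# Illustrative pipeline config fragment — all values are placeholders.
settings:
  docker:
    # A CUDA-enabled parent image so the GPU toolkit is available in the container.
    parent_image: "pytorch/pytorch:2.2.2-cuda12.1-cudnn8-runtime"
    requirements: ["zenml", "torch"]
  resources:
    # Not every orchestrator honors ResourceSettings directly — see the linked page.
    gpu_count: 1
    memory: "16GB"
```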
diff --git a/docs/book/user-guide/production-guide/remote-storage.md b/docs/book/user-guide/production-guide/remote-storage.md
index a3667e3732c..27b2461b83b 100644
--- a/docs/book/user-guide/production-guide/remote-storage.md
+++ b/docs/book/user-guide/production-guide/remote-storage.md
@@ -120,7 +120,7 @@ While you can go ahead and [run your pipeline on your stack](remote-storage.md#r
 First, let's understand what a service connector does. In simple words, a
 service connector contains credentials that grant stack components access to
 cloud infrastructure. These credentials are stored in the form of a
-[secret](../../how-to/interact-with-secrets.md),
+[secret](../../how-to/project-setup-and-management/interact-with-secrets.md),
 and are available to the ZenML server to use. Using these credentials, the
 service connector brokers a short-lived token and grants temporary permissions
 to the stack component to access that infrastructure. This diagram represents
 
diff --git a/docs/book/user-guide/starter-guide/manage-artifacts.md b/docs/book/user-guide/starter-guide/manage-artifacts.md
index d51939798b1..e6464d41f0c 100644
--- a/docs/book/user-guide/starter-guide/manage-artifacts.md
+++ b/docs/book/user-guide/starter-guide/manage-artifacts.md
@@ -370,7 +370,7 @@ The artifact produced from the preexisting data will have a `pathlib.Path` type,
 
 Even if an artifact is created and stored externally, it can be treated like any other artifact produced by ZenML steps - with all the functionalities described above!
 
-For more details and use-cases check-out detailed docs page [Register Existing Data as a ZenML Artifact](../../how-to/data-artifact-management/handle-data-artifacts/registering-existing-data.md).
+For more details and use-cases check-out detailed docs page [Register Existing Data as a ZenML Artifact](../../how-to/data-artifact-management/complex-usecases/registering-existing-data.md).
 
 ## Logging metadata for an artifact
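Editor's note: the service-connector passage in the `remote-storage.md` hunk above can be made concrete with a short CLI sketch (not part of this diff — the connector name, store name, region, and credential placeholders are assumptions; verify the flags against the service-connector guide for your installed ZenML version):

```shell
# Hedged sketch: register a connector whose credentials the ZenML server
# stores as a secret and uses to broker short-lived tokens.
zenml service-connector register aws_connector \
    --type aws --auth-method secret-key \
    --aws_access_key_id=<KEY_ID> --aws_secret_access_key=<SECRET> \
    --region=us-east-1

# Let a stack component (here, a hypothetical S3 artifact store) use it:
zenml artifact-store connect my_s3_store --connector aws_connector
```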