From eb896d2a41ae3eb6521d7f6662b25c077036ff40 Mon Sep 17 00:00:00 2001 From: Colton Padden Date: Fri, 20 Dec 2024 15:24:37 -0500 Subject: [PATCH] todo links --- .../branch-deployments/change-tracking.md | 12 ++++-- .../branch-deployments/dagster-cloud-cli.md | 3 +- .../ci-cd/branch-deployments/testing.md | 39 ++++++++++++------- .../docs/integrations/libraries/duckdb.md | 5 +-- 4 files changed, 38 insertions(+), 21 deletions(-) diff --git a/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/change-tracking.md b/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/change-tracking.md index de7b90aab85f8..03a4feae766e8 100644 --- a/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/change-tracking.md +++ b/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/change-tracking.md @@ -16,7 +16,8 @@ Branch Deployments compare asset definitions in the branch deployment against th You can also apply filters to show only new and changed assets in the UI. This makes it easy to understand which assets will be impacted by the changes in the pull request associated with the branch deployment. -**Note:** The default main deployment is `prod`. To configure a different deployment as the main deployment, [create a branch deployment using the dagster-cloud CLI](/dagster-plus/managing-deployments/branch-deployments/using-branch-deployments) and specify it using the optional `--base-deployment-name` parameter. +{/* **Note:** The default main deployment is `prod`. To configure a different deployment as the main deployment, [create a branch deployment using the dagster-cloud CLI](/dagster-plus/managing-deployments/branch-deployments/using-branch-deployments) and specify it using the optional `--base-deployment-name` parameter. */} +**Note:** The default main deployment is `prod`. To configure a different deployment as the main deployment, [create a branch deployment using the dagster-cloud CLI](/todo) and specify it using the optional `--base-deployment-name` parameter. ## Supported change types @@ -39,7 +40,8 @@ If an asset is new in the branch deployment, the asset will have a **New in bran If using the `code_version` argument on the asset decorator, Change Tracking can detect when this value changes. -Some Dagster integrations, like `dagster-dbt`, automatically compute code versions for you. For more information on code versions, refer to the [Code versioning guide](/guides/dagster/asset-versioning-and-caching). +{/* Some Dagster integrations, like `dagster-dbt`, automatically compute code versions for you. For more information on code versions, refer to the [Code versioning guide](/guides/dagster/asset-versioning-and-caching). */} +Some Dagster integrations, like `dagster-dbt`, automatically compute code versions for you. For more information on code versions, refer to the [Code versioning guide](/todo). @@ -128,7 +130,8 @@ def weekly_orders(): ... ### Tags -Change Tracking can detect when an [asset's tags](/concepts/metadata-tags/tags) have changed, whether they've been added, modified, or removed. +{/* Change Tracking can detect when an [asset's tags](/concepts/metadata-tags/tags) have changed, whether they've been added, modified, or removed. */} +Change Tracking can detect when an [asset's tags](/todo) have changed, whether they've been added, modified, or removed. @@ -161,7 +164,8 @@ def fruits_in_stock(): ... 
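As a minimal sketch, here is an asset that sets the attributes Change Tracking compares between a branch deployment and the main deployment; the asset name and attribute values are hypothetical:

```python
from dagster import asset


@asset(
    code_version="v2",  # bumping this value surfaces as a code version change
    tags={"section": "produce"},  # added, edited, or removed tags surface as tag changes
    metadata={"owner": "data-eng"},  # definition metadata changes are tracked the same way
)
def fruits_in_stock():
    ...
```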
### Metadata -Change Tracking can detect when an [asset's definition metadata](/concepts/metadata-tags/asset-metadata#attaching-definition-metadata) has changed, whether it's been added, modified, or removed. +{/* Change Tracking can detect when an [asset's definition metadata](/concepts/metadata-tags/asset-metadata#attaching-definition-metadata) has changed, whether it's been added, modified, or removed. */} +Change Tracking can detect when an [asset's definition metadata](/todo) has changed, whether it's been added, modified, or removed. diff --git a/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/dagster-cloud-cli.md b/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/dagster-cloud-cli.md index 77c73b92df8c4..855cabd09fb49 100644 --- a/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/dagster-cloud-cli.md +++ b/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/dagster-cloud-cli.md @@ -74,7 +74,8 @@ When prompted, you can specify a default deployment. If specified, a deployment
TOKEN AUTHENTICATION

-Alternatively, you may authenticate using a user token. Refer to the [Managing user and agent tokens guide](/dagster-plus/account/managing-user-agent-tokens) for more info.
+{/* Alternatively, you may authenticate using a user token. Refer to the [Managing user and agent tokens guide](/dagster-plus/account/managing-user-agent-tokens) for more info. */}
+Alternatively, you may authenticate using a user token. Refer to the [Managing user and agent tokens guide](/todo) for more info.

```shell
$ dagster-cloud config setup
diff --git a/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/testing.md b/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/testing.md
index e17899a46a1f7..1995e56d0abee 100644
--- a/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/testing.md
+++ b/docs/docs-beta/docs/dagster-plus/features/ci-cd/branch-deployments/testing.md
@@ -17,12 +17,18 @@ With these tools, we can merge changes with confidence in the impact on our data

Here’s an overview of the main concepts we’ll be using:

-- [Assets](/concepts/assets/software-defined-assets) - We'll define three assets that each persist a table to Snowflake.
-- [Ops](/concepts/ops-jobs-graphs/ops) - We'll define two ops that query Snowflake: the first will clone a database, and the second will drop database clones.
-- [Graphs](/concepts/ops-jobs-graphs/graphs) - We'll build graphs that define the order our ops should run.
-- [Jobs](/concepts/assets/asset-jobs) - We'll define jobs by binding our graphs to resources.
-- [Resources](/concepts/resources) - We'll use the to swap in different Snowflake connections to our jobs depending on environment.
-- [I/O managers](/concepts/io-management/io-managers) - We'll use a Snowflake I/O manager to persist asset outputs to Snowflake.
+{/* - [Assets](/concepts/assets/software-defined-assets) - We'll define three assets that each persist a table to Snowflake. */}
+- [Assets](/todo) - We'll define three assets that each persist a table to Snowflake.
+{/* - [Ops](/concepts/ops-jobs-graphs/ops) - We'll define two ops that query Snowflake: the first will clone a database, and the second will drop database clones. */}
+- [Ops](/todo) - We'll define two ops that query Snowflake: the first will clone a database, and the second will drop database clones.
+{/* - [Graphs](/concepts/ops-jobs-graphs/graphs) - We'll build graphs that define the order our ops should run. */}
+- [Graphs](/todo) - We'll build graphs that define the order our ops should run.
+{/* - [Jobs](/concepts/assets/asset-jobs) - We'll define jobs by binding our graphs to resources. */}
+- [Jobs](/todo) - We'll define jobs by binding our graphs to resources.
+{/* - [Resources](/concepts/resources) - We'll use the to swap in different Snowflake connections to our jobs depending on environment. */}
+- [Resources](/todo) - We'll use resources to swap in different Snowflake connections for our jobs depending on the environment.
+{/* - [I/O managers](/concepts/io-management/io-managers) - We'll use a Snowflake I/O manager to persist asset outputs to Snowflake. */}
+- [I/O managers](/todo) - We'll use a Snowflake I/O manager to persist asset outputs to Snowflake.
--- @@ -40,10 +46,12 @@ Here’s an overview of the main concepts we’ll be using: To complete the steps in this guide, you'll need: - A Dagster+ account -- An existing Branch Deployments setup that uses [GitHub actions](/dagster-plus/managing-deployments/branch-deployments/using-branch-deployments-with-github) or [Gitlab CI/CD](/dagster-plus/managing-deployments/branch-deployments/using-branch-deployments-with-gitlab). Your setup should contain a Dagster project set up for branch deployments containing: +{/* - An existing Branch Deployments setup that uses [GitHub actions](/dagster-plus/managing-deployments/branch-deployments/using-branch-deployments-with-github) or [Gitlab CI/CD](/dagster-plus/managing-deployments/branch-deployments/using-branch-deployments-with-gitlab). Your setup should contain a Dagster project set up for branch deployments containing: */} +- An existing Branch Deployments setup that uses [GitHub actions](/todo) or [Gitlab CI/CD](/todo). Your setup should contain a Dagster project set up for branch deployments containing: - Either a GitHub actions workflow file (e.g. `.github/workflows/branch-deployments.yaml`) or a Gitlab CI/CD file (e.g. `.gitlab-ci.yml`) - Dockerfile that installs your Dagster project -- User permissions in Dagster+ that allow you to [access Branch Deployments](/dagster-plus/account/managing-users/managing-user-roles-permissions) +{/* - User permissions in Dagster+ that allow you to [access Branch Deployments](/dagster-plus/account/managing-users/managing-user-roles-permissions) */} +- User permissions in Dagster+ that allow you to [access Branch Deployments](/todo) --- @@ -57,7 +65,8 @@ We have a `PRODUCTION` Snowflake database with a schema named `HACKER_NEWS`. In To set up a branch deployment workflow to construct and test these tables, we will: -1. Define these tables as [assets](/concepts/assets/software-defined-assets). +{/* 1. Define these tables as [assets](/concepts/assets/software-defined-assets). */} +1. Define these tables as [assets](/todo). 2. Configure our assets to write to Snowflake using a different connection (credentials and database name) for two environments: production and branch deployment. 3. Write a job that will clone the production database upon each branch deployment launch. Each clone will be named `PRODUCTION_CLONE_`, where `` is the pull request ID of the branch. Then we'll create a branch deployment and test our Hacker News assets against our newly cloned database. 4. Write a job that will delete the corresponding database clone upon closing the feature branch. @@ -66,7 +75,8 @@ To set up a branch deployment workflow to construct and test these tables, we wi ## Step 1: Create our assets -In production, we want to write three tables to Snowflake: `ITEMS`, `COMMENTS`, and `STORIES`. We can define these tables as [assets](/concepts/assets/software-defined-assets) as follows: +{/* In production, we want to write three tables to Snowflake: `ITEMS`, `COMMENTS`, and `STORIES`. We can define these tables as [assets](/concepts/assets/software-defined-assets) as follows: */} +In production, we want to write three tables to Snowflake: `ITEMS`, `COMMENTS`, and `STORIES`. 
We can define these tables as [assets](/todo) as follows:

```python file=/guides/dagster/development_to_production/assets.py startafter=start_assets endbefore=end_assets
# assets.py
@@ -116,7 +126,8 @@ def stories(items: pd.DataFrame) -> pd.DataFrame:
    return items[items["type"] == "story"]
```

-As you can see, our assets use an [I/O manager](/concepts/io-management/io-managers) named `snowflake_io_manager`. Using I/O managers and other resources allow us to swap out implementations per environment without modifying our business logic.
+{/* As you can see, our assets use an [I/O manager](/concepts/io-management/io-managers) named `snowflake_io_manager`. Using I/O managers and other resources allow us to swap out implementations per environment without modifying our business logic. */}
+As you can see, our assets use an [I/O manager](/todo) named `snowflake_io_manager`. Using I/O managers and other resources allows us to swap out implementations per environment without modifying our business logic.

---

@@ -126,7 +137,8 @@ At runtime, we’d like to determine which environment our code is running in: b
To ensure we can't accidentally write to production from within our branch deployment, we’ll use a different set of credentials from production and write to our database clone.

-Dagster automatically sets certain [environment variables](/dagster-plus/managing-deployments/reserved-environment-variables) containing deployment metadata, allowing us to read these environment variables to discern between deployments. We can access the `DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT` environment variable to determine the currently executing environment.
+{/* Dagster automatically sets certain [environment variables](/dagster-plus/managing-deployments/reserved-environment-variables) containing deployment metadata, allowing us to read these environment variables to discern between deployments. We can access the `DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT` environment variable to determine the currently executing environment. */}
+Dagster automatically sets certain [environment variables](/todo) containing deployment metadata, allowing us to read these environment variables to discern between deployments. We can access the `DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT` environment variable to determine the currently executing environment.

Because we want to configure our assets to write to Snowflake using a different set of credentials and database in each environment, we’ll configure a separate I/O manager for each environment:

@@ -170,7 +182,8 @@ defs = Definitions(
)
```

-Refer to the [Dagster+ environment variables documentation](/dagster-plus/managing-deployments/environment-variables-and-secrets) for more info about available environment variables.
+{/* Refer to the [Dagster+ environment variables documentation](/dagster-plus/managing-deployments/environment-variables-and-secrets) for more info about available environment variables. */}
+Refer to the [Dagster+ environment variables documentation](/todo) for more info about available environment variables.
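To make the environment switch concrete, here is a minimal sketch of reading `DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT` to point the Snowflake I/O manager at the per-branch clone; the assets import path, the `DAGSTER_CLOUD_PULL_REQUEST_ID` variable, and the `SNOWFLAKE_*` secret names are assumptions made for the example:

```python
import os

from dagster import Definitions, EnvVar
from dagster_snowflake_pandas import SnowflakePandasIOManager

# Assumed module path for the items, comments, and stories assets from Step 1.
from development_to_production.assets import comments, items, stories

# Dagster+ sets this variable to "1" in branch deployments.
is_branch_deployment = os.getenv("DAGSTER_CLOUD_IS_BRANCH_DEPLOYMENT") == "1"

# Branch deployments write to the per-PR clone; production writes to PRODUCTION.
# DAGSTER_CLOUD_PULL_REQUEST_ID is assumed to be provided by the CI setup.
database = (
    f"PRODUCTION_CLONE_{os.getenv('DAGSTER_CLOUD_PULL_REQUEST_ID', '')}"
    if is_branch_deployment
    else "PRODUCTION"
)

defs = Definitions(
    assets=[items, comments, stories],
    resources={
        "snowflake_io_manager": SnowflakePandasIOManager(
            account=EnvVar("SNOWFLAKE_ACCOUNT"),
            user=EnvVar("SNOWFLAKE_USER"),
            password=EnvVar("SNOWFLAKE_PASSWORD"),
            database=database,
        )
    },
)
```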
--- diff --git a/docs/docs-beta/docs/integrations/libraries/duckdb.md b/docs/docs-beta/docs/integrations/libraries/duckdb.md index e8097b5040ed8..a5ea4fd28d53f 100644 --- a/docs/docs-beta/docs/integrations/libraries/duckdb.md +++ b/docs/docs-beta/docs/integrations/libraries/duckdb.md @@ -17,9 +17,8 @@ enables: tags: [dagster-supported, storage] --- - - -This library provides an integration with the DuckDB database, and allows for an out-of-the-box [I/O Manager](https://docs.dagster.io/concepts/io-management/io-managers) so that you can make DuckDB your storage of choice. +{/* This library provides an integration with the DuckDB database, and allows for an out-of-the-box [I/O Manager](/concepts/io-management/io-managers) so that you can make DuckDB your storage of choice. */} +This library provides an integration with the DuckDB database, and allows for an out-of-the-box [I/O Manager](/todo) so that you can make DuckDB your storage of choice. ### Installation
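Once the library is installed (`pip install dagster-duckdb-pandas`), a minimal sketch of the out-of-the-box I/O manager in use; the asset, its data, and the `analytics.duckdb` filename are hypothetical:

```python
import pandas as pd

from dagster import Definitions, asset
from dagster_duckdb_pandas import DuckDBPandasIOManager


@asset
def iris_dataset() -> pd.DataFrame:
    # The returned DataFrame is persisted as a DuckDB table by the I/O manager.
    return pd.DataFrame(
        {"species": ["setosa", "versicolor"], "petal_length_cm": [1.4, 4.5]}
    )


defs = Definitions(
    assets=[iris_dataset],
    resources={
        # Stores each asset output as a table in the given DuckDB database file.
        "io_manager": DuckDBPandasIOManager(database="analytics.duckdb", schema="main")
    },
)
```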