fix after automatic replacement
sh-rp committed Sep 17, 2024
1 parent 324a390 commit be45ecd
Showing 4 changed files with 6 additions and 6 deletions.
@@ -163,9 +163,9 @@ The examples below show how you can set arguments in any of the `.toml` files (`
database = sql_database()
```

-You'll be able to configure all the arguments this way (except adapter callback function). [Standard dlt rules apply](/general-usage/credentials/configuration#configure-dlt-sources-and-resources).
-It is also possible to set these arguments as environment variables [using the proper naming convention](/general-usage/credentials/config_providers#toml-vs-environment-variables):
+You'll be able to configure all the arguments this way (except adapter callback function). [Standard dlt rules apply](/general-usage/credentials/setup).
+
+It is also possible to set these arguments as environment variables [using the proper naming convention](/general-usage/credentials/setup#naming-convention):
```sh
SOURCES__SQL_DATABASE__CREDENTIALS="mssql+pyodbc://loader.database.windows.net/dlt_data?trusted_connection=yes&driver=ODBC+Driver+17+for+SQL+Server"
SOURCES__SQL_DATABASE__BACKEND=pandas
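The hunk above shows dlt's environment-variable naming convention for source arguments. As a stdlib-only sketch of the same idea, these variables can also be set from Python before the pipeline runs; the connection string is illustrative, not a working server:

```python
import os

# dlt reads SOURCES__SQL_DATABASE__* variables from the environment when the
# source is created. The credential value below is illustrative only.
os.environ["SOURCES__SQL_DATABASE__BACKEND"] = "pandas"
os.environ["SOURCES__SQL_DATABASE__CREDENTIALS"] = (
    "mssql+pyodbc://loader.database.windows.net/dlt_data"
    "?trusted_connection=yes&driver=ODBC+Driver+17+for+SQL+Server"
)
```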
2 changes: 1 addition & 1 deletion docs/website/docs/general-usage/schema-evolution.md
@@ -106,7 +106,7 @@ By separating the technical process of loading data from curation, you free the

**Tracking column lineage**

-The column lineage can be tracked by loading the 'load_info' to the destination. The 'load_info' contains information about columns ‘data types’, ‘add times’, and ‘load id’. To read more please see [the data lineage article](/blog/dlt-data-lineage) we have on the blog.
+The column lineage can be tracked by loading the 'load_info' to the destination. The 'load_info' contains information about columns ‘data types’, ‘add times’, and ‘load id’. To read more please see [the data lineage article](https://dlthub.com/blog/dlt-data-lineage) we have on the blog.
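To illustrate what loading the 'load_info' buys you, here is a stdlib-only sketch that flattens column data types, add times, and the load id into rows suitable for a lineage table. The dict structure is a simplified stand-in, not dlt's exact `LoadInfo` schema:

```python
# Hypothetical, simplified load_info payload; real dlt LoadInfo objects carry more detail.
load_info = {
    "loads_ids": ["1726567890.123"],
    "tables": {
        "users": {
            "columns": {
                "id": {"data_type": "bigint", "add_time": "2024-09-17"},
                "email": {"data_type": "text", "add_time": "2024-09-17"},
            }
        }
    },
}

def column_lineage(info):
    """Flatten (table, column, data_type, add_time) rows for loading to the destination."""
    return [
        (table, col, meta["data_type"], meta["add_time"])
        for table, spec in info["tables"].items()
        for col, meta in spec["columns"].items()
    ]

rows = column_lineage(load_info)
```

Loading `rows` alongside the load id gives you a queryable record of when each column appeared and with what type.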

**Getting notifications**

2 changes: 1 addition & 1 deletion docs/website/docs/reference/telemetry.md
@@ -135,7 +135,7 @@ The message context contains the following information:

## Send telemetry data to your own tracker
You can setup your own tracker to receive telemetry events. You can create scalable, globally distributed
-edge service [using `dlt` and Cloudflare](/blog/dlt-segment-migration).
+edge service [using `dlt` and Cloudflare](https://dlthub.com/blog/dlt-segment-migration).

Once your tracker is running, point `dlt` to it. You can use global `config.toml` to redirect all pipelines on
a given machine.
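As a sketch of the redirect the passage describes, assuming the `runtime.dlthub_telemetry_endpoint` key documented for dlt telemetry (the URL below is illustrative, not a real tracker):

```toml
# ~/.dlt/config.toml — redirect anonymous telemetry for all pipelines on this machine
[runtime]
dlthub_telemetry_endpoint = "https://telemetry-tracker.example.workers.dev"
```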
2 changes: 1 addition & 1 deletion docs/website/docs/walkthroughs/zendesk-weaviate.md
@@ -14,7 +14,7 @@ For our example we will use "subject" and "description" fields from a ticket as

## Prerequisites

-We're going to use some ready-made components from the [dlt ecosystem](/dlt-ecosystem) to make this process easier:
+We're going to use some ready-made components from the [sources](/dlt-ecosystem/verified-sources) and [destinations](/dlt-ecosystem/destinations) to make this process easier:

1. A [Zendesk verified source](../dlt-ecosystem/verified-sources/zendesk.md) to extract the tickets from the API.
2. A [Weaviate destination](../dlt-ecosystem/destinations/weaviate.md) to load the data into a Weaviate instance.
