---
title: Load data from Filesystem or Cloud Storage
description: How to extract and load data from a filesystem or cloud storage using dlt
keywords: [tutorial, filesystem, cloud storage, dlt, python, data pipeline, incremental loading]
---

## What you will learn

- How to set up a filesystem or cloud storage source
- Configuration basics for filesystems and cloud storage
- Loading methods
- Incremental loading of data from filesystems or cloud storage

## Prerequisites

- Python 3.9 or higher
- Virtual environment set up

## Installing dlt
## Setting up a new project
## Creating a new pipeline
## Configuring filesystem source as the data source
## Running the pipeline
## Append, replace, and merge loading methods
## Incremental loading
## Wrapping up
---
title: Load data from a REST API
description: How to extract data from a REST API using dlt's generic REST API source
keywords: [tutorial, api, github, duckdb, rest api, source, pagination, authentication]
---

This tutorial shows how to extract data from a REST API using dlt's generic REST API source. It guides you through the basics of setting up and configuring the source to load data from the API into a destination.

As a practical example, we'll build a data pipeline that loads data from the Pokemon API into DuckDB.

## What you will learn

- How to set up a REST API source
- Configuration basics for API endpoints
- Handling pagination, authentication, and relationships between different resources
- Loading methods
- Incremental loading of data from REST APIs

## Prerequisites

- Python 3.9 or higher
- Virtual environment set up

## Installing dlt

Before we start, make sure you have a Python virtual environment set up. Follow the instructions in the [installation guide](https://dlthub.com/docs/reference/installation) to create a new virtual environment and install dlt.
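
In most setups, that boils down to a single command:

```sh
pip install dlt
```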

Verify that dlt is installed by running:

```sh
dlt --version
```

If you see a version number (such as "dlt 0.5.3"), you're ready to proceed.

## Setting up a new project

Initialize a new dlt project with DuckDB as the destination database:

```sh
dlt init rest_api duckdb
```

`dlt init` creates multiple files and a directory for your project. Let's take a look at the project structure:

```sh
rest_api_pipeline.py
requirements.txt
.dlt/
    config.toml
    secrets.toml
```

Here's what each file and directory contains:

- `rest_api_pipeline.py`: This is the main script where you'll define your data pipeline. It contains two basic pipeline examples for the Pokemon and GitHub APIs. You can modify or rename this file as needed.
- `requirements.txt`: This file lists all the Python dependencies required for your project.
- `.dlt/`: This directory contains the [configuration files](../general-usage/credentials/) for your project:
  - `secrets.toml`: This file stores your API keys, tokens, and other sensitive information (see the sketch after this list).
  - `config.toml`: This file contains the configuration settings for your dlt project.
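
The Pokemon API used in this tutorial requires no credentials, but for APIs that do, here is a minimal sketch of how values stored in `.dlt/secrets.toml` surface in code. The key path is hypothetical, chosen for illustration:

```python
import dlt

# Hypothetical example: assuming secrets.toml contains
# [sources.rest_api]
# api_token = "..."
# the value is available through dictionary-style access:
api_token = dlt.secrets["sources.rest_api.api_token"]
```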

## Installing dependencies

Before we proceed, let's install the required dependencies for this tutorial. Run the following command to install the dependencies listed in the `requirements.txt` file:

```sh
pip install -r requirements.txt
```

## Running the pipeline

Let's verify that the pipeline is working as expected. Run the following command to execute the pipeline:

```sh
python rest_api_pipeline.py
```

You should see the output of the pipeline execution in the terminal. The output also displays the location of the DuckDB database file where the data is stored:

```sh
Pipeline rest_api_pokemon load step completed in 1.08 seconds
1 load package(s) were loaded to destination duckdb and into dataset rest_api_data
The duckdb destination used duckdb:////home/user-name/quick_start/rest_api_pokemon.duckdb location to store data
Load package 1692364844.9254808 is LOADED and contains no failed jobs
```

## Exploring the data

Now that the pipeline has run successfully, let's explore the data loaded into DuckDB. dlt comes with a built-in data browser application that allows you to interact with the data. To enable it, install Streamlit:

```sh
pip install streamlit
```

Next, run the following command to start the data browser:

```sh
dlt pipeline rest_api_pokemon show
```

The command opens a new browser window with the data browser application. You can explore the loaded data, run queries, and see some pipeline execution details:

![Streamlit Explore data](/img/streamlit-new.png)
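
If you prefer querying the database directly, here is a minimal sketch using the DuckDB Python client, assuming you run it from the directory containing the `rest_api_pokemon.duckdb` file shown in the output above:

```python
import duckdb

# Connect to the database file created by the pipeline.
conn = duckdb.connect("rest_api_pokemon.duckdb")

# Tables live in a schema named after the dataset, here "rest_api_data".
print(conn.sql("SELECT name FROM rest_api_data.pokemon LIMIT 5"))
```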

## Configuring the REST API source

Now that your environment and the project are set up, let's take a closer look at the configuration of the REST API source. Open the `rest_api_pipeline.py` file in your code editor and locate the following code snippet:

```python
pipeline = dlt.pipeline(
    pipeline_name="rest_api_pokemon",
    destination="duckdb",
    dataset_name="rest_api_data",
)

pokemon_source = rest_api_source(
    {
        "client": {
            "base_url": "https://pokeapi.co/api/v2/",
        },
        "resource_defaults": {
            "endpoint": {
                "params": {
                    "limit": 1000,
                },
            },
        },
        "resources": [
            "pokemon",
            "berry",
            "location",
        ],
    }
)

load_info = pipeline.run(pokemon_source)
print(load_info)
```

The `rest_api_source` function creates a new REST API source object. It takes a configuration object with the following structure:

```py
config: RESTAPIConfig = {
    "client": {
        ...
    },
    "resource_defaults": {
        ...
    },
    "resources": [
        ...
    ],
}
```

- The `client` configuration is used to connect to the API's endpoints. Here we specify the base URL of the Pokemon API (`https://pokeapi.co/api/v2/`).
- The `resource_defaults` configuration lets you set default parameters for all resources. Normally you would set common parameters here, such as pagination limits. In this example, we set the `limit` parameter to 1000 for all resources to retrieve more data in a single request and reduce the number of API calls.
- The `resources` list contains the names of the resources you want to load from the API. The REST API source uses naming conventions to determine the endpoint URL from the resource name: for example, the resource name `pokemon` is translated to the endpoint URL `https://pokeapi.co/api/v2/pokemon` (see the sketch after this list).
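
A plain string entry in `resources` is shorthand; it expands to roughly this explicit resource definition (a sketch, following the source's convention of deriving the endpoint path from the resource name):

```py
{
    "name": "pokemon",
    "endpoint": {
        "path": "pokemon",
    },
}
```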

## Append, replace, and merge loading methods

Try running the pipeline again with `python rest_api_pipeline.py`. You will notice that all the tables now contain duplicated data. This is because the default loading method is `append`. Appending is very useful when, for example, you have daily data updates and want to ingest them as they arrive. In this case, however, we want to replace the data in the destination tables with the new data.

To do that, you can change the loading method in the pipeline configuration. Open `rest_api_pipeline.py` and change the pipeline configuration to use the `replace` write disposition:

```python
pipeline = dlt.pipeline(
    pipeline_name="rest_api_pokemon",
    destination="duckdb",
    dataset_name="rest_api_data",
)

pokemon_source = rest_api_source(
    {
        "client": {
            "base_url": "https://pokeapi.co/api/v2/",
        },
        "resource_defaults": {
            "endpoint": {
                "params": {
                    "limit": 1000,
                },
            },
            "write_disposition": "replace",  # Change the write disposition to replace
        },
        "resources": [
            "pokemon",
            "berry",
            "location",
        ],
    }
)

load_info = pipeline.run(pokemon_source)
print(load_info)
```
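
The third loading method, `merge`, deduplicates and updates rows based on a primary key instead of appending or fully replacing them. A minimal sketch of the resource defaults switched to merge, assuming the `name` field uniquely identifies records in these endpoints:

```python
# Sketch: resource defaults using a merge write disposition, assuming
# "name" uniquely identifies records in the pokemon/berry/location endpoints.
resource_defaults = {
    "primary_key": "name",
    "write_disposition": "merge",
    "endpoint": {
        "params": {
            "limit": 1000,
        },
    },
}
```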

### Define resource relationships

When you have a resource that depends on another resource, you can define the relationship using the `resolve` configuration. It allows you to link a path parameter in the child resource to a field in the parent resource's data.

For our Pokemon API example, let's consider the `pokemon` resource, which depends on the `location` resource. Suppose we want to retrieve details about Pokémon encounters based on their location ID. The `location_id` parameter in the `pokemon` endpoint configuration is resolved from the `id` field of the `location` resource:

```py
{
    "resources": [
        {
            "name": "location",
            "endpoint": {
                "path": "location",
                # ...
            },
        },
        {
            "name": "pokemon",
            "endpoint": {
                "path": "location/{location_id}/pokemon",
                "params": {
                    "location_id": {
                        "type": "resolve",
                        "resource": "location",
                        "field": "id",
                    }
                },
            },
            "include_from_parent": ["name"],
        },
    ],
}
```

This configuration tells the source to get location IDs from the `location` resource and use them to fetch Pokémon encounter details for each location. The `include_from_parent` setting additionally copies the listed fields (here, the location's `name`) from each parent record into the child resource's data. So if the `location` resource yields the following data:

```json
[
    {"id": 1, "name": "Kanto"},
    {"id": 2, "name": "Johto"},
    {"id": 3, "name": "Hoenn"}
]
```

The `pokemon` resource will make requests to the following endpoints:

- `location/1/pokemon`
- `location/2/pokemon`
- `location/3/pokemon`

The syntax for the `resolve` field in a parameter configuration is:

```py
{
    "<parameter_name>": {
        "type": "resolve",
        "resource": "<parent_resource_name>",
        "field": "<parent_resource_field_name_or_jsonpath>",
    }
}
```

The `field` value can be specified as a [JSONPath](https://github.com/h2non/jsonpath-ng?tab=readme-ov-file#jsonpath-syntax) to select a nested field in the parent resource data. For example: `"field": "items[0].id"`.

Under the hood, dlt handles this by using a [transformer resource](../../general-usage/resource.md#process-resources-with-dlttransformer).
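
To make that concrete, here is a minimal, self-contained sketch of the same parent/child wiring written as hand-coded resources. The data is hypothetical and mirrors the sample above; the real REST API source generates this wiring from the declarative config:

```python
import dlt
import requests

BASE_URL = "https://pokeapi.co/api/v2/"

@dlt.resource
def location():
    # Illustrative parent resource yielding records with an "id" field,
    # like the sample data shown above.
    yield from [{"id": 1, "name": "Kanto"}, {"id": 2, "name": "Johto"}]

@dlt.transformer(data_from=location)
def pokemon(location_record):
    # Each parent record arrives here, and the path parameter is filled in
    # from it. This mirrors what the "resolve" configuration does.
    url = f"{BASE_URL}location/{location_record['id']}/pokemon"
    yield requests.get(url).json()
```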

## Incremental loading
## Wrapping up
---
title: Load data from a SQL database
description: How to extract and load data from a SQL database using dlt
keywords: [tutorial, sql database, duckdb, dlt, python, data pipeline, incremental loading]
---

## What you will learn

- How to set up a SQL database source
- Configuration basics for SQL databases
- Loading methods
- Incremental loading of data from SQL databases

## Prerequisites

- Python 3.9 or higher
- Virtual environment set up

## Installing dlt
## Setting up a new project
## Creating a new pipeline
## Running the pipeline
## Append, replace, and merge loading methods
## Incremental loading
## Wrapping up