Commit a23e4ad: Update
dat-a-man committed Feb 3, 2024 (1 parent: 01de058)
Showing 1 changed file with 20 additions and 0 deletions: docs/website/docs/running-in-production/monitoring.md
Normalized data for the following tables:
- players_profiles: 4 row(s)
```

To load this information back to the destination, you can use the following:
```python
import dlt

# Create a pipeline with the specified name, destination, and dataset
# (the names and the duckdb destination here are illustrative)
pipeline = dlt.pipeline(
    pipeline_name="chess_pipeline",
    destination="duckdb",
    dataset_name="chess_data",
)

# Run the pipeline (`data` stands for your source, resource, or data)
pipeline.run(data, table_name="players_profiles")

# Get the trace of the last run of the pipeline
# The trace contains timing information on extract, normalize, and load steps
trace = pipeline.last_trace

# Load the trace information into a table named "_trace" in the destination
pipeline.run([trace], table_name="_trace")
```
This process loads several additional tables to the destination, which provide insights into
the extract, normalize, and load steps. Information on the number of rows loaded for each table,
along with the `load_id`, can be found in the `_trace__steps__extract_info__table_metrics` table.
The `load_id` is an epoch timestamp that indicates when the loading was completed. Here's a graphical
representation of the rows loaded with `load_id` for different tables:
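
Because the `load_id` is an epoch timestamp, it can be converted back into a human-readable completion time with the standard library alone. A minimal sketch (the `load_id` value below is made up for illustration):

```python
from datetime import datetime, timezone

# A `load_id` is an epoch timestamp taken when the load completed;
# converting it back gives the completion time (value is illustrative).
load_id = "1706961103.1928232"
completed_at = datetime.fromtimestamp(float(load_id), tz=timezone.utc)
print(completed_at.isoformat())
```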

![image](https://storage.googleapis.com/dlt-blog-images/docs_monitoring_count_of_rows_vs_load_id.jpg)

### Data load time
Data loading time for each table can be obtained using the following command:

