Background
By default `dlt` will move terminally failed jobs into the `failed_jobs` folder and will not raise any exceptions. Users can set the `load.raise_on_failed_jobs=true` config option to abort the load package and raise an exception afterwards; this PR makes that behavior the default.
background: https://dlthub.com/docs/running-in-production/running#handle-exceptions-failed-jobs-and-retry-the-pipeline
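For reference, the option can be set through any of dlt's config providers. A minimal sketch using the environment-variable provider (the `duckdb` destination, pipeline name, and sample data are just placeholders):

```python
import os

import dlt

# equivalent to [load] raise_on_failed_jobs=true in .dlt/config.toml
os.environ["LOAD__RAISE_ON_FAILED_JOBS"] = "true"

pipeline = dlt.pipeline(
    pipeline_name="failing_pipeline",
    destination="duckdb",  # placeholder destination
    dataset_name="example_data",
)

try:
    pipeline.run([{"id": 1}], table_name="items")
except Exception as ex:
    # with the flag enabled, a terminally failed job aborts the load package
    # and surfaces here instead of being silently parked in failed_jobs
    print(f"load aborted: {ex}")
```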
@rudolfix
The only downside of this change might be that the default strategy for `write_disposition = "replace"` would truncate the table and not add any data because the load package failed.
Could we change the default strategy to `insert-from-staging` to minimize the chances of users deleting their data on error?
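For context, the replace strategy is already configurable; the docs describe a `destination.replace_strategy` setting with `truncate-and-insert` (current default), `insert-from-staging`, and `staging-optimized`. A hedged sketch of opting into the staging-based strategy today (treat the exact key name as an assumption taken from the docs):

```python
import os

import dlt

# assumed config key, equivalent to
# [destination] replace_strategy = "insert-from-staging" in config.toml
os.environ["DESTINATION__REPLACE_STRATEGY"] = "insert-from-staging"

@dlt.resource(write_disposition="replace")
def items():
    yield [{"id": 1}, {"id": 2}]

pipeline = dlt.pipeline(
    pipeline_name="replace_demo",
    destination="duckdb",  # placeholder destination
    dataset_name="demo",
)

# with insert-from-staging the data lands in staging tables first and is only
# moved into the final table at the end of the load, so an aborted package is
# far less likely to leave the destination table truncated and empty
pipeline.run(items())
```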
Requirements
PR 1:
1. Set the default of `raise_on_failed_jobs` to true.
2. Fix the tests, e.g. `test_dummy_client` (I hope we test this config flag somewhere in it); a rough test sketch follows below.
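A rough idea of what the assertion in or next to `test_dummy_client` could look like once the default flips. `fail_prob` on the dummy destination and `PipelineStepFailed` are taken from the current code base, so treat the details as assumptions:

```python
import os

import dlt
import pytest
from dlt.pipeline.exceptions import PipelineStepFailed

def test_failed_job_aborts_package_by_default() -> None:
    # force every job on the dummy destination to fail terminally
    # (fail_prob is assumed to be an existing dummy-destination config field, set here via env)
    os.environ["DESTINATION__DUMMY__FAIL_PROB"] = "1.0"

    pipeline = dlt.pipeline(pipeline_name="dummy_abort", destination="dummy")

    # with raise_on_failed_jobs defaulting to true, run() should raise instead
    # of quietly moving the job into the failed_jobs folder
    with pytest.raises(PipelineStepFailed):
        pipeline.run([{"id": 1}], table_name="items")
```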
PR 2:
4. Add a new cli/pipeline method to retry an aborted package (moves failed jobs back to new). Extend this command: https://dlthub.com/docs/reference/command-line-interface#get-the-load-package-information and add a custom parser with "retry-aborted", which:
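Purely as a sketch of the proposed CLI surface; none of this exists yet, the real wiring would go through dlt's own command modules, and the argument names below are assumptions:

```python
import argparse

# standalone sketch of the proposed "retry-aborted" subcommand
parser = argparse.ArgumentParser(prog="dlt pipeline")
parser.add_argument("pipeline_name", help="name of the pipeline to operate on")
subparsers = parser.add_subparsers(dest="operation", required=True)

retry = subparsers.add_parser(
    "retry-aborted",
    help="move terminally failed jobs of an aborted load package back to 'new' and load again",
)
retry.add_argument(
    "load_id",
    nargs="?",
    help="load package to retry (could default to the most recent aborted package)",
)

args = parser.parse_args(["my_pipeline", "retry-aborted"])
print(args.pipeline_name, args.operation, args.load_id)
```

The implementation behind it would then use the load package storage to move the failed job files back to the "new" state before re-running the load step.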