From b8ca4451442e8960af64b564fb6394d7a26db98b Mon Sep 17 00:00:00 2001
From: Eva Khmelinskaya
Date: Thu, 18 Jul 2024 18:11:14 -0700
Subject: [PATCH] Linting fixes

---
 classify-split-extract-workflow/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/classify-split-extract-workflow/README.md b/classify-split-extract-workflow/README.md
index 39fd84085..7d4731a6b 100644
--- a/classify-split-extract-workflow/README.md
+++ b/classify-split-extract-workflow/README.md
@@ -28,7 +28,7 @@
   * [Configuration](#configuration)
   * [Running the Pipeline](#running-the-pipeline)
   * [Out-of-the box demo](#out-of-the-box-demo)
-  * [Using DATA_SYNCH](#using-datasynch)
+  * [Using live data updates](#using-live-data-updates)
   * [Running the Batch](#running-the-batch)
   * [Next Steps](#next-steps)
   * [References](#references)
@@ -293,8 +293,8 @@ If you followed the steps of LDAI Splitter & Classifier you can try the single d
 * MISC1099
 * INT1099
 
-
-### Using DATA_SYNCH
+
+### Using live data updates
 
 The data synch is disabled/enabled with environment variable `DATA_SYNCH` defined in [vars.sh](vars.sh)
 - When `DATA_SYNCH` is on, each document uploaded to the input bucket (`CLASSIFY_INPUT_BUCKET`), will trigger the pipeline execution.