feat: Process TSV files as streams and validate only the first 1000 rows by default #139
This PR is an optimization. In #138 we found a case with >300k lines in a TSV file. In order to limit the number of lines being inspected, I needed to switch TSV loading to be stream-based instead of slurping the entire file.
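The switch from slurping to streaming can be sketched roughly as below. This is a hypothetical illustration, not the PR's actual code: the function name `limitedRows`, the chunk-based input, and the default of 1000 are assumptions; the real change lives in the project's TSV loader.

```typescript
// Hypothetical sketch: consume text chunks as a stream, split on \r?\n,
// and stop after maxRows lines instead of materializing the whole file.
function* limitedRows(chunks: Iterable<string>, maxRows = 1000): Generator<string> {
  let buffer = ''
  let emitted = 0
  for (const chunk of chunks) {
    buffer += chunk
    const lines = buffer.split(/\r?\n/)
    buffer = lines.pop() ?? '' // keep a possibly incomplete trailing line
    for (const line of lines) {
      if (emitted >= maxRows) return // early exit: later chunks are never read
      yield line
      emitted++
    }
  }
  // flush the final line if the file does not end with a newline
  if (buffer !== '' && emitted < maxRows) yield buffer
}

// [...limitedRows(['a\tb\nc\t', 'd\ne\tf\n'], 2)] → ['a\tb', 'c\td']
```

Because the generator returns as soon as the limit is reached, a 300k-line file costs only as much I/O as the first `maxRows` lines plus one partial chunk.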
This PR does the following:

- Processes TSV files as streams, splitting lines on `\r?\n`, instead of slurping the entire file.
- Adds a `--max-rows` flag to the CLI and a `maxRows` variable to validator options.

Note that this adds a new error condition: empty lines are tolerated only at the end of files (`<content><LF><EOF>`). In passing, this permits us to report the line number of bad TSV lines.

I also do not attempt to add
`maxRows` to the TSV cache key, so calling `loadTSV()` successively on the same file with different `maxRows` values will return the result from the first call. This does not seem like a problem when running the validator, but it might be surprising to future developers. I can look into that, if desired.

Closes #138.
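The new error condition described above could look something like this. It is a sketch under one reading of the rule, assuming exactly one trailing newline (`<content><LF><EOF>`) is tolerated; the function name `badEmptyLines` and 1-based line numbering are my own illustration, not the PR's API.

```typescript
// Hypothetical sketch: an empty line is an error unless it is the single
// empty element produced by a trailing newline at end-of-file.
// Returns the 1-based line numbers of offending empty lines.
function badEmptyLines(content: string): number[] {
  const lines = content.split(/\r?\n/)
  // A trailing <LF><EOF> yields one empty final element; tolerate that one.
  if (lines[lines.length - 1] === '') lines.pop()
  const bad: number[] = []
  lines.forEach((line, i) => {
    if (line === '') bad.push(i + 1) // report line number, as the PR now can
  })
  return bad
}

// badEmptyLines('a\n\nb\n') → [2]; badEmptyLines('a\nb\n') → []
```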
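The caching caveat is easy to trip over, so here is a minimal illustration of the behavior. The loader `loadRows`, its in-memory `cache`, and passing rows in directly (instead of reading a file) are all assumptions made to keep the sketch self-contained; it stands in for the real `loadTSV()`.

```typescript
// Hypothetical illustration: the cache is keyed by path only, so maxRows
// is ignored on a cache hit — mirroring the caveat described in the PR.
const cache = new Map<string, string[]>()

function loadRows(path: string, rows: string[], maxRows: number): string[] {
  const hit = cache.get(path) // key deliberately omits maxRows
  if (hit !== undefined) return hit
  const result = rows.slice(0, maxRows)
  cache.set(path, result)
  return result
}

// const data = ['a', 'b', 'c', 'd']
// loadRows('sub-01.tsv', data, 2).length → 2
// loadRows('sub-01.tsv', data, 4).length → 2  (first call's result wins)
```

Including `maxRows` in the cache key would remove the surprise at the cost of potentially loading the same file more than once.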