feat: blockbuilder component #14621
Conversation
Signed-off-by: Owen Diehl <[email protected]>
…lled Signed-off-by: Owen Diehl <[email protected]>
…der is the target Signed-off-by: Owen Diehl <[email protected]>
"err", err, | ||
) | ||
if err != nil { | ||
return err |
log and continue? as written, this would fail the whole service.
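A minimal sketch of the trade-off this comment raises, with hypothetical names (`processAll`, `failOn2` are illustrations, not the PR's code): returning the first error aborts the entire run, while logging and continuing skips only the failed item.

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// failOn2 simulates a processor where one item (item 2) fails.
func failOn2(i int) error {
	if i == 2 {
		return errors.New("boom")
	}
	return nil
}

// processAll returns on the first error, which fails the whole run
// (the behavior the review comment worries about).
func processAll(items []int, process func(int) error) error {
	for _, it := range items {
		if err := process(it); err != nil {
			return err
		}
	}
	return nil
}

// processAllLenient logs failures and continues, so one bad item does
// not take the service down; it reports how many items succeeded.
func processAllLenient(items []int, process func(int) error) int {
	ok := 0
	for _, it := range items {
		if err := process(it); err != nil {
			log.Printf("item %d failed, continuing: %v", it, err)
			continue
		}
		ok++
	}
	return ok
}

func main() {
	fmt.Println(processAll([]int{1, 2, 3}, failOn2) != nil)  // true: run aborted
	fmt.Println(processAllLenient([]int{1, 2, 3}, failOn2)) // 2: items 1 and 3 succeeded
}
```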
for _, db := range built {
	u := newUploader(i.objStore)
	if err := u.Put(ctx, db); err != nil {
nit: retries to avoid processing the job again
pkg/blockbuilder/partition.go
}
select {
case ch <- converted:
was this intentional, to send all the inputs in one go? this won't make use of the parallel workers from the next stage.
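The point behind this comment can be sketched as follows (hypothetical names; `fanOut` is an illustration, not the PR's code): sending each input as a separate channel value lets multiple downstream workers pick items up concurrently, whereas sending the whole batch as one value would serialize it through a single worker.

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut sends each input individually on a channel so that `workers`
// goroutines can process items concurrently, then collects the results.
func fanOut(inputs []int, workers int, handle func(int) int) []int {
	ch := make(chan int)
	out := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for v := range ch {
				out <- handle(v)
			}
		}()
	}
	go func() {
		for _, v := range inputs {
			ch <- v // one item per send, not the whole slice
		}
		close(ch)
		wg.Wait()
		close(out)
	}()
	var results []int
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	double := func(v int) int { return v * 2 }
	fmt.Println(len(fanOut([]int{1, 2, 3, 4}, 3, double))) // 4
}
```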
…ates blockbuilder
LGTM!
	return periodicConfigs[i].From.Time.Before(periodicConfigs[j].From.Time)
})
for _, periodicConfig := range periodicConfigs {
	objectClient, err := storage.NewObjectClient(periodicConfig.ObjectType, "storage-rf1", storageConfig, clientMetrics)
we should also wrap this with NewPrefixedObjectClient if PathPrefix is set
defer m.mtx.RUnlock()

// TODO(owen-d): safe to remove?
// Remove __name__="logs" as it's not needed in TSDB
should be safe to remove as we are not adding this in the first place when creating the chunk?
Introduces a "slimgester" component, which is essentially an ingester without write ahead logs, querying support (inc inverted indices or in-mem TSDB implementation), or limits application. It consumes a "job", aka
(partition, offset_range)
in kafka, writing our pre-existing format (chunks) to object storage. When finished, a TSDB is created from these chunks and written to storage.This creates a way to idempotently consume a job specification and commit it in a more optimized form to a storage backend. We'll use this to iterate later on new storage formats, a scheduler+worker architecture for building, etc.
See plan.txt in this PR for more detail.
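The job specification described above can be sketched roughly like this (hypothetical names and key layout; `Job` and `objectKey` are illustrations, not the PR's types): deriving a deterministic storage key from the (partition, offset_range) spec is one simple way to make consumption idempotent, since re-running the same job overwrites the same object instead of duplicating data.

```go
package main

import "fmt"

// Job is a sketch of the unit of work: a Kafka partition plus an
// offset range, consumed idempotently into object storage.
type Job struct {
	Partition int32
	StartOff  int64
	EndOff    int64 // exclusive
}

// objectKey derives a deterministic storage key from the job spec, so
// reprocessing the same job commits to the same location.
func (j Job) objectKey(tenant string) string {
	return fmt.Sprintf("%s/blocks/p%d/%d-%d", tenant, j.Partition, j.StartOff, j.EndOff)
}

func main() {
	j := Job{Partition: 3, StartOff: 1000, EndOff: 2000}
	fmt.Println(j.objectKey("tenant-a")) // tenant-a/blocks/p3/1000-2000
}
```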