Compressor Optimizer #367

Open · wants to merge 13 commits into main

Conversation

@Tabaie (Contributor) commented Nov 30, 2024

This PR implements issue #366.

The compressor now attempts to recompress everything from scratch upon encountering a full blob, before attempting to bypass compression altogether.
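
For illustration, here is a minimal sketch of that fallback order. All names are hypothetical and the "compression" is a trivial placeholder; this is not the actual compressor code.

```kotlin
// Illustrative sketch only: hypothetical names, placeholder "compression".
// It shows the order of fallbacks: cheap incremental estimate first, then a
// full recompression from scratch, and only then giving up on the blob.
class BlobSketch(private val blobLimit: Int) {
    private val blocks = mutableListOf<ByteArray>()
    private var estimatedSize = 0

    // Placeholder for compressing all blocks in a single pass; in the real
    // compressor this single pass tends to beat the running estimate.
    private fun compressAllSize(payload: List<ByteArray>): Int =
        payload.sumOf { it.size }

    fun tryAppend(block: ByteArray): Boolean {
        // 1. Usual path: cheap incremental estimate.
        if (estimatedSize + block.size <= blobLimit) {
            blocks += block
            estimatedSize += block.size
            return true
        }
        // 2. Blob looks full: recompress everything from scratch in one pass.
        val wholesale = compressAllSize(blocks + block)
        if (wholesale <= blobLimit) {
            blocks += block
            estimatedSize = wholesale
            return true
        }
        // 3. Still too big: the block does not fit, so the caller starts a
        //    new blob. (The existing "bypass compression" path is omitted.)
        return false
    }
}
```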

Checklist

  • I wrote new tests for my new core changes.
  • I have successfully run tests, style checker and build against my new changes locally.
  • I have informed the team of any breaking changes if there are any.

@Tabaie Tabaie added the enhancement, performances and Data compressor labels Nov 30, 2024
@Tabaie Tabaie self-assigned this Nov 30, 2024
@Tabaie Tabaie linked an issue Nov 30, 2024 that may be closed by this pull request
@Tabaie Tabaie had a problem deploying to docker-build-and-e2e November 30, 2024 01:15 — with GitHub Actions Error
@codecov-commenter commented Nov 30, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 70.42%. Comparing base (29a8d19) to head (2a11734).

Additional details and impacted files
@@             Coverage Diff              @@
##               main     #367      +/-   ##
============================================
+ Coverage     68.17%   70.42%   +2.24%     
+ Complexity     1125     1063      -62     
============================================
  Files           319      305      -14     
  Lines         12789    12188     -601     
  Branches       1275     1165     -110     
============================================
- Hits           8719     8583     -136     
+ Misses         3542     3120     -422     
+ Partials        528      485      -43     
| Flag | Coverage Δ | *Carryforward flag |
|------|------------|--------------------|
| hardhat | 98.70% <ø> (+0.10%) ⬆️ | |
| kotlin | 68.09% <ø> (+2.29%) ⬆️ | Carriedforward from 4552442 |

*This pull request uses carry forward flags.

see 54 files with indirect coverage changes

@Tabaie Tabaie temporarily deployed to docker-build-and-e2e November 30, 2024 01:25 — with GitHub Actions Inactive
@Tabaie Tabaie had a problem deploying to docker-build-and-e2e December 2, 2024 04:03 — with GitHub Actions Error
@Tabaie Tabaie had a problem deploying to docker-build-and-e2e December 3, 2024 17:12 — with GitHub Actions Error
@Tabaie Tabaie had a problem deploying to docker-build-and-e2e December 3, 2024 18:55 — with GitHub Actions Error
@gbotrel (Contributor) commented Dec 3, 2024

So to sum up: this is a non-breaking change; we could compress better (by ~7%) if we re-compressed the full blob each time we attempt to append a block, but doing so is too slow, so you propose to keep the original method, which would give a "preliminary result", and change the result behind the scenes between calls?

Wouldn't this have some side effects, @jpnovais? (i.e. basically the compressor could say "we compressed block 1, it takes N bytes, the current blob is now at 100kB", then recompress it with more context and update the internal state to "the current blob is 98kB" without notifying the coordinator.)

Re: implementation, before introducing an async optimizer, I'd prefer to understand the perf constraints better; i.e. how long it takes now, how long it would take if we recompressed the whole blob at each append, and within what limits we need to operate. If we say the compressor could take as much as XXXms, then we may just want to have simpler, cleaner code and kill this async pattern.

@jpnovais (Collaborator) commented Dec 3, 2024

My input on this optimization is:

Context:

  • The coordinator does not actively keep track of the current blob size; that is the responsibility of the compressor, which is aware of the blob limit.
  • The coordinator calls `fun CanWrite(data: ByteArray, data_len: Int): Boolean` before calling `Write`; if it returns false, the coordinator knows the blob is full and starts a new one.

My take based on the above:

  • As mentioned, async can be problematic, so I would avoid it. Also, I don't think we need it here.
  • My suggestion: when the coordinator calls CanWrite, before returning false the compressor would try to perform the full compression and see whether the data still fits (sketched below).
    • pros:
      • lazy computation: it only does the "full re-compression" when it reaches the limit;
      • 100% transparent to the coordinator and keeps the same API, which leaves room for further internal performance optimizations.

@Tabaie @gbotrel WDYT?
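
A rough sketch of that suggestion, for illustration only: the CanWrite/Write names come from the discussion above, while the internals (including the placeholder "compression") are invented.

```kotlin
// Sketch: CanWrite keeps its cheap estimate on the hot path and only runs
// the expensive full recompression when that estimate says the blob is full,
// just before it would answer false. Internals are hypothetical.
class CompressorSketch(private val blobLimit: Int) {
    private val pending = mutableListOf<ByteArray>()
    private var estimatedSize = 0

    // Placeholder for a single-pass compression of everything buffered so far.
    private fun fullCompressedSize(extra: ByteArray): Int =
        (pending + extra).sumOf { it.size }

    fun CanWrite(data: ByteArray, data_len: Int): Boolean {
        if (estimatedSize + data_len <= blobLimit) return true // fast path
        // Only at the end of a blob: recompress from scratch and check
        // whether the data fits after all.
        val full = fullCompressedSize(data)
        if (full <= blobLimit) {
            estimatedSize = full // tightened estimate; same API to the caller
            return true
        }
        return false // the blob really is full; the coordinator starts a new one
    }

    fun Write(data: ByteArray, data_len: Int) {
        pending += data
        estimatedSize += data_len // placeholder for the incremental estimate
    }
}
```

Since the tightened estimate stays internal to the compressor, nothing has to be pushed back to the coordinator, in line with the "100% transparent, same API" point above.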

@gbotrel (Contributor) commented Dec 3, 2024

Yep, doing it only when full would use less CPU. But that last call to CanWrite may be 10x slower than the previous calls.
So what are the bounds, perf-wise, that we need to operate in? If a call to CanWrite takes 500ms, is that acceptable? (Not saying it will; I just want a rough order of magnitude for the bound.)

@jpnovais (Collaborator) commented Dec 4, 2024

> So what are the bounds, perf-wise, that we need to operate in? If a call to CanWrite takes 500ms, is that acceptable? (Not saying it will; I just want a rough order of magnitude for the bound.)

It's OK for the call to CanWrite at the end of the blob to take 500ms, as long as the preceding calls are not affected time-wise.

@Tabaie Tabaie had a problem deploying to docker-build-and-e2e December 6, 2024 18:54 — with GitHub Actions Error
@Tabaie (Contributor, Author) commented Dec 6, 2024

I agree, doing it synchronously at the end is a good idea. In fact it's similar to the "no compress" logic we already have.
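
For concreteness, a hypothetical coordinator-side loop could look like the sketch below; CanWrite and Write are the calls discussed above, while StartNewBlob and everything else is invented for illustration.

```kotlin
// Hypothetical coordinator loop: the control flow is unchanged, and the only
// potentially slow call is the CanWrite that ends a blob.
interface BlobCompressor {
    fun CanWrite(data: ByteArray, data_len: Int): Boolean
    fun Write(data: ByteArray, data_len: Int)
    fun StartNewBlob() // invented: seal the full blob and begin a fresh one
}

fun packBlocks(blocks: List<ByteArray>, compressor: BlobCompressor) {
    for (block in blocks) {
        if (!compressor.CanWrite(block, block.size)) {
            // The compressor already tried a full recompression before saying
            // no, so the blob is genuinely full.
            compressor.StartNewBlob()
        }
        compressor.Write(block, block.size)
    }
}
```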

@Tabaie Tabaie temporarily deployed to docker-build-and-e2e December 6, 2024 21:23 — with GitHub Actions Inactive
gbotrel previously approved these changes Dec 9, 2024

@gbotrel gbotrel (Contributor) left a comment

LGTM 👍 I would probably just add one or two tests to ensure this is correctly triggered and that the internal state is correctly reset afterwards.

@jpnovais (Collaborator) commented:

> The difference, from the user's point of view, is output nondeterminism, as it will differ based on how much time the compressor has had to optimize.

What does this mean, "based on how much time the compressor has had to optimize" from the coordinator's PoV?

@Tabaie Tabaie had a problem deploying to docker-build-and-e2e December 16, 2024 14:59 — with GitHub Actions Error
@Tabaie (Contributor, Author) commented Dec 16, 2024

> The difference, from the user's point of view, is output nondeterminism, as it will differ based on how much time the compressor has had to optimize.
>
> What does this mean, "based on how much time the compressor has had to optimize" from the coordinator's PoV?

This was for the parallel optimizer, so it no longer applies; removing it from the description.

@Tabaie Tabaie temporarily deployed to docker-build-and-e2e December 16, 2024 17:33 — with GitHub Actions Inactive
Labels: Data compressor, enhancement (New feature or request), performances (Label the current work as being directed toward performance optimization)

Projects: None yet

Development: successfully merging this pull request may close the issue "Wholesale" blob compression.

4 participants