The new design of data valuation methods avoids repeated computations of the utility function without relying on caching. We could therefore get rid of our current memcached-based caching implementation, which seems like overkill. This would close several issues related to caching (e.g. #517, #475, #464 and #459). Moreover, it could solve problems that arise from the many files the current caching solution creates.
The only situation where caching is still really important is benchmarking: when comparing multiple algorithms, caching both keeps randomness as constant as possible across algorithms and saves runtime. We should therefore create an entry point for benchmarking frameworks to enable caching. I see two possible solutions:
1. Use a simple shared-memory cache to store all utility evaluations and return them as part of the ValuationResult. A benchmarking library could then use these evaluations to build up a cache. All logic to wrap the Utility with a cached version would live in the benchmarking library.
2. Keep the cache_backend abstraction in the Utility but only implement a much simpler shared-memory backend in pydvl (see the sketch after this list). Users with advanced caching needs could then build their own backends.
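A minimal sketch of what the second option could look like. The SharedMemoryCacheBackend name and its get/set interface are assumptions for illustration, not pydvl's actual cache_backend API:

```python
from multiprocessing import Manager
from typing import Callable, FrozenSet


class SharedMemoryCacheBackend:
    """Hypothetical minimal backend: a Manager-backed dict shared
    across worker processes. The get/set interface is an assumption
    for illustration, not pydvl's actual cache_backend API."""

    def __init__(self) -> None:
        # Keep a reference to the Manager so its server process stays alive.
        self._manager = Manager()
        self._store = self._manager.dict()

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value) -> None:
        self._store[key] = value


def with_cache(
    utility: Callable[[FrozenSet[int]], float],
    backend: SharedMemoryCacheBackend,
) -> Callable[[FrozenSet[int]], float]:
    """Wraps a utility u(S) so that repeated evaluations of the same
    subset return the cached score instead of retraining the model."""

    def cached_utility(subset: FrozenSet[int]) -> float:
        value = backend.get(subset)
        if value is None:
            value = utility(subset)
            backend.set(subset, value)
        return value

    return cached_utility
```

A benchmarking library could use the same get/set interface to pre-populate the store across runs, which is what makes the entry point useful.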
Now that we only use joblib for parallelizing data valuation algorithms, we could also leverage its caching mechanism through the Memory class, and perhaps only offer one extension to support caching in a distributed setting.
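For reference, a minimal sketch of joblib's Memory in use; the utility function body and cache location are made up for illustration:

```python
from joblib import Memory

# joblib hashes the function arguments and stores results on disk.
memory = Memory(location="./utility_cache", verbose=0)


@memory.cache
def utility(subset: tuple) -> float:
    # Stand-in for training a model on the given subset and returning
    # its score; in pydvl this would be the actual utility computation.
    return float(len(subset))


utility((1, 2, 3))  # computes and writes the result to disk
utility((1, 2, 3))  # returns the memoized result
```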
I tried using it when I refactored the caching backends and couldn't really make it work with memcached, because joblib's Memory is implemented as file-based caching. So I gave up on basing our code on it, but I still took heavy inspiration from its interface, so perhaps we could consider it again.