
Releases: martinsumner/leveled

Riak KV 2.9.2 - Release

07 Apr 09:58
9412e7c

This release includes an option to use recalc as a journal compaction and reload strategy, and a change to the LSM-tree compaction strategy so that half of the compaction file choices are now made by grooming (rather than all being random).
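As a minimal sketch of selecting the new strategy (the option name and tag atom here are assumptions based on leveled's start-time options, not verified against this release), recalc might be enabled per tag when starting the bookie:

```erlang
%% Sketch only: the {reload_strategy, ...} option name and the Riak
%% tag atom (o_rkv) are assumptions - check leveled_bookie for the
%% exact start options in this release.
{ok, Bookie} = leveled_bookie:book_start(
                 [{root_path, "/tmp/leveled_data"},
                  {reload_strategy, [{o_rkv, recalc}]}]).
```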

Riak KV 2.9.1 - Release

12 Feb 09:33
0215555

Tag for Riak release

Riak KV 2.9.0 - Release - Patch5

20 Nov 10:36
4d550ef

This resolves two issues related to the handling of corruption in the file system:

#298
#301

Riak KV 2.9.0 - Release - Patch4

29 Aug 20:44
432fe71

Riak KV 2.9.0 - Release - Patch3

26 Jul 20:46
e3913a6

This patch resolves a number of issues related to both memory management and compaction management.

Performance could be slow after a restart of a node, as the page cache had not yet been populated with the ledger contents. The page cache is now partially pre-loaded (using fadvise) if the ledger is expected to support direct lookups. When tested on Ubuntu, this worked intelligently, with the loading of the page cache prompted as a background task that did not block the startup of the process.
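The underlying technique is a page-cache hint. As an illustrative sketch (the file name is hypothetical, and this is not the leveled code itself), OTP exposes posix_fadvise through file:advise/4, which can ask the OS to pre-load a file's pages:

```erlang
%% Illustrative sketch of the fadvise technique (file name is
%% hypothetical): hint to the OS that the whole file will be needed,
%% so it can populate the page cache in the background.
{ok, FH} = file:open("ledger_file.sst", [read, raw, binary]),
Size = filelib:file_size("ledger_file.sst"),
ok = file:advise(FH, 0, Size, will_need).
```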

There was a significant defect in the lazy loading of the block header cache in leveled_sst. The cache held a sub-binary referencing a slice of the block, meaning that the parent reference (to the whole block) could not be garbage collected. As a result, following a restart of a store, leveled could significantly over-consume binary heap memory.
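For context, this is a general Erlang pitfall: matching out part of a binary yields a sub-binary that shares memory with, and therefore pins, its parent. A minimal illustration of the fix pattern (not the leveled code itself):

```erlang
%% A sub-binary shares memory with its parent, so caching HeaderBin
%% as-is would keep the whole 4KB block alive.
BlockBin = crypto:strong_rand_bytes(4096),
<<HeaderBin:32/binary, _/binary>> = BlockBin,
%% binary:copy/1 produces a standalone binary, allowing the parent
%% block to be garbage collected.
CacheableHeader = binary:copy(HeaderBin).
```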

Garbage collection has been accelerated by using hibernate (and in some cases manual GC) during compaction events, to reduce cases where a process held garbage but failed to collect it due to a minimal reduction count. This should make the memory consumed by leveled more stable and predictable during these events.
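As a sketch of the general pattern (not the leveled implementation): a gen_server can force a collection and/or hibernate after an infrequent, garbage-heavy event, rather than waiting to accumulate enough reductions for a natural GC:

```erlang
-module(gc_sketch).
-behaviour(gen_server).
-export([start_link/0, compaction_complete/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() -> gen_server:start_link(?MODULE, [], []).
compaction_complete(Pid) -> gen_server:cast(Pid, compaction_complete).

init([]) -> {ok, #{}}.
handle_call(_Request, _From, State) -> {reply, ok, State}.

%% After a rare, garbage-heavy event, collect immediately and then
%% hibernate (which itself compacts the process heap), so the process
%% does not sit on garbage waiting for a reduction-driven GC.
handle_cast(compaction_complete, State) ->
    garbage_collect(),
    {noreply, State, hibernate}.
```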

Journal compaction did not handle KeyDelta objects well, and could go into a loop in which it continually re-compacted the same object without any benefit in disk space reduction. Journals could also end up with very large object counts (due to KeyDeltas, tombstones, or just many small objects), which would drag on performance; journal size is now restricted by both bytes on disk and object count, as sketched below.
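A sketch of configuring both caps (the option names, in particular max_journalobjectcount, are assumptions; check leveled_bookie's start options for this release):

```erlang
%% Sketch only - option names are assumptions, not verified against
%% this release. Cap journal files by bytes on disk AND object count.
{ok, Bookie} = leveled_bookie:book_start(
                 [{root_path, "/tmp/leveled_data"},
                  {max_journalsize, 1000000000},       % cap on bytes on disk
                  {max_journalobjectcount, 200000}]).  % cap on objects per file
```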

Riak KV 2.9.0 - Release - Patch2

19 Jun 09:36
b2d4d76

Patched for issues:

#282
#285

This resolves an issue whereby sending a burst of very large objects to Riak with a leveled backend would result in leveled consuming far more binary memory than expected. This memory would not be released unless further PUTs were received (to trigger a write to file). This could cause particular problems during transfers, when the highest bucket name contained a large number of large objects.

Riak KV 2.9.0 - Release - Patch1

23 May 11:21
da59901

Patched for issues:

#278
#280

As per issue:

basho/riak_kv#1699

Riak KV 2.9.0 - Release

11 May 15:01
984a2ab

Tag for Riak release

Riak KV 2.9.0 - Release Candidate 3

27 Feb 14:43
Pre-release

Target for RC3 on Riak 2.9.0

Release Candidate 9 - Riak RC0

18 Dec 16:30
d5a9f2e
Pre-release

Updated to resolve an issue with make test in Riak.