This report provides a performance evaluation of embedded key-value stores available to Java applications. The benchmark tests various workload sizes with RAM-based auto-scaling (capped at 1 million entries), testing different value sizes, access patterns, and implementation-specific configurations.
⚠️ SMOKETEST RESULTS

This report was generated from a smoketest run and should NOT be used for performance comparisons or production decisions. Smoketest results have:
- No warmup iterations
- Single iteration
- Minimal entry counts
- Short runtime
For valid performance results, run the benchmark script in benchmark mode instead.
The benchmark was executed on 2025-11-06 using LmdbJava Benchmarks with the following configuration:
| Library | Version | Abbreviation |
|---|---|---|
| LmdbJava (ByteBuffer) | 0.9.1 | LMDB BB |
| LmdbJava (Agrona DirectBuffer) | 0.9.1 | LMDB DB |
| LMDBJNI | 0.4.7 | LMDB JNI |
| LWJGL | 3.3.6 | LMDB JGL |
| LevelDB | 1.8 | LevelDB |
| RocksDB | 10.4.2 | RocksDB |
| MapDB | 3.1.0 | MapDB |
| MVStore | 2.4.240 | MVStore |
| Xodus | 2.0.1 | Xodus |
| Chronicle Map | 3.27ea1 | Chronicle |
| Environment | Value |
|---|---|
| CPU | Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (4 cores) |
| RAM | 16 GiB |
| OS | Linux 6.11.0-1018-azure (x86_64) |
| Java | 25.0.1 |
All benchmarks were executed by JMH
with default operating system and JVM configuration. The /tmp directory was
used as the work directory during each benchmark.
The following operations are measured:
- readKey: Fetch each entry by presenting its key
- write: Bulk insert entries into the store
- readXxh64: Iterate over entries computing XXH64 hash of keys and values
- readSeq: Iterate over key-ordered entries in forward order
- readRev: Iterate over key-ordered entries in reverse order
- readCrc: Iterate over entries computing CRC32 of keys and values

All storage sizes reflect actual bytes consumed on disk (via POSIX stat), not
apparent size. Chronicle Map only supports the readKey and write benchmarks,
as it does not provide ordered key iteration.
This run tests various LMDB implementation options using 100-byte values to determine optimal settings for subsequent benchmarks. All tests use sequential integer keys.
LmdbJava supports multiple buffer types including Java's ByteBuffer in both
safe and unsafe modes. The unsafe mode (default) uses sun.misc.Unsafe for
direct memory access. The graph shows consistent overhead when forcing safe
mode, confirming that unsafe mode provides better performance and should be
used for production workloads.
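For context, the buffer type is selected when the LmdbJava environment is created by passing a buffer proxy. The following is a minimal sketch (path and map size are illustrative, not the benchmark's settings) of the ByteBuffer configuration, with the safe and Agrona alternatives noted in comments:

```java
import static org.lmdbjava.ByteBufferProxy.PROXY_OPTIMAL;

import java.io.File;
import java.nio.ByteBuffer;
import org.lmdbjava.Env;

public final class BufferModes {

  public static void main(final String[] args) {
    final File dir = new File("/tmp/lmdb-buffer-modes"); // illustrative path
    dir.mkdirs();

    // Default, Unsafe-backed ByteBuffer proxy ("LMDB BB" in this report).
    try (Env<ByteBuffer> env = Env.create(PROXY_OPTIMAL)
        .setMapSize(64 * 1_048_576L)
        .open(dir)) {
      // ... open a Dbi and run transactions ...
    }

    // Alternatives exercised by this run:
    //   Env.create(ByteBufferProxy.PROXY_SAFE)  -> safe mode, avoids sun.misc.Unsafe
    //   Env.create(DirectBufferProxy.PROXY_DB)  -> Agrona DirectBuffer ("LMDB DB")
  }
}
```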
This graph shows the impact of LMDB's MDB_NOSYNC flag on write performance.
As expected, requiring fsync on every transaction commit is significantly slower
than allowing the OS to manage sync operations. For maximum write performance,
sync is disabled in subsequent benchmarks.
LMDB's MDB_WRITEMAP flag enables a writable memory map, improving write
performance by allowing direct writes to the mapped region. The graph confirms
that enabling write map improves write latency across all LMDB implementations.
This setting is enabled for all subsequent benchmarks.
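Both flags are supplied when the LmdbJava environment is opened. A minimal sketch, with an illustrative path and map size, of passing MDB_NOSYNC and MDB_WRITEMAP together:

```java
import java.io.File;
import java.nio.ByteBuffer;
import org.lmdbjava.Env;
import org.lmdbjava.EnvFlags;

public final class EnvFlagsExample {

  public static void main(final String[] args) {
    final File dir = new File("/tmp/lmdb-flags"); // illustrative path
    dir.mkdirs();

    // MDB_NOSYNC lets the OS schedule flushes rather than fsync-ing every
    // commit; MDB_WRITEMAP uses a writable memory map for direct writes.
    // Both trade durability guarantees for write performance.
    try (Env<ByteBuffer> env = Env.create()
        .setMapSize(64 * 1_048_576L)
        .setMaxDbs(1)
        .open(dir, EnvFlags.MDB_NOSYNC, EnvFlags.MDB_WRITEMAP)) {
      // ... open a Dbi and run write transactions ...
    }
  }
}
```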
Some of the later runs require larger value sizes in order to explore behaviour at higher memory workloads. This run therefore focused on finding reasonable value sizes around 2, 4, 8 and 16 KB. Only the native implementations were benchmarked.
This benchmark wrote randomly-ordered integer keys, with value sizes as indicated on the horizontal axis.
As shown, LevelDB and RocksDB achieve consistent performance across value sizes. LMDB shows degradation if entry sizes are not well-aligned with its page size: exceeding a page-aligned entry size by even a single byte requires an additional page. For example, moving from 2,026 byte values (a 2,030 byte entry including the 4 byte integer key) to 2,027 byte values causes increased storage requirements. If storage space is a concern, entry sizes should reflect LMDB page sizing requirements. Optimal entry sizes are (in bytes) 2,030, 4,084, 8,180, 12,276 and so on in 4,096 byte increments.
Given there is no disadvantage to LevelDB or RocksDB by using entry sizes that align well with LMDB page sizes, these will be used in later runs. Ensuring overall storage requirements are similar also enables a more reasonable comparison of each implementation's performance (as distinct from storage) trade-offs.
LevelDB and RocksDB are both LSM-based stores and benefit from inserting data in batches. Both implementations handled the large value sizes across a variety of very large batch sizes. The graph below illustrates the impact of batch size when writing sequential integer keys with 8,176 byte values.
Testing found that RocksDB failed with insufficient file handles when using large batch sizes. This was overcome with system configuration adjustments. It is therefore important to consider the impact of LSM-based implementations on servers with file handle constraints. Such constraints may be related to memory, competing uses or security policies.
One limitation of this report is it only measures the time taken for the client thread to complete a given read or write workload. The LSM-based implementations also use a separate compaction thread to rewrite the data. This thread overhead is therefore not measured by the benchmark and not reported here. Given the compaction thread remains very busy during sustained write operations, the LSM-based implementations reduce the availability of a second core for end user application workloads. This may be of concern on CPU-constrained servers.
Finally, LSM-based implementations typically offer considerable tuning options. Users are expected to tune the store based on their workload type, storage type and file system configuration. Such extensive tuning was not conducted in this benchmark because the workload was very comfortably memory-bound and an effort had already been made to determine reasonable batch sizes. A production LSM deployment will need to tune these parameters carefully. A key feature of the non-LSM implementations is they do not require such tuning.
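For illustration, the sketch below shows the general shape of a batched write loop using the RocksDB Java API (LevelDB offers an analogous write batch). The batch size, path and key encoding are illustrative rather than the benchmark's actual settings:

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

public final class BatchedWrites {

  public static void main(final String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    final int batchSize = 10_000;        // illustrative batch size
    final byte[] value = new byte[8_176];

    try (Options options = new Options().setCreateIfMissing(true);
         RocksDB db = RocksDB.open(options, "/tmp/rocks-batch");
         WriteOptions writeOptions = new WriteOptions()) {
      WriteBatch batch = new WriteBatch();
      for (int key = 0; key < 1_000_000; key++) {
        batch.put(intKey(key), value);
        if (batch.count() == batchSize) {
          db.write(writeOptions, batch); // flush the accumulated batch
          batch.close();
          batch = new WriteBatch();
        }
      }
      db.write(writeOptions, batch);     // flush any remainder
      batch.close();
    }
  }

  // Encode the integer key as 4 big-endian bytes so keys sort numerically.
  private static byte[] intKey(final int key) {
    return java.nio.ByteBuffer.allocate(4).putInt(key).array();
  }
}
```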
This is a comprehensive test of all libraries with 100-byte values, comparing integer versus string keys and sequential versus random access patterns. The vertical (y) axis of each graph uses a log scale.
| Implementation | Bytes | Overhead % |
|---|---|---|
| (Flat Array) | 104,000 | 0.00 |
| MVStore | 118,784 | 14.22 |
| Xodus | 118,784 | 14.22 |
| Chronicle | 122,880 | 18.15 |
| LevelDB | 122,880 | 18.15 |
| RocksDB | 167,936 | 61.48 |
| LMDB JNI | 188,416 | 81.17 |
| LMDB DB | 196,608 | 89.05 |
| LMDB JGL | 196,608 | 89.05 |
| LMDB BB | 200,704 | 92.98 |
| MapDB | 2,097,152 | 1916.49 |
We begin by reviewing the storage space required by each implementation's data files. We can see that MVStore, Xodus, Chronicle and LevelDB are very efficient, requiring less than 20% overhead to store the data. LMDB requires around 89% more bytes than the size of a flat array, due to its B+ tree layout and copy-on-write page allocation approach. These collectively provide higher read performance and underpin LMDB's MVCC-based ACID transactional support. As we will see later, this overhead reduces as the value sizes are increased.
We start with the most mechanically sympathetic workload. If you have integer keys and can insert them in sequential order, the above graphs illustrate the type of latencies achievable across the various implementations. LMDB is clearly the fastest option, even (surprisingly) including writes.
Here we simply run the same benchmark as before, but with string keys instead of integer keys. Our string keys are the same integers as in our last benchmark, but this time they are recorded as zero-padded strings. LMDB continues to perform better than any alternative, including for writes. This confirms the previous result seen with sequentially-inserted integer keys.
Next up we farewell mechanical sympathy and apply some random workloads. Here
we write the keys out in random order, and we read them back (the readKey
benchmark) in that same random order. The remaining operations are all cursors
over sequentially-ordered keys. The graphs show LMDB is consistently faster for
all operations, even including writes.
This benchmark is the same as the previous, except with our zero-padded string keys. There are no surprises; we see similar results as previously reported.
This run tests larger value sizes (2,026 bytes) to explore behavior at higher
memory workloads. Based on Run 4 showing that integer and string keys perform
effectively the same, this run only includes integer keys. Similarly, to reduce
execution time, the readRev, readCrc and readXxh64 benchmarks are
excluded (we retain readSeq and readKey to illustrate cursor and direct
lookup performance).
| Implementation | Bytes | Overhead % |
|---|---|---|
| (Flat Array) | 2,030,000 | 0.00 |
| LevelDB | 2,048,000 | 0.89 |
| Xodus | 2,048,000 | 0.89 |
| RocksDB | 2,093,056 | 3.11 |
| Chronicle | 2,449,408 | 20.66 |
| LMDB JNI | 2,760,704 | 36.00 |
| LMDB DB | 2,768,896 | 36.40 |
| LMDB JGL | 2,768,896 | 36.40 |
| LMDB BB | 2,805,760 | 38.21 |
| MapDB | 6,291,456 | 209.92 |
All implementations offer much better storage efficiency now that the value sizes have increased (from 100 bytes in Run 4 to 2,026 bytes in Run 5).
Starting with the most optimistic scenario of sequential keys, we see LMDB out-perform the alternatives for both read and write workloads. Chronicle Map's write performance is good, but it should be remembered that it is not an index suitable for ordered key iteration.
LMDB easily remains the fastest with random reads. However, random writes involving these larger values are a different story, with the two native LSM implementations completing the write workloads much faster than LMDB.
This run explores much larger workloads with 4 to 16 KB value sizes. Given the weaker performance of the pure Java sorting implementations (particularly for writes), they are not included in Run 6. The unsorted Chronicle Map continues to be included. Only random access patterns are tested, as they represent the worst-case scenario.
| Implementation | Bytes | Overhead % |
|---|---|---|
| (Flat Array) | 4,084,000 | 0.00 |
| LevelDB | 4,104,192 | 0.49 |
| RocksDB | 4,149,248 | 1.60 |
| LMDB DB | 4,157,440 | 1.80 |
| Chronicle | 4,902,912 | 20.05 |
With 4,080 byte values, storage efficiency is now excellent.
We can see the larger value sizes are starting to even out the write speeds. Chronicle Map continues to write the fastest, but it should be remembered that it is not an index suitable for ordered key iteration. LMDB offers the fastest read performance.
| Implementation | Bytes | Overhead % |
|---|---|---|
| (Flat Array) | 8,180,000 | 0.00 |
| LevelDB | 8,200,192 | 0.25 |
| RocksDB | 8,245,248 | 0.80 |
| LMDB DB | 8,253,440 | 0.90 |
| Chronicle | 9,801,728 | 19.83 |
The trend toward better storage efficiency with larger values has continued.
Now that much larger values are in use, we start to see the LSM implementations slowed down by write amplification. LMDB offers the fastest reads.
| Implementation | Bytes | Overhead % |
|---|---|---|
| (Flat Array) | 16,372,000 | 0.00 |
| LevelDB | 16,392,192 | 0.12 |
| RocksDB | 16,437,248 | 0.40 |
| LMDB DB | 16,445,440 | 0.45 |
| Chronicle | 20,975,616 | 28.12 |
All implementations offer very good storage space efficiency compared with a flat array.
The write amplification issue seen with the earlier 8,176 byte benchmark continues, with the LSM implementations further slowing down.
After testing various workloads across different value sizes, we have seen a number of important differences between the implementations.
Before discussing the ordered key implementations, it is noted that Chronicle Map offers a good option for unordered keys. It is consistently fast for both reads and writes, and it is storage space efficient. Chronicle Map also has a different scope from the other embedded key-value stores in this report; for example, it lacks transactions but does offer replication.
Ordered key implementations were the focus of this report. Those use cases which can employ ordered keys will always achieve much better read performance by iterating over a cursor. We saw this regardless of entry size, original write ordering, or even implementation. It is worth devising a key structure that enables ordered iteration whenever possible.
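As a rough sketch of what cursor-based iteration looks like with LmdbJava (environment and database setup omitted; names are illustrative), a forward scan in the style of the readSeq benchmark can be expressed as:

```java
import java.nio.ByteBuffer;
import org.lmdbjava.CursorIterable;
import org.lmdbjava.CursorIterable.KeyVal;
import org.lmdbjava.Dbi;
import org.lmdbjava.Env;
import org.lmdbjava.KeyRange;
import org.lmdbjava.Txn;

final class CursorScan {

  // Iterate every entry in key order within a single read transaction.
  static long sumValueBytes(final Env<ByteBuffer> env, final Dbi<ByteBuffer> db) {
    long bytes = 0;
    try (Txn<ByteBuffer> txn = env.txnRead();
         CursorIterable<ByteBuffer> it = db.iterate(txn, KeyRange.all())) {
      for (final KeyVal<ByteBuffer> kv : it) {
        bytes += kv.val().remaining();   // process kv.key() / kv.val() here
      }
    }
    return bytes;
  }
}
```

Reverse iteration (as in readRev) would use KeyRange.allBackward() in place of KeyRange.all().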
Pure Java sorting implementations (MapDB, MVStore, Xodus) generally showed weaker performance compared with the native implementations (Chronicle Map, LMDB, RocksDB and LevelDB). GC tuning may improve these results.
LMDB was always the fastest implementation for every read workload. This is unsurprising given its B+ Tree and copy-on-write design. LMDB's excellent read performance is sustained regardless of entry size or access pattern.
Write workloads show more variation in the results. Small value sizes (100 bytes) were written more quickly by LMDB than any other sorted key implementation. As value sizes increased toward 2 KB, this situation reversed and LMDB became much slower than RocksDB and LevelDB. However, once value sizes reached the 4 KB region, the differences between LMDB, LevelDB and RocksDB diminished significantly. At 8 KB and beyond, LMDB was materially faster for writes. This finding is readily explained by the write amplification necessary in LSM-based implementations.
All implementations became more storage space efficient as the value sizes increased. LMDB was relatively inefficient at small value sizes (89% overhead with 100 byte values) but the overhead became minimal (under 2%) by the time values reached 4 KB. Modern Java compression libraries such as LZ4-Java (for general-purpose cases) and JavaFastPFOR (for integers) may also provide enhanced storage efficiency by packing related data into chunked, compressed values. This may also improve performance in the case of IO bottlenecks, as the CPU can decompress while waiting on further IO.
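As an illustration of the chunk-compression idea using LZ4-Java (the chunking scheme and length handling shown here are assumptions, not something measured in this benchmark):

```java
import java.nio.charset.StandardCharsets;
import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Factory;
import net.jpountz.lz4.LZ4FastDecompressor;

final class ValueCompression {

  private static final LZ4Factory FACTORY = LZ4Factory.fastestInstance();

  // Compress a packed chunk of related entries before writing it as one value.
  static byte[] compress(final byte[] chunk) {
    final LZ4Compressor compressor = FACTORY.fastCompressor();
    return compressor.compress(chunk);
  }

  // LZ4 block decompression needs the original length, so it must be stored
  // alongside the value (for example, as a small length prefix).
  static byte[] decompress(final byte[] compressed, final int originalLength) {
    final LZ4FastDecompressor decompressor = FACTORY.fastDecompressor();
    return decompressor.decompress(compressed, originalLength);
  }

  public static void main(final String[] args) {
    final byte[] chunk = "packed, related entries...".getBytes(StandardCharsets.UTF_8);
    final byte[] stored = compress(chunk);
    final byte[] restored = decompress(stored, chunk.length);
    System.out.println(new String(restored, StandardCharsets.UTF_8));
  }
}
```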
In terms of broader efficiency, LMDB operates in the same thread as its caller, so the performance reported above reflects the full cost of using LMDB. On the other hand, RocksDB and LevelDB use a second thread for write compaction, and this second thread may compete with application workloads on busy servers. We also see a similar efficiency concern around operating system file handle consumption: while LMDB only requires two open files, RocksDB and LevelDB both require tens to hundreds of thousands of open files to operate.
The qualitative dimensions of each implementation should also be considered. For example, consider recovery time from dirty shutdown (process/OS/server crash), ACID transaction guarantees, inter-process usage flexibility, runtime monitoring requirements, hot backup support and ongoing configuration effort. In these situations LMDB delivers a very strong solution. For more information, see the LmdbJava features list.