
NDBench UI


NDBench operations can be controlled either through a REST API or from a UI. A screenshot of the NDBench Runner (Web UI) is shown below. Through the UI, a user can select a cluster, connect a driver, modify settings, set a load-testing pattern (random or sliding window), and finally run the load tests. Selecting an instance while a load test is running also lets the user view live-updating statistics, such as read/write latencies, requests per second, cache hits vs. misses, and more.
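The same controls are available over the REST API. The sketch below drives a single node programmatically; the host, port, and the `/REST/ndbench/driver/start` path are assumptions for illustration and depend on how NDBench is deployed, so check the running instance for the actual endpoint names.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NdbenchRestExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint: start the configured driver on one NDBench node.
        // The actual path and port depend on the deployment.
        HttpRequest start = HttpRequest.newBuilder()
                .uri(URI.create("http://ndbench-node:8080/REST/ndbench/driver/start"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(start, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```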

[Screenshot: NDBench Runner (Web UI), ndbench-ui-2]

Load Properties

NDBench provides a variety of input parameters that are loaded dynamically and can be changed at runtime while a workload test is in progress. The following parameters can be configured on a per-node basis (a sketch of how they shape the load follows the list):

  • numKeys: the sample space for the randomly generated keys
  • numValues: the sample space for the generated values
  • dataSize: the size of each value
  • numWriters/numReaders: the number of threads per NDBench node for writes/reads
  • writeEnabled/readEnabled: boolean to enable or disable writes or reads
  • writeRateLimit/readRateLimit: the number of writes per second and reads per second
  • userVariableDataSize: boolean to enable or disable payloads of randomly varying size.
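The sketch below models how these per-node properties interact; it is illustrative only, not NDBench's implementation. The constants mirror the parameters above, and Guava's RateLimiter stands in for the node-level throttling.

```java
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.google.common.util.concurrent.RateLimiter;

// Illustrative model of the per-node load properties; not NDBench's actual code.
public class LoadPropertiesSketch {

    static final int NUM_KEYS = 1000;              // numKeys: key sample space
    static final int DATA_SIZE = 128;              // dataSize: bytes per value
    static final int NUM_WRITERS = 4;              // numWriters: writer threads on this node
    static final double WRITE_RATE_LIMIT = 500.0;  // writeRateLimit: writes/sec for the node
    static final boolean USER_VARIABLE_DATA_SIZE = false; // randomly sized payloads if true

    public static void main(String[] args) {
        RateLimiter limiter = RateLimiter.create(WRITE_RATE_LIMIT); // shared node-level throttle
        ExecutorService pool = Executors.newFixedThreadPool(NUM_WRITERS);

        for (int i = 0; i < NUM_WRITERS; i++) {
            pool.submit(() -> {
                Random rnd = new Random();
                for (int op = 0; op < 1_000; op++) {
                    limiter.acquire();                           // respect writeRateLimit
                    String key = "key-" + rnd.nextInt(NUM_KEYS); // uniform draw over the key space
                    int size = USER_VARIABLE_DATA_SIZE ? 1 + rnd.nextInt(DATA_SIZE) : DATA_SIZE;
                    byte[] value = new byte[size];
                    rnd.nextBytes(value);                        // random payload
                    // write(key, value) would be delegated to the plugged-in client driver
                }
            });
        }
        pool.shutdown();
    }
}
```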

Workload Types

NDBench offers pluggable load tests. Currently, it supports two modes:

  • random traffic
  • sliding window traffic: a more sophisticated test that concurrently exercises data that is repetitive inside the window, thereby providing a combination of temporally local and spatially local data. This test is important because we want to exercise both the caching layer provided by the data store and the disk’s IOPS (Input/Output Operations Per Second). A sketch of this key selection follows the list.
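The following is a minimal sketch of how sliding-window key selection can yield this mix of repeated (hot) and newly introduced (cold) keys; the class name and window mechanics are illustrative assumptions, not NDBench's actual generator.

```java
import java.util.Random;

// Keys are drawn uniformly from a window that advances over time, so keys repeat
// while they are inside the window (exercising the cache) and age out as it slides
// (forcing reads from disk). Illustrative sketch only.
public class SlidingWindowKeyGenerator {

    private final int windowSize;    // number of keys that are "hot" at any moment
    private final long windowStepMs; // how often the window advances by one key
    private final Random rnd = new Random();

    public SlidingWindowKeyGenerator(int windowSize, long windowStepMs) {
        this.windowSize = windowSize;
        this.windowStepMs = windowStepMs;
    }

    public long nextKey() {
        long windowStart = System.currentTimeMillis() / windowStepMs;
        // Uniform draw inside [windowStart, windowStart + windowSize)
        return windowStart + rnd.nextInt(windowSize);
    }
}
```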

Load Generation

Load can be generated individually for each node on the application side, or all nodes can generate reads and writes simultaneously. Moreover, NDBench provides a “backfill” feature to start the workload with hot (pre-populated) data, which reduces the ramp-up time of the benchmark.