How to benchmark

Benchmarks 🏋🏼

How to enable and use the runtime-benchmarks feature for a pallet

To fully enable and use the runtime-benchmarks feature for a pallet, the following steps are needed:

  • Define the runtime-benchmarks feature for your pallet (see the Cargo.toml sketch below) -> #49e6d57967a84c3a18562c7835810dfaac390b24
  • Create the benchmarking.rs file and add at least one benchmarking test -> #97dd0f52fbcc935187efd6c2a02824a41650e030
  • Add our pallet to the list of pallets that can be benchmarked -> #5c1b57651c1d728eaf0b7a3bbbb11de8d51c4d32
  • Use the provided script ./scripts/run_benchmarks.sh to generate the weights -> #adc11f5bb12870adab38e8d8772971bc00923271
  • Define the WeightInfo trait and use dummy weights for tests and real weights for the runtime -> #7fd34dbe658ae69411bc2bf9efb1b19e9feaa45f

Each commit hash points to a commit that performs exactly that step. You can find all of those commits and changes by checking out the examples/runtime-benchmarks branch.
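Two of the steps above are worth sketching. For step 1, a pallet's Cargo.toml typically forwards the runtime-benchmarks feature to its FRAME dependencies. This is only a sketch; the exact dependency list depends on what your pallet uses:

[features]
runtime-benchmarks = [
	"frame-benchmarking/runtime-benchmarks",
	"frame-support/runtime-benchmarks",
	"frame-system/runtime-benchmarks",
]

For step 5, the usual convention is a WeightInfo trait with one method per extrinsic, where the unit type () provides the dummy weights used by the mock runtime in tests and the generated weights file provides the real implementation for the runtime. A rough sketch (the method name and the exact Weight constructor depend on your pallet and Substrate version):

use frame_support::weights::Weight;

pub trait WeightInfo {
	fn do_something() -> Weight;
}

/// Dummy weights, used by the mock runtime in tests.
impl WeightInfo for () {
	fn do_something() -> Weight {
		Weight::zero()
	}
}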

How to write benchmarks

Important Note

Writing benchmark tests isn't difficult by itself, but there are certain things that need to be understood:

  • While you write them, rust-analyzer (linting and code completion) won't work.
  • Benchmark tests run in a different environment than the regular tests. While the regular tests use the Mock environment, benchmarking uses the --chain dev one.
  • Since it uses the --chain dev environment, we need to make sure that the chain data is in a valid state before executing a test.

Example

#![cfg(feature = "runtime-benchmarks")]

use super::*;
use frame_benchmarking::{account as benchmark_account, benchmarks, impl_benchmark_test_suite};
use frame_system::RawOrigin;
use sp_std::prelude::*;

use crate::Pallet as Custom;

/// This is a helper function to get an account.
pub fn get_account<T: Config>(name: &'static str) -> T::AccountId {
	let account: T::AccountId = benchmark_account(name, 0, 0);
	account
}

/// This is a helper function to get an `origin`.
pub fn origin<T: Config>(name: &'static str) -> RawOrigin<T::AccountId> {
	RawOrigin::Signed(get_account::<T>(name))
}

/// Prepare the benchmark with all the data it needs.
pub fn prepare_benchmarks<T: Config>() {
	// Here you can do things like:
	//  - Setup account balances
	//  - Pre-mint NFTs and Collections
	//  - Assert that the start state is valid
	//  ...

	// Example on how to set balance of an account
	// let alice: T::AccountId = get_account::<T>("ALICE");
	// T::Currency::make_free_balance_be(&alice, BalanceOf::<T>::max_value() / 2);
}

benchmarks! {
	do_something {
		// First we need to prepare all the data and assert that we are in a valid state.
		prepare_benchmarks::<T>();

		// Here you would get the initial data that you are interested in.
		let alice = origin::<T>("ALICE");
		let old_value = Custom::<T>::something();
		let new_value = 100u32;
		assert_ne!(old_value, Some(new_value));

		// After that we call the extrinsic that we want to bench.
	}: _(alice, new_value)
	verify {
		// Here we do some light asserting. It's expected that the tests that are written cover all the possible cases.
		assert_eq!(Custom::<T>::something(), Some(new_value));
	}
}

impl_benchmark_test_suite!(Custom, crate::mock::new_test_ext(), crate::mock::Test);

Let's go step by step and see what's happening.

Benchmark macro

/// The macro allows for a number of "arms", each representing an individual benchmark. Using the
/// simple syntax, the associated dispatchable function maps 1:1 with the benchmark and the name of
/// the benchmark is the same as that of the associated function. However, extended syntax allows
/// for arbitrary expressions to be evaluated in a benchmark (including for example,
/// `on_initialize`).
benchmarks! {
  ...
}

This is the key part of benchmarking. Each arm represents one benchmark test. With the simple syntax, the name of the arm is the same as the name of the extrinsic.

Benchmark test

{
	/// The name of the benchmark test is the same as the name of the extrinsic.
	do_something {
		/// Here we make sure that everything is prepared before we call the do_something extrinsic.
		...
	}: _(alice, new_value) /// Instead of writing the name of the extrinsic that we want to bench here, we can use the "_" underscore character,
	                       /// which means that we want to call the extrinsic that has the same name as our test (which is do_something).
	verify {
		/// Here we do some light asserting and testing to see that everything went correctly.
		/// We assume that extensive testing was already done in our regular tests, so there is no need to go too deep here.
		...
	}
}

Each benchmark test consists of three things:

  • A head where we prepare the chain to be in a valid state
  • The body where we execute the extrinsic
  • The tail where we do some light asserting to make sure everything went correctly
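
As mentioned in the comments above, instead of the "_" shorthand we can also name the extrinsic explicitly, which allows the benchmark test to carry a different name than the extrinsic it calls. A hypothetical arm that reuses the helpers from the example above:

benchmarks! {
	do_something_max_value {
		prepare_benchmarks::<T>();
		let alice = origin::<T>("ALICE");
		let new_value = u32::MAX;
	// The extrinsic is named explicitly here instead of using "_".
	}: do_something(alice, new_value)
	verify {
		assert_eq!(Custom::<T>::something(), Some(new_value));
	}
}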

impl_benchmark_test_suite

/// This creates a test suite which runs the module's benchmarks.
///
/// When called in `pallet_example_basic` as
///
/// ```rust,ignore
/// impl_benchmark_test_suite!(Pallet, crate::tests::new_test_ext(), crate::tests::Test);
/// ```
///
/// It expands to the equivalent of:
///
/// ```rust,ignore
/// #[cfg(test)]
/// mod tests {
/// 	use super::*;
/// 	use crate::tests::{new_test_ext, Test};
/// 	use frame_support::assert_ok;
///
/// 	#[test]
/// 	fn test_benchmarks() {
/// 		new_test_ext().execute_with(|| {
/// 			assert_ok!(test_benchmark_accumulate_dummy::<Test>());
/// 			assert_ok!(test_benchmark_set_dummy::<Test>());
/// 			assert_ok!(test_benchmark_sort_vector::<Test>());
/// 		});
/// 	}
/// }
/// ```
impl_benchmark_test_suite!(Custom, crate::mock::new_test_ext(), crate::mock::Test);

This is just standard boilerplate code.

Helper functions

pub fn get_account<T: Config>(name: &'static str) -> T::AccountId {
	...
}
pub fn origin<T: Config>(name: &'static str) -> RawOrigin<T::AccountId> {
	...
}
pub fn prepare_benchmarks<T: Config>() {
	...
}

Those are some helper functions that make the code easier to write and understand.

Additional documentation

It's highly recommended to read the extended documentation of all of these Substrate macros.

Debugging benchmarks

To log messages inside benchmarks, you can use the println! macro. For example:

benchmarks! {
	my_personal_bench {
		let x in 0 .. 100;
		println!("Pre-bench: X: {:?}", x);
	}: {
		println!("This will be benched");
	}
}

To make the log messages visible on the screen, run the following command:

cargo test --all-features -- --show-output
# This also works
# cargo test --all-features -- --nocapture

Once run, the printed messages will show up in the test output.

How to run benchmarks

You can check whether the benchmark tests compile by running the following command:

cargo test --all-features

Automatic benchmarking

The easiest way to run the benchmarks is to run the ./scripts/run_benchmarks.sh script. It provides a nice user interface that lets you either run the benchmarks or list the pallets that can be benched. If you choose to run them, you can select the speed at which to run them and which pallets to bench.


Manual benchmarking

First we need to build our binary with the runtime-benchmarks feature enabled:

cargo build --release --locked --features=runtime-benchmarks

After that, we run the benchmarks by executing this command:

./target/release/seed benchmark pallet --chain dev --steps=50 --repeat=20 --pallet="$PALLET" --extrinsic="*" --wasm-execution=compiled --heap-pages=4096 --output $OUTPUT_FOLDER

Where:

  • "$PALLET" -> either "*" for all pallets, or something like "pallet_balances pallet_utility" to bench just specific pallets.
  • "$OUTPUT_FOLDER" -> the folder that will contain the generated weights.
  • "--repeat=20" -> each benchmark will be repeated 20 times. Use a lower value if you just want to test that it works (see the example below).
  • "--steps=50" -> the number of sample points taken across the range of each benchmark's variable components. Use a lower value if you just want to test that it works.