Benchmarking¶
Introduction¶
Benchmarking is a critical component of developing efficient and secure blockchain runtimes. In the Polkadot ecosystem, accurately benchmarking your custom pallets ensures that each extrinsic has a precise weight, representing its computational and storage demands. This process is vital for maintaining the blockchain's performance and preventing potential vulnerabilities, such as Denial of Service (DoS) attacks.
The Polkadot SDK leverages the FRAME benchmarking framework, offering tools to measure and assign weights to extrinsics. These weights help determine the maximum number of transactions or system-level calls processed within a block. This guide covers how to use FRAME's benchmarking framework, from setting up your environment to writing and running benchmarks for your custom pallets. By the end, you'll understand how to generate accurate weights, ensuring your runtime remains performant and secure.
The Case for Benchmarking¶
Benchmarking helps validate that the required execution time for different functions is within reasonable boundaries, ensuring your blockchain runtime can handle transactions efficiently and securely. By accurately measuring the weight of each extrinsic, you can prevent service interruptions caused by computationally intensive calls that exceed block time limits. Without benchmarking, the runtime could be vulnerable to DoS attacks, where malicious users exploit functions with unoptimized weights.
Benchmarking also ensures predictable transaction fees. Weights derived from benchmark tests accurately reflect the resource usage of function calls, allowing fair fee calculation. This approach discourages abuse while maintaining network reliability.
Benchmarking and Weight¶
In Polkadot SDK-based chains, weight quantifies the computational effort needed to process transactions. This weight includes factors such as:
- Computational complexity
- Storage complexity (proof size)
- Database reads and writes
- Hardware specifications
Benchmarking uses real-world testing to simulate worst-case scenarios for extrinsics. The framework generates a linear model for weight calculation by running multiple iterations with varied parameters. These worst-case weights ensure blocks remain within execution limits, enabling the runtime to maintain throughput under varying loads. Excess fees can be refunded if a call uses fewer resources than expected, offering users a fair cost model.
Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specifications of hardware used for benchmarking. By modeling the expected weight of each runtime function, the blockchain can calculate the number of transactions or system-level calls it can execute within a certain period.
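For intuition, `Weight` in FRAME is a two-dimensional value. A minimal sketch of constructing and inspecting one (the numbers are illustrative, not benchmarked):

```rust
use frame_support::weights::Weight;

fn main() {
    // `ref_time` measures execution time on reference hardware (picoseconds);
    // `proof_size` measures the state proof footprint in bytes.
    let weight = Weight::from_parts(10_000_000, 3_500);
    assert_eq!(weight.ref_time(), 10_000_000);
    assert_eq!(weight.proof_size(), 3_500);
}
```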
Within FRAME, each function call that is dispatched must have a `#[pallet::weight]` annotation that can return the expected weight for the worst-case scenario execution of that function given its inputs:
```rust
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::do_something())]
pub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo {
    // Extrinsic logic elided.
    Ok(().into())
}
```
The `WeightInfo` file is generated automatically during benchmarking. Based on these tests, this file provides accurate weights for each extrinsic.
Benchmarking Process¶
Benchmarking a pallet involves the following steps:
- Creating a `benchmarking.rs` file within your pallet's structure
- Writing a benchmarking test for each extrinsic
- Executing the benchmarking tool to calculate weights based on performance metrics
The benchmarking tool runs multiple iterations to model worst-case execution times and determine the appropriate weight. By default, the benchmarking pipeline is deactivated. To activate it, compile your runtime with the `runtime-benchmarks` feature flag.
Prepare Your Environment¶
Before writing benchmark tests, you need to ensure the `frame-benchmarking` crate is included in your pallet's `Cargo.toml`, similar to the following:
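A minimal sketch of that dependency entry (the `workspace = true` form assumes your workspace already pins FRAME crate versions; otherwise specify a version explicitly):

```toml
[dependencies]
# Optional, so the crate is only compiled when the
# `runtime-benchmarks` feature is enabled.
frame-benchmarking = { workspace = true, default-features = false, optional = true }
```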
You must also add the `runtime-benchmarks` feature flag as follows under the `[features]` section of your pallet's `Cargo.toml`:
```toml
runtime-benchmarks = [
    "frame-benchmarking/runtime-benchmarks",
    "frame-support/runtime-benchmarks",
    "frame-system/runtime-benchmarks",
    "sp-runtime/runtime-benchmarks",
]
```
Lastly, ensure that `frame-benchmarking` is included in `std = []`:
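A sketch of that entry; the `?` syntax ensures the optional `frame-benchmarking` dependency is not force-enabled by `std` builds:

```toml
std = [
    # ...existing entries...
    "frame-benchmarking?/std",
]
```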
Once complete, you have the required dependencies for writing benchmark tests for your pallet.
Write Benchmark Tests¶
Create a `benchmarking.rs` file in your pallet's `src/` directory. Your directory structure should look similar to the following:
```text
my-pallet/
├── src/
│   ├── lib.rs          # Main pallet implementation
│   └── benchmarking.rs # Benchmarking
└── Cargo.toml
```
With the directory structure set, you can use the `polkadot-sdk-parachain-template` to get started as follows:
```rust
//! Benchmarking setup for pallet-template
#![cfg(feature = "runtime-benchmarks")]
use super::*;
use frame_benchmarking::v2::*;

#[benchmarks]
mod benchmarks {
    use super::*;
    #[cfg(test)]
    use crate::pallet::Pallet as Template;
    use frame_system::RawOrigin;

    #[benchmark]
    fn do_something() {
        let caller: T::AccountId = whitelisted_caller();

        #[extrinsic_call]
        do_something(RawOrigin::Signed(caller), 100);

        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(100u32.into()));
    }

    #[benchmark]
    fn cause_error() {
        Something::<T>::put(CompositeStruct { block_number: 100u32.into() });
        let caller: T::AccountId = whitelisted_caller();

        #[extrinsic_call]
        cause_error(RawOrigin::Signed(caller));

        assert_eq!(Something::<T>::get().map(|v| v.block_number), Some(101u32.into()));
    }

    impl_benchmark_test_suite!(Template, crate::mock::new_test_ext(), crate::mock::Test);
}
```
In your benchmarking tests, employ these best practices:

- **Write custom testing functions** - the function `do_something` in the preceding example is a placeholder. Similar to writing unit tests, you must write custom functions to benchmark test your extrinsics. Access the mock runtime and use functions such as `whitelisted_caller()` to sign transactions and facilitate testing
- **Use the `#[extrinsic_call]` macro** - this macro is used when calling the extrinsic itself and is a required part of a benchmarking function. See the `extrinsic_call` Rust docs for more details
- **Validate extrinsic behavior** - the `assert_eq!` expression ensures that the extrinsic is working properly within the benchmark context
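Many extrinsics have costs that scale with an input. The benchmarking v2 API supports this through `Linear<A, B>` components: the tool samples the range (controlled by `--steps`) and fits a linear weight model. A hypothetical sketch, assuming an extrinsic `do_many` that processes `n` items, placed inside the `#[benchmarks]` module shown above:

```rust
#[benchmark]
fn do_many(n: Linear<1, 100>) {
    // The tool varies `n` across [1, 100] and measures each sample.
    let caller: T::AccountId = whitelisted_caller();

    #[extrinsic_call]
    do_many(RawOrigin::Signed(caller), n);

    // Assert on the expected worst-case post-state here.
}
```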
Add Benchmarks to Runtime¶
Before running the benchmarking tool, you must integrate benchmarks with your runtime as follows:
1. Create a `benchmarks.rs` file in your runtime. This file should contain the `define_benchmarks!` macro, which registers all pallets for benchmarking, as well as their respective configurations:

    ```rust
    // benchmarks.rs
    frame_benchmarking::define_benchmarks!(
        [frame_system, SystemBench::<Runtime>]
        [pallet_parachain_template, TemplatePallet]
        [pallet_balances, Balances]
        [pallet_session, SessionBench::<Runtime>]
        [pallet_timestamp, Timestamp]
        [pallet_message_queue, MessageQueue]
        [pallet_sudo, Sudo]
        [pallet_collator_selection, CollatorSelection]
        [cumulus_pallet_parachain_system, ParachainSystem]
        [cumulus_pallet_xcmp_queue, XcmpQueue]
    );
    ```

    For example, to register a pallet named `pallet_parachain_template` for benchmarking, add it as follows:

    ```rust
    // benchmarks.rs
    frame_benchmarking::define_benchmarks!(
        [frame_system, SystemBench::<Runtime>]
        [pallet_parachain_template, TemplatePallet]
    );
    ```

    **Updating the `define_benchmarks!` macro is required.** If the pallet isn't included in the `define_benchmarks!` macro, the CLI cannot access and benchmark it later.
2. Navigate to the runtime's `lib.rs` file and add the import for `benchmarks.rs` as follows. The `runtime-benchmarks` feature gate ensures benchmark tests are isolated from production runtime code.
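    A minimal sketch of that import, gated behind the feature flag:

    ```rust
    // lib.rs
    #[cfg(feature = "runtime-benchmarks")]
    mod benchmarks;
    ```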
Run Benchmarks¶
You can now compile your runtime with the `runtime-benchmarks` feature flag. This feature flag is crucial, as the benchmarking tool requires a runtime compiled with it in order to run benchmark tests. Follow these steps to compile the runtime with benchmarking enabled:
1. Run `build` with the feature flag included:
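    A sketch of the build command, assuming a standard Cargo setup:

    ```bash
    cargo build --release --features runtime-benchmarks
    ```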
2. Once compiled, run the benchmarking tool to measure extrinsic weights:

    ```bash
    ./target/release/INSERT_NODE_BINARY_NAME benchmark pallet \
        --runtime INSERT_PATH_TO_WASM_RUNTIME \
        --pallet INSERT_NAME_OF_PALLET \
        --extrinsic '*' \
        --steps 20 \
        --repeat 10 \
        --output weights.rs
    ```
Flag definitions:

- `--runtime` - the path to your runtime's Wasm
- `--pallet` - the name of the pallet you wish to benchmark. This pallet must be configured in your runtime and defined in `define_benchmarks!`
- `--extrinsic` - which extrinsic to test. Using `'*'` implies all extrinsics will be benchmarked
- `--steps` - how many sample points to take across each component's range when fitting the linear model
- `--repeat` - how many times each benchmark is repeated per sample
- `--output` - where the output of the auto-generated weights will reside
The generated `weights.rs` file contains weight annotations for your extrinsics, ready to be added to your pallet. The output should be similar to the following (some output is omitted for brevity):
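The exact contents depend on your pallet and the machine you benchmark on; a trimmed, illustrative sketch of the generated file looks roughly like this (the numbers below are placeholders, not real measurements):

```rust
use core::marker::PhantomData;
use frame_support::{traits::Get, weights::Weight};

/// Weight functions needed for the pallet.
pub trait WeightInfo {
    fn do_something() -> Weight;
    fn cause_error() -> Weight;
}

/// Weights generated for a specific machine.
pub struct SubstrateWeight<T>(PhantomData<T>);
impl<T: frame_system::Config> WeightInfo for SubstrateWeight<T> {
    fn do_something() -> Weight {
        // Placeholder values: (ref_time, proof_size) plus one storage write.
        Weight::from_parts(9_000_000, 0)
            .saturating_add(T::DbWeight::get().writes(1_u64))
    }
    fn cause_error() -> Weight {
        Weight::from_parts(6_000_000, 1_489)
            .saturating_add(T::DbWeight::get().reads(1_u64))
            .saturating_add(T::DbWeight::get().writes(1_u64))
    }
}
```

Generated files also typically include an `impl WeightInfo for ()` fallback for use in tests.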
Add Benchmark Weights to Pallet¶
Once `weights.rs` is generated, you may add the generated weights to your pallet. It is common for `weights.rs` to become part of your pallet's root in `src/`:
```rust
use crate::weights::WeightInfo;

/// Configure the pallet by specifying the parameters and types on which it depends.
#[pallet::config]
pub trait Config: frame_system::Config {
    /// A type representing the weights required by the dispatchables of this pallet.
    type WeightInfo: WeightInfo;
}
```
You can then reference these weights in the extrinsic's `#[pallet::weight]` annotation via the `Config` trait:
```rust
#[pallet::call_index(0)]
#[pallet::weight(T::WeightInfo::do_something())]
pub fn do_something(origin: OriginFor<T>) -> DispatchResultWithPostInfo {
    // Extrinsic logic elided.
    Ok(().into())
}
```
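Finally, the runtime that includes your pallet must select a concrete `WeightInfo` implementation when configuring it. A hypothetical sketch (the pallet name is illustrative, and your `Config` implementation will include additional associated types):

```rust
impl pallet_parachain_template::Config for Runtime {
    // ...other associated types elided...
    type WeightInfo = pallet_parachain_template::weights::SubstrateWeight<Runtime>;
}
```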
Where to Go Next¶
- View the Rust Docs for a more comprehensive, low-level view of the FRAME V2 Benchmarking Suite
- Read the FRAME Benchmarking and Weights reference document, a concise guide which details how weights and benchmarking work