Struct criterion::Bencher
Timer struct used to iterate a benchmarked function and measure the runtime.
This struct provides different timing loops as methods. Each timing loop provides a different way to time a routine and each has advantages and disadvantages.
- If you want to do the iteration and measurement yourself (e.g. passing the iteration count to a separate process), use iter_custom.
- If your routine requires no per-iteration setup and returns a value with an expensive drop method, use iter_with_large_drop.
- If your routine requires some per-iteration setup that shouldn't be timed, use iter_batched or iter_batched_ref. See BatchSize for a discussion of batch sizes. If the setup value implements Drop and you don't want to include the drop time in the measurement, use iter_batched_ref, otherwise use iter_batched. These methods are also suitable for benchmarking routines which return a value with an expensive drop method, but are more complex than iter_with_large_drop.
- Otherwise, use iter.
Implementations
impl<'a, M: Measurement> Bencher<'a, M>
pub fn iter<O, R>(&mut self, routine: R) where
    R: FnMut() -> O,
Times a routine by executing it many times and timing the total elapsed time.

Prefer this timing loop when routine returns a value that doesn't have a destructor.
Timing model
Note that the Bencher also times the time required to destroy the output of routine(). Therefore prefer this timing loop when the runtime of mem::drop(O) is negligible compared to the runtime of the routine.
elapsed = Instant::now + iters * (routine + mem::drop(O) + Range::next)
Example
#[macro_use] extern crate criterion;

use criterion::*;

// The function to benchmark
fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.iter(|| foo())
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
pub fn iter_custom<R>(&mut self, routine: R) where
    R: FnMut(u64) -> M::Value,
Times a routine by executing it many times and relying on routine to measure its own execution time.

Prefer this timing loop in cases where routine has to do its own measurements to get accurate timing information (for example in multi-threaded scenarios where you spawn and coordinate with multiple threads).
Timing model
Custom; the timing model is whatever Duration is returned from routine.
Example
#[macro_use] extern crate criterion;

use criterion::*;
use criterion::black_box;
use std::time::Instant;

fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.iter_custom(|iters| {
            let start = Instant::now();
            for _i in 0..iters {
                black_box(foo());
            }
            start.elapsed()
        })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
pub fn iter_with_large_drop<O, R>(&mut self, routine: R) where
    R: FnMut() -> O,
Times a routine by collecting its output on each iteration. This avoids timing the destructor of the value returned by routine.

WARNING: This requires O(iters * mem::size_of::<O>()) of memory, and iters is not under the control of the caller. If this causes out-of-memory errors, use iter_batched instead.
Timing model
elapsed = Instant::now + iters * (routine) + Iterator::collect::<Vec<_>>
Example
#[macro_use] extern crate criterion;

use criterion::*;

fn create_vector() -> Vec<u64> {
    // ... (placeholder data so the example compiles)
    vec![0; 1024]
}

fn bench(c: &mut Criterion) {
    c.bench_function("with_drop", move |b| {
        // This will avoid timing the Vec::drop.
        b.iter_with_large_drop(|| create_vector())
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
pub fn iter_batched<I, O, S, R>(
    &mut self,
    setup: S,
    routine: R,
    size: BatchSize
) where
    S: FnMut() -> I,
    R: FnMut(I) -> O,
Times a routine that requires some input by generating a batch of input, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine must consume its input.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.
Timing model
elapsed = (Instant::now * num_batches) + (iters * (routine + O::drop)) + Vec::extend
Example
#[macro_use] extern crate criterion;

use criterion::*;

fn create_scrambled_data() -> Vec<u64> {
    // ... (placeholder unsorted data so the example compiles)
    vec![5, 3, 9, 1, 7]
}

// The sorting algorithm to test
fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This will avoid timing the data.clone() call.
        b.iter_batched(|| data.clone(), |mut data| sort(&mut data), BatchSize::SmallInput)
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
pub fn iter_batched_ref<I, O, S, R>(
    &mut self,
    setup: S,
    routine: R,
    size: BatchSize
) where
    S: FnMut() -> I,
    R: FnMut(&mut I) -> O,
Times a routine that requires some input by generating a batch of input, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine should accept the input by mutable reference.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.
Timing model
elapsed = (Instant::now * num_batches) + (iters * routine) + Vec::extend
Example
#[macro_use] extern crate criterion;

use criterion::*;

fn create_scrambled_data() -> Vec<u64> {
    // ... (placeholder unsorted data so the example compiles)
    vec![5, 3, 9, 1, 7]
}

// The sorting algorithm to test
fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This will avoid timing the data.clone() call; the routine receives the
        // input by mutable reference, so dropping it is not timed either.
        b.iter_batched_ref(|| data.clone(), |data| sort(data), BatchSize::SmallInput)
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
pub fn to_async<'b, A: AsyncExecutor>(
    &'b mut self,
    runner: A
) -> AsyncBencher<'a, 'b, A, M>
Convert this bencher into an AsyncBencher, which enables async/await support.
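No async example appears on this page; the following is a minimal sketch of the typical shape, assuming the async_futures cargo feature is enabled so that criterion::async_executor::FuturesExecutor is available (any type implementing AsyncExecutor is used the same way).

#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor; // assumes the `async_futures` feature

// The async function to benchmark
async fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("async_iter", move |b| {
        // to_async wraps the bencher; AsyncBencher then drives the future
        // returned by the closure on the chosen executor for each timed iteration.
        b.to_async(FuturesExecutor).iter(|| foo())
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);

The timing loops on AsyncBencher mirror the ones described above, so the same guidance for choosing between them applies.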
Auto Trait Implementations
impl<'a, M> RefUnwindSafe for Bencher<'a, M> where
M: RefUnwindSafe,
<M as Measurement>::Value: RefUnwindSafe,
impl<'a, M> Send for Bencher<'a, M> where
M: Sync,
<M as Measurement>::Value: Send,
impl<'a, M> Sync for Bencher<'a, M> where
M: Sync,
<M as Measurement>::Value: Sync,
impl<'a, M> Unpin for Bencher<'a, M> where
<M as Measurement>::Value: Unpin,
impl<'a, M> UnwindSafe for Bencher<'a, M> where
M: RefUnwindSafe,
<M as Measurement>::Value: UnwindSafe,
Blanket Implementations
impl<T> Any for T where
T: 'static + ?Sized,
impl<T> Borrow<T> for T where
T: ?Sized,
impl<T> BorrowMut<T> for T where
T: ?Sized,
pub fn borrow_mut(&mut self) -> &mut T
impl<T> From<T> for T
impl<T, U> Into<U> for T where
U: From<T>,
impl<T> Pointable for T
pub const ALIGN: usize
type Init = T
The type for initializers.
pub unsafe fn init(init: <T as Pointable>::Init) -> usize
pub unsafe fn deref<'a>(ptr: usize) -> &'a T
pub unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T
pub unsafe fn drop(ptr: usize)
impl<T, U> TryFrom<U> for T where
U: Into<T>,
type Error = Infallible
The type returned in the event of a conversion error.
pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
impl<T, U> TryInto<U> for T where
U: TryFrom<T>,