Struct criterion::AsyncBencher

pub struct AsyncBencher<'a, 'b, A: AsyncExecutor, M: Measurement = WallTime> { /* fields omitted */ }

Async/await variant of the Bencher struct.
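
An AsyncBencher is obtained from a Bencher by calling Bencher::to_async with an executor, as the examples below do. A minimal sketch, using the FuturesExecutor that the examples also use (any type implementing AsyncExecutor works the same way):

#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

fn bench(c: &mut Criterion) {
    c.bench_function("async_noop", |b| {
        // `to_async` turns the synchronous Bencher into an AsyncBencher.
        b.to_async(FuturesExecutor).iter(|| async { /* ... */ })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);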

Implementations

impl<'a, 'b, A: AsyncExecutor, M: Measurement> AsyncBencher<'a, 'b, A, M>

pub fn iter<O, R, F>(&mut self, routine: R) where
    R: FnMut() -> F,
    F: Future<Output = O>, 

Times a routine by executing it many times and timing the total elapsed time.

Prefer this timing loop when routine returns a value that doesn’t have a destructor.

Timing model

Note that the AsyncBencher also times the time required to destroy the output of routine(). Therefore prefer this timing loop when the runtime of mem::drop(O) is negligible compared to the runtime of the routine.

elapsed = Instant::now + iters * (routine + mem::drop(O) + Range::next)
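
As a hedged illustration only (not criterion's actual implementation), the model above corresponds to a loop of the following shape, with the output's destructor running inside the timed region:

use std::future::Future;
use std::time::{Duration, Instant};

// Sketch of the `iter` timing model; `block_on` stands in for the
// AsyncExecutor driving each future to completion.
fn timing_model<O, F: Future<Output = O>>(
    iters: u64,
    mut routine: impl FnMut() -> F,
    block_on: impl Fn(F) -> O,
) -> Duration {
    let start = Instant::now();
    for _ in 0..iters {
        let output = block_on(routine()); // run the async routine to completion
        drop(output);                     // mem::drop(O) happens inside the timed region
    }
    start.elapsed()
}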

Example

#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

// The function to benchmark
async fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.to_async(FuturesExecutor).iter(|| async { foo().await } )
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
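
Because foo() already returns a future, the extra async block in the closure above is optional; passing the function directly should be equivalent:

b.to_async(FuturesExecutor).iter(foo)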

pub fn iter_custom<R, F>(&mut self, routine: R) where
    R: FnMut(u64) -> F,
    F: Future<Output = M::Value>, 

Times a routine by executing it many times and relying on routine to measure its own execution time.

Prefer this timing loop in cases where routine has to do its own measurements to get accurate timing information (for example in multi-threaded scenarios where you spawn and coordinate with multiple threads).

Timing model

Custom: the timing model is whatever measurement value routine returns (a Duration under the default WallTime measurement).

Example

#[macro_use] extern crate criterion;
use criterion::*;
use criterion::black_box;
use criterion::async_executor::FuturesExecutor;
use std::time::Instant;

async fn foo() {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("iter", move |b| {
        b.to_async(FuturesExecutor).iter_custom(|iters| {
            async move {
                let start = Instant::now();
                for _i in 0..iters {
                    black_box(foo().await);
                }
                start.elapsed()
            }
        })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
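
Because the routine reports its own measurement, it can also exclude per-iteration work that should not be timed. A hedged variation of the example above (make_input and process are hypothetical helpers, not part of criterion) accumulates only the time spent awaiting the async work:

#[macro_use] extern crate criterion;
use criterion::*;
use criterion::async_executor::FuturesExecutor;
use std::time::{Duration, Instant};

// Hypothetical helpers for illustration only.
fn make_input() -> Vec<u64> { vec![0; 1024] }
async fn process(input: Vec<u64>) -> u64 { input.iter().sum() }

fn bench(c: &mut Criterion) {
    c.bench_function("timed_section_only", |b| {
        b.to_async(FuturesExecutor).iter_custom(|iters| async move {
            let mut total = Duration::ZERO;
            for _i in 0..iters {
                let input = make_input();        // untimed per-iteration setup
                let start = Instant::now();
                black_box(process(input).await); // only this await is timed
                total += start.elapsed();
            }
            total
        })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);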

pub fn iter_with_large_drop<O, R, F>(&mut self, routine: R) where
    R: FnMut() -> F,
    F: Future<Output = O>, 

Times a routine by collecting its output on each iteration. This avoids timing the destructor of the value returned by routine.

WARNING: This requires O(iters * mem::size_of::<O>()) of memory, and iters is not under the control of the caller. If this causes out-of-memory errors, use iter_batched instead.

Timing model

elapsed = Instant::now + iters * (routine) + Iterator::collect::<Vec<_>>

Example

#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

async fn create_vector() -> Vec<u64> {
    // ...
}

fn bench(c: &mut Criterion) {
    c.bench_function("with_drop", move |b| {
        // This will avoid timing the Vec::drop.
        b.to_async(FuturesExecutor).iter_with_large_drop(|| async { create_vector().await })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
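
If collecting the outputs uses too much memory, the warning above suggests iter_batched. A hedged sketch of the same benchmark with an empty setup; note that, per the iter_batched timing model, each output is then dropped inside the timed loop instead of being collected:

b.to_async(FuturesExecutor).iter_batched(
    || (),
    |_| async { create_vector().await },
    // Each Vec is dropped every iteration rather than collected, so memory
    // stays bounded, at the cost of timing the destructor again.
    BatchSize::SmallInput,
)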

pub fn iter_batched<I, O, S, R, F>(
    &mut self,
    setup: S,
    routine: R,
    size: BatchSize
) where
    S: FnMut() -> I,
    R: FnMut(I) -> F,
    F: Future<Output = O>, 

Times a routine that requires some input by generating a batch of input, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine must consume its input.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.

Timing model

elapsed = (Instant::now * num_batches) + (iters * (routine + O::drop)) + Vec::extend

Example

#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

fn create_scrambled_data() -> Vec<u64> {
    // ...
}

// The sorting algorithm to test
async fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This will avoid timing the clone of the input data.
        b.to_async(FuturesExecutor).iter_batched(|| data.clone(), |mut data| async move { sort(&mut data).await }, BatchSize::SmallInput)
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
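
When even a small batch of inputs is too large to keep in memory at once, BatchSize::PerIteration generates one input per iteration at the cost of higher measurement overhead; a hedged variation of the call above:

b.to_async(FuturesExecutor).iter_batched(
    || data.clone(),
    |mut data| async move { sort(&mut data).await },
    // One input per iteration: minimal memory, highest measurement overhead.
    BatchSize::PerIteration,
)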

pub fn iter_batched_ref<I, O, S, R, F>(
    &mut self,
    setup: S,
    routine: R,
    size: BatchSize
) where
    S: FnMut() -> I,
    R: FnMut(&mut I) -> F,
    F: Future<Output = O>, 

Times a routine that requires some input by generating a batch of input, then timing the iteration of the benchmark over the input. See BatchSize for details on choosing the batch size. Use this when the routine should accept the input by mutable reference.

For example, use this loop to benchmark sorting algorithms, because they require unsorted data on each iteration.

Timing model

elapsed = (Instant::now * num_batches) + (iters * routine) + Vec::extend

Example

#[macro_use] extern crate criterion;

use criterion::*;
use criterion::async_executor::FuturesExecutor;

fn create_scrambled_data() -> Vec<u64> {
    // ...
}

// The sorting algorithm to test
async fn sort(data: &mut [u64]) {
    // ...
}

fn bench(c: &mut Criterion) {
    let data = create_scrambled_data();

    c.bench_function("with_setup", move |b| {
        // This will avoid timing the clone of the input data.
        // The future returned to `iter_batched_ref` cannot borrow the `&mut` input,
        // so the batch is moved out of the reference before sorting.
        b.to_async(FuturesExecutor).iter_batched_ref(
            || data.clone(),
            |data| {
                let mut data = std::mem::take(data);
                async move {
                    sort(&mut data).await;
                    data // return the data so its destructor is not timed
                }
            },
            BatchSize::SmallInput,
        )
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
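
The R and F bounds above do not tie the returned future's type to the lifetime of the &mut input, so the future cannot borrow the input; that is why the example moves the batch out of the reference. When the mutation itself does not need to be async, it can instead be done in the routine before the future is built; a hedged sketch (scramble_in_place is a hypothetical synchronous helper):

b.to_async(FuturesExecutor).iter_batched_ref(
    || data.clone(),
    |data| {
        scramble_in_place(data); // hypothetical synchronous mutation of the input
        async {}                 // any remaining async work goes here
    },
    BatchSize::SmallInput,
)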

Auto Trait Implementations

impl<'a, 'b, A, M> RefUnwindSafe for AsyncBencher<'a, 'b, A, M> where
    A: RefUnwindSafe,
    M: RefUnwindSafe,
    <M as Measurement>::Value: RefUnwindSafe

impl<'a, 'b, A, M> Send for AsyncBencher<'a, 'b, A, M> where
    A: Send,
    M: Sync,
    <M as Measurement>::Value: Send

impl<'a, 'b, A, M> Sync for AsyncBencher<'a, 'b, A, M> where
    A: Sync,
    M: Sync,
    <M as Measurement>::Value: Sync

impl<'a, 'b, A, M> Unpin for AsyncBencher<'a, 'b, A, M> where
    A: Unpin,
    'a: 'b, 

impl<'a, 'b, A, M = WallTime> !UnwindSafe for AsyncBencher<'a, 'b, A, M>

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized

impl<T> Borrow<T> for T where
    T: ?Sized

impl<T> BorrowMut<T> for T where
    T: ?Sized

impl<T> From<T> for T

impl<T, U> Into<U> for T where
    U: From<T>, 

impl<T> Pointable for T

type Init = T

The type for initializers.

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.