Enum criterion::BatchSize
Argument to Bencher::iter_batched and Bencher::iter_batched_ref which controls the batch size.
Generally speaking, almost all benchmarks should use SmallInput. If the input or the result of the benchmark routine is large enough that SmallInput causes out-of-memory errors, LargeInput can be used to reduce memory usage at the cost of increasing the measurement overhead. If the input or the result is extremely large (or if it holds some limited external resource like a file handle), PerIteration will set the number of iterations per batch to exactly one. PerIteration can increase the measurement overhead substantially and should be avoided wherever possible.
Each value lists an estimate of the measurement overhead. This is intended as a rough guide to assist in choosing an option; it should not be relied upon. In particular, it is not valid to subtract the listed overhead from the measurement and assume that the result represents the true runtime of a function. The actual measurement overhead for your specific benchmark depends on the details of the function you're benchmarking and the hardware and operating system running the benchmark.

With that said, if the runtime of your function is small relative to the measurement overhead, it will be difficult to take accurate measurements. In this situation, the best option is to use Bencher::iter, which has next-to-zero measurement overhead.
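To make the common case concrete, here is a minimal sketch of a benchmark using Bencher::iter_batched with SmallInput. The benchmark name, input size, and sort workload are illustrative assumptions, not part of the criterion API:

```rust
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};

fn bench_sort(c: &mut Criterion) {
    c.bench_function("sort_vec", |b| {
        b.iter_batched(
            // Setup: runs outside the timed region; builds one fresh input
            // per iteration in the batch.
            || (0..1_000u64).rev().collect::<Vec<u64>>(),
            // Routine: the code under measurement; it consumes the input by value.
            |mut v| {
                v.sort();
                v // return the result so criterion drops it outside the timing loop
            },
            // SmallInput: millions of these inputs can safely be held in memory.
            BatchSize::SmallInput,
        )
    });
}

criterion_group!(benches, bench_sort);
criterion_main!(benches);
```

Because the setup closure runs outside the timed region, the cost of building each Vec is excluded from the measurement. If the routine needs no per-iteration input at all, b.iter(|| ...) sidesteps batching entirely, as noted above.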
Variants
SmallInput

Indicates that the input to the benchmark routine (the value returned from the setup routine) is small enough that millions of values can be safely held in memory. Always prefer SmallInput unless the benchmark is using too much memory.

In testing, the maximum measurement overhead from benchmarking with SmallInput is on the order of 500 picoseconds. This is presented as a rough guide; your results may vary.
LargeInput

Indicates that the input to the benchmark routine or the value returned from that routine is large. This will reduce the memory usage but increase the measurement overhead.

In testing, the maximum measurement overhead from benchmarking with LargeInput is on the order of 750 picoseconds. This is presented as a rough guide; your results may vary.
PerIteration

Indicates that the input to the benchmark routine or the value returned from that routine is extremely large or holds some limited resource, such that holding many values in memory at once is infeasible. This provides the worst measurement overhead, but the lowest memory usage. A sketch of this case follows the variant list below.

In testing, the maximum measurement overhead from benchmarking with PerIteration is on the order of 350 nanoseconds (350,000 picoseconds). This is presented as a rough guide; your results may vary.
NumBatches(u64)

NumBatches will attempt to divide the iterations up into the given number of batches. A larger number of batches (and thus smaller batches) will reduce memory usage but increase measurement overhead. This allows the user to choose their own tradeoff between memory usage and measurement overhead, but care must be taken in tuning the number of batches. Most benchmarks should use SmallInput or LargeInput instead.
NumIterations(u64)

NumIterations fixes the batch size to a constant number, specified by the user. This allows the user to choose their own tradeoff between overhead and memory usage, but care must be taken in tuning the batch size. In general, the measurement overhead of NumIterations will be larger than that of NumBatches. Most benchmarks should use SmallInput or LargeInput instead.
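As a sketch of the limited-resource case described under PerIteration, the following benchmark opens a fresh file handle for each iteration via Bencher::iter_batched_ref, so only one handle is alive at a time. The file path and benchmark name are assumptions for illustration (the path is Linux-specific):

```rust
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};
use std::fs::File;
use std::io::Read;

fn bench_read(c: &mut Criterion) {
    c.bench_function("read_file", |b| {
        b.iter_batched_ref(
            // Setup: open a fresh handle per iteration, outside the timed region.
            || File::open("/etc/hostname").expect("failed to open file"),
            // Routine: receives the value by mutable reference.
            |file| {
                let mut buf = String::new();
                file.read_to_string(&mut buf).expect("read failed");
                buf.len()
            },
            // PerIteration: exactly one handle exists at any time.
            BatchSize::PerIteration,
        )
    });
}

criterion_group!(benches, bench_read);
criterion_main!(benches);
```

The manually tuned variants are constructed with a value, e.g. BatchSize::NumBatches(10) or BatchSize::NumIterations(1_000), and are passed in the same position when neither preset fits.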
Trait Implementations
impl Clone for BatchSize
impl Copy for BatchSize
impl Debug for BatchSize
impl Eq for BatchSize
impl Hash for BatchSize
fn hash<__H: Hasher>(&self, state: &mut __H)
pub fn hash_slice<H>(data: &[Self], state: &mut H) where H: Hasher
impl PartialEq<BatchSize> for BatchSize
impl StructuralEq for BatchSize
impl StructuralPartialEq for BatchSize
Auto Trait Implementations
impl RefUnwindSafe for BatchSize
impl Send for BatchSize
impl Sync for BatchSize
impl Unpin for BatchSize
impl UnwindSafe for BatchSize
Blanket Implementations
impl<T> Any for T where T: 'static + ?Sized
impl<T> Borrow<T> for T where T: ?Sized
impl<T> BorrowMut<T> for T where T: ?Sized
pub fn borrow_mut(&mut self) -> &mut T
impl<T> From<T> for T
impl<T, U> Into<U> for T where U: From<T>
impl<T> Pointable for T
pub const ALIGN: usize
type Init = T
The type for initializers.
pub unsafe fn init(init: <T as Pointable>::Init) -> usize
pub unsafe fn deref<'a>(ptr: usize) -> &'a T
pub unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T
pub unsafe fn drop(ptr: usize)
impl<T> ToOwned for T where T: Clone
type Owned = T
The resulting type after obtaining ownership.
pub fn to_owned(&self) -> T
pub fn clone_into(&self, target: &mut T)
impl<T, U> TryFrom<U> for T where U: Into<T>
type Error = Infallible
The type returned in the event of a conversion error.
pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
impl<T, U> TryInto<U> for T where U: TryFrom<T>