Today I’ve released Criterion 0.3.4 and Iai 0.1.0. I’m particularly excited by Iai, so read on to find out what I’ve been up to.
Criterion Updates

The main new feature in this release is that Criterion.rs now has built-in support for benchmarking async functions.
This feature requires the async feature to be enabled. In addition, four other features - async_std, async_tokio, async_smol, and async_futures - can be enabled to add support for benchmarking with the respective futures executors.
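As a sketch of how this fits together, here is a minimal async benchmark using the plain futures executor (so only the async_futures feature is needed); the double function is just a placeholder workload, not anything from the release itself:

```rust
use criterion::async_executor::FuturesExecutor;
use criterion::{criterion_group, criterion_main, Criterion};

// Placeholder async workload; any function returning a Future works here.
async fn double(n: u64) -> u64 {
    n * 2
}

fn bench_async(c: &mut Criterion) {
    c.bench_function("double", |b| {
        // to_async runs each iteration of the benchmark on the chosen executor.
        b.to_async(FuturesExecutor).iter(|| double(2))
    });
}

criterion_group!(benches, bench_async);
criterion_main!(benches);
```

Swapping in a different runtime should just mean enabling the matching feature and passing a different executor to to_async.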
I’m pleased to announce the release of Criterion.rs v0.3, available today. Version 0.3 provides a number of new features, including preliminary support for plugging in custom measurements (e.g. hardware timers or POSIX CPU time), hooks to start and stop profilers, a new BenchmarkGroup struct that provides more flexibility than the older Benchmark and ParameterizedBenchmark structs, and an implementation of a #[criterion] custom-test-framework macro for those on Nightly.
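To give a feel for the new struct, here is a rough sketch of comparing one routine across several input sizes with BenchmarkGroup; the sum function and the sizes are made up for illustration:

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};

// Toy routine to measure across several input sizes.
fn sum(data: &[u64]) -> u64 {
    data.iter().sum()
}

fn bench_sums(c: &mut Criterion) {
    // A group collects related benchmarks under one name.
    let mut group = c.benchmark_group("sum");
    for &size in &[1024usize, 4096, 16384] {
        let data = vec![1u64; size];
        // Per-input throughput so the report shows elements/second.
        group.throughput(Throughput::Elements(size as u64));
        group.bench_with_input(BenchmarkId::from_parameter(size), &data, |b, data| {
            b.iter(|| sum(data))
        });
    }
    group.finish();
}

criterion_group!(benches, bench_sums);
criterion_main!(benches);
```

Everything that previously required choosing between Benchmark and ParameterizedBenchmark can be expressed with a group along these lines.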
What is Criterion.rs?

Criterion.rs is a statistics-driven benchmarking library for Rust.
I’m pleased to announce the release of Criterion.rs v0.2, available today. Version 0.2 provides a number of new features including HTML reports and throughput measurements, fixes a handful of bugs, and adds a new, more powerful way to configure and construct your benchmarks. It also breaks backwards compatibility with the 0.1 versions in a number of small but important ways. Read on to learn more!
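For example, a throughput benchmark in the 0.2 style looked roughly like this. This is a sketch from memory of the 0.2-era API (the exact Throughput variants and integer widths changed in later versions), and the checksum routine is invented for illustration:

```rust
use criterion::{criterion_group, criterion_main, Benchmark, Criterion, Throughput};

const BYTES: usize = 1024;

// Toy routine that touches every byte of its input.
fn checksum(data: &[u8]) -> u64 {
    data.iter().map(|&b| b as u64).sum()
}

fn bench(c: &mut Criterion) {
    let data = vec![0u8; BYTES];
    c.bench(
        "checksum",
        Benchmark::new("sum_bytes", move |b| b.iter(|| checksum(&data)))
            // Declaring throughput makes the report show bytes/second.
            .throughput(Throughput::Bytes(BYTES as u32)),
    );
}

criterion_group!(benches, bench);
criterion_main!(benches);
```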
When I initially announced the release of Criterion.rs, I didn’t expect so much demand for benchmarking on stable Rust. Now I’d like to announce the release of Criterion.rs 0.1.2, which supports the stable compiler. This post is an introduction to benchmarking with Criterion.rs and a discussion of why you might, or might not, want to do so.
What is Criterion.rs?

Criterion.rs is a benchmarking library for Rust that aims to bring solid statistical confidence to benchmarking Rust code while remaining easy to use, even for programmers without a background in statistics.
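A minimal benchmark looks something like the following. This is essentially the canonical getting-started example; the deliberately naive fibonacci is just a measurable workload:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Deliberately naive so there is something worth measuring.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn criterion_benchmark(c: &mut Criterion) {
    // black_box keeps the compiler from constant-folding the input away.
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
```

Running cargo bench then produces an estimate of the routine’s runtime, with confidence intervals and a comparison against the previous run.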
After I released the first version of Criterion.rs (a statistics-driven benchmarking tool for Rust), I was asked about using it to detect performance regressions as part of a cloud-based continuous-integration (CI) pipeline such as Travis CI or AppVeyor. That got me wondering - does it even make sense to do that?
Cloud CI pipelines have a lot of potential to introduce noise into the benchmarking process - unpredictable load on the physical hosts of the build VMs, or even unpredictable migrations of VMs between physical hosts.