Module: Benchmark
- Defined in:
  lib/better-benchmark.rb,
  lib/better-benchmark/bencher.rb,
  lib/better-benchmark/comparer.rb,
  lib/better-benchmark/comparison-partial.rb
Defined Under Namespace
Classes: Bencher, Comparer, ComparisonPartial
Constant Summary
- BETTER_BENCHMARK_VERSION = '0.8.7'
- DEFAULT_REQUIRED_SIGNIFICANCE = 0.01
Class Method Summary
- .compare_realtime(options = {}, &block1) ⇒ Object
  To use better-benchmark properly, it is important to choose :iterations and :inner_iterations carefully.
- .compare_times(times1, times2, required_significance = DEFAULT_REQUIRED_SIGNIFICANCE) ⇒ Object
  The number of elements in times1 and times2 should be the same.
- .report_on(result) ⇒ Object
- .write_realtime(data_dir, &block) ⇒ Object
Class Method Details
.compare_realtime(options = {}, &block1) ⇒ Object
To use better-benchmark properly, it is important to choose :iterations and :inner_iterations carefully. There are a few things to bear in mind:
(1) Do not set :iterations too high. It should normally be in the range of 10-20, but can be lower; anything over 25 should be considered too high.
(2) Execution time for one run of the blocks under test should not be too small, or random variance will muddle the results. Aim for at least 1.0 seconds per iteration.
(3) Minimize the proportion of each block run spent in warmup (and cooldown), or use :warmup_iterations to eliminate this factor entirely.
In order to achieve these goals, you will need to tweak :inner_iterations based on your situation. The exact number you should use will depend on the strength of the hardware (CPU, RAM, disk), and the amount of work done by the blocks. For code blocks that execute extremely rapidly, you may need hundreds of thousands of :inner_iterations.
# File 'lib/better-benchmark.rb', line 79

def self.compare_realtime( options = {}, &block1 )
  ComparisonPartial.new( block1, options )
end
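A minimal usage sketch under the guidance above. The chained ComparisonPartial#with call that supplies the second block is an assumption (ComparisonPartial's interface is not documented in this section), and both workloads are purely illustrative:

require 'better-benchmark'

# :iterations stays in the recommended 10-20 range; :inner_iterations is
# tuned so one iteration takes at least ~1 second on the test machine.
result = Benchmark.compare_realtime(
  :iterations => 10,
  :inner_iterations => 200_000
) {
  s = ''
  100.times { s << 'abc' }   # implementation A: append in a loop
}.with {
  'abc' * 100                # implementation B: single multiplication
}

Benchmark.report_on( result )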
.compare_times(times1, times2, required_significance = DEFAULT_REQUIRED_SIGNIFICANCE) ⇒ Object
The number of elements in times1 and times2 should be the same.
# File 'lib/better-benchmark.rb', line 27

def self.compare_times( times1, times2, required_significance = DEFAULT_REQUIRED_SIGNIFICANCE )
  r = RSRuby.instance
  wilcox_result = r.wilcox_test( times1, times2 )

  {
    :results1 => {
      :times => times1,
      :mean => r.mean( times1 ),
      :stddev => r.sd( times1 ),
    },
    :results2 => {
      :times => times2,
      :mean => r.mean( times2 ),
      :stddev => r.sd( times2 ),
    },
    :p => wilcox_result[ 'p.value' ],
    :W => wilcox_result[ 'statistic' ][ 'W' ],
    :significant => (
      wilcox_result[ 'p.value' ] < ( required_significance || DEFAULT_REQUIRED_SIGNIFICANCE )
    ),
  }
end
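Because compare_times only needs two equal-length arrays of wall-clock timings, it can be driven directly with data you have gathered yourself. A minimal sketch; the workloads, sample count, and the 0.05 threshold are illustrative, and R plus the rsruby gem must be available, since the method calls RSRuby.instance:

require 'benchmark'          # stdlib, for Benchmark.realtime
require 'better-benchmark'

# Collect ten timings per implementation; equal-length arrays are required.
times1 = Array.new( 10 ) { Benchmark.realtime { 100_000.times { |i| i.to_s } } }
times2 = Array.new( 10 ) { Benchmark.realtime { 100_000.times { |i| "#{i}" } } }

result = Benchmark.compare_times( times1, times2, 0.05 )
result[ :p ]            # Wilcoxon p-value
result[ :significant ]  # true if the p-value is below 0.05

Benchmark.report_on( result )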
.report_on(result) ⇒ Object
# File 'lib/better-benchmark.rb', line 83

def self.report_on( result )
  puts
  puts( "Set 1 mean: %.3f s" % [ result[ :results1 ][ :mean ] ] )
  puts( "Set 1 std dev: %.3f" % [ result[ :results1 ][ :stddev ] ] )
  puts( "Set 2 mean: %.3f s" % [ result[ :results2 ][ :mean ] ] )
  puts( "Set 2 std dev: %.3f" % [ result[ :results2 ][ :stddev ] ] )
  puts "p.value: #{result[ :p ]}"
  puts "W: #{result[ :W ]}"
  puts(
    "The difference (%+.1f%%) %s statistically significant." % [
      ( ( result[ :results2 ][ :mean ] - result[ :results1 ][ :mean ] ) / result[ :results1 ][ :mean ] ) * 100,
      result[ :significant ] ? 'IS' : 'IS NOT'
    ]
  )
end
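For reference, report_on prints the result hash in the fixed format shown in the source above. The numbers in this transcript are invented purely to illustrate the shape of the output:

Set 1 mean: 1.234 s
Set 1 std dev: 0.045
Set 2 mean: 1.102 s
Set 2 std dev: 0.051
p.value: 0.0052
W: 87.0
The difference (-10.7%) IS statistically significant.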