Class: MiniTest::Unit::TestCase
- Extended by:
- Deprecated::HooksCM, Guard
- Includes:
- Assertions, Deprecated::Hooks, Guard, LifecycleHooks
- Defined in:
- lib/minitest/unit.rb,
lib/minitest/benchmark.rb
Overview
Subclass TestCase to create your own tests. Typically you’ll want a TestCase subclass per implementation class.
See MiniTest::Assertions
Direct Known Subclasses
Constant Summary
- PASSTHROUGH_EXCEPTIONS =
[NoMemoryError, SignalException, Interrupt, SystemExit]
- SUPPORTS_INFO_SIGNAL =
Signal.list['INFO']
Constants included from Assertions
Instance Attribute Summary
-
#__name__ ⇒ Object
readonly
:nodoc:.
Class Method Summary
-
.bench_exp(min, max, base = 10) ⇒ Object
Returns a set of ranges stepped exponentially from min to max by powers of base.
-
.bench_linear(min, max, step = 10) ⇒ Object
Returns a set of ranges stepped linearly from min to max by step.
-
.bench_range ⇒ Object
Specifies the ranges used for benchmarking for that class.
-
.benchmark_methods ⇒ Object
Returns the benchmark methods (methods that start with bench_) for that class.
-
.benchmark_suites ⇒ Object
Returns all test suites that have benchmark methods.
-
.current ⇒ Object
:nodoc:.
-
.i_suck_and_my_tests_are_order_dependent! ⇒ Object
Call this at the top of your tests when you absolutely positively need to have ordered tests.
-
.inherited(klass) ⇒ Object
:nodoc:.
-
.make_my_diffs_pretty! ⇒ Object
Make diffs for this TestCase use #pretty_inspect so that the diff in assert_equal can show more detail.
-
.parallelize_me! ⇒ Object
Call this at the top of your tests when you want to run your tests in parallel.
-
.reset ⇒ Object
:nodoc:.
-
.reset_setup_teardown_hooks ⇒ Object
:nodoc:.
-
.test_methods ⇒ Object
:nodoc:.
-
.test_order ⇒ Object
:nodoc:.
-
.test_suites ⇒ Object
:nodoc:.
Instance Method Summary
-
#assert_performance(validation, &work) ⇒ Object
Runs the given work, gathering the times of each run.
-
#assert_performance_constant(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a constant rate (eg, linear slope == 0) within a given threshold.
-
#assert_performance_exponential(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match an exponential curve within a given error threshold.
-
#assert_performance_linear(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a straight line within a given error threshold.
-
#assert_performance_power(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a power curve within a given error threshold.
-
#fit_error(xys) ⇒ Object
Takes an array of x/y pairs and calculates the general R^2 value.
-
#fit_exponential(xs, ys) ⇒ Object
To fit a functional form: y = ae^(bx).
-
#fit_linear(xs, ys) ⇒ Object
Fits the functional form: a + bx.
-
#fit_power(xs, ys) ⇒ Object
To fit a functional form: y = ax^b.
-
#initialize(name) ⇒ TestCase
constructor
:nodoc:.
-
#io ⇒ Object
Return the output IO object.
-
#io? ⇒ Boolean
Have we hooked up the IO yet?
-
#passed? ⇒ Boolean
Returns true if the test passed.
-
#run(runner) ⇒ Object
Runs the tests reporting the status to runner.
-
#setup ⇒ Object
Runs before every test.
-
#sigma(enum, &block) ⇒ Object
Enumerates over enum mapping block if given, returning the sum of the result.
-
#teardown ⇒ Object
Runs after every test.
-
#validation_for_fit(msg, threshold) ⇒ Object
Returns a proc that calls the specified fit method and asserts that the error is within a tolerable threshold.
Methods included from Guard
jruby?, mri?, rubinius?, windows?
Methods included from Deprecated::HooksCM
add_setup_hook, add_teardown_hook, setup_hooks, teardown_hooks
Methods included from Assertions
#_assertions, #_assertions=, #assert, #assert_block, #assert_empty, #assert_equal, #assert_in_delta, #assert_in_epsilon, #assert_includes, #assert_instance_of, #assert_kind_of, #assert_match, #assert_nil, #assert_operator, #assert_output, #assert_predicate, #assert_raises, #assert_respond_to, #assert_same, #assert_send, #assert_silent, #assert_throws, #capture_io, #capture_subprocess_io, #diff, diff, diff=, #exception_details, #flunk, #message, #mu_pp, #mu_pp_for_diff, #pass, #refute, #refute_empty, #refute_equal, #refute_in_delta, #refute_in_epsilon, #refute_includes, #refute_instance_of, #refute_kind_of, #refute_match, #refute_nil, #refute_operator, #refute_predicate, #refute_respond_to, #refute_same, #skip, #synchronize
Methods included from Deprecated::Hooks
#_run_hooks, #run_setup_hooks, #run_teardown_hooks
Methods included from LifecycleHooks
#after_setup, #after_teardown, #before_setup, #before_teardown
Constructor Details
#initialize(name) ⇒ TestCase
:nodoc:
# File 'lib/minitest/unit.rb', line 1331

def initialize name # :nodoc:
  @__name__ = name
  @__io__ = nil
  @passed = nil

  @@current = self
end
Instance Attribute Details
#__name__ ⇒ Object (readonly)
:nodoc:
# File 'lib/minitest/unit.rb', line 1272

def __name__
  @__name__
end
Class Method Details
.bench_exp(min, max, base = 10) ⇒ Object
Returns a set of ranges stepped exponentially from min to max by powers of base. Eg:
bench_exp(2, 16, 2) # => [2, 4, 8, 16]
# File 'lib/minitest/benchmark.rb', line 27

def self.bench_exp min, max, base = 10
  min = (Math.log10(min) / Math.log10(base)).to_i
  max = (Math.log10(max) / Math.log10(base)).to_i

  (min..max).map { |m| base ** m }.to_a
end
.bench_linear(min, max, step = 10) ⇒ Object
Returns a set of ranges stepped linearly from min to max by step. Eg:
bench_linear(20, 40, 10) # => [20, 30, 40]
# File 'lib/minitest/benchmark.rb', line 40

def self.bench_linear min, max, step = 10
  (min..max).step(step).to_a
rescue LocalJumpError # 1.8.6
  r = []; (min..max).step(step) { |n| r << n }; r
end
.bench_range ⇒ Object
Specifies the ranges used for benchmarking for that class. Defaults to exponential growth from 1 to 10k by powers of 10. Override if you need different ranges for your benchmarks.
See also: ::bench_exp and ::bench_linear.
# File 'lib/minitest/benchmark.rb', line 68

def self.bench_range
  bench_exp 1, 10_000
end
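A minimal sketch of overriding ::bench_range in a benchmark subclass; the class name, method name, and ranges below are hypothetical, not from the library:

require 'minitest/autorun'
require 'minitest/benchmark'

class TestMyAlgorithm < MiniTest::Unit::TestCase
  # Step linearly from 1_000 to 5_000 instead of the default 1..10_000.
  def self.bench_range
    bench_linear 1_000, 5_000, 1_000
  end

  def bench_my_algorithm
    assert_performance_linear 0.99 do |n|
      n.times { nil } # replace with the work you want to measure
    end
  end
end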
.benchmark_methods ⇒ Object
Returns the benchmark methods (methods that start with bench_) for that class.
# File 'lib/minitest/benchmark.rb', line 50

def self.benchmark_methods # :nodoc:
  public_instance_methods(true).grep(/^bench_/).map { |m| m.to_s }.sort
end
.benchmark_suites ⇒ Object
Returns all test suites that have benchmark methods.
# File 'lib/minitest/benchmark.rb', line 57

def self.benchmark_suites
  TestCase.test_suites.reject { |s| s.benchmark_methods.empty? }
end
.current ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1338

def self.current # :nodoc:
  @@current
end
.i_suck_and_my_tests_are_order_dependent! ⇒ Object
Call this at the top of your tests when you absolutely positively need to have ordered tests. In doing so, you’re admitting that you suck and your tests are weak.
# File 'lib/minitest/unit.rb', line 1368

def self.i_suck_and_my_tests_are_order_dependent!
  class << self
    undef_method :test_order if method_defined? :test_order
    define_method :test_order do :alpha end
  end
end
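A minimal usage sketch (the class and test names are hypothetical):

class TestLegacyWorkflow < MiniTest::Unit::TestCase
  i_suck_and_my_tests_are_order_dependent!

  # With test_order forced to :alpha, these run in alphabetical order.
  def test_1_create;  end
  def test_2_update;  end
  def test_3_destroy; end
end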
.inherited(klass) ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1401

def self.inherited klass # :nodoc:
  @@test_suites[klass] = true
  klass.reset_setup_teardown_hooks
  super
end
.make_my_diffs_pretty! ⇒ Object
Make diffs for this TestCase use #pretty_inspect so that the diff in assert_equal can show more detail. NOTE: this is much slower than the regular inspect but much more usable for complex objects.
# File 'lib/minitest/unit.rb', line 1381

def self.make_my_diffs_pretty!
  require 'pp'

  define_method :mu_pp do |o|
    o.pretty_inspect
  end
end
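A minimal usage sketch; the class, method, and data below are hypothetical, and the assertion fails on purpose so the pretty-printed diff is visible:

class TestConfigMerge < MiniTest::Unit::TestCase
  make_my_diffs_pretty!

  def test_deeply_nested_equality
    expected = { :server => { :ports => [80, 443],  :tls => true } }
    actual   = { :server => { :ports => [80, 8443], :tls => true } }
    assert_equal expected, actual # the failure diff now uses pretty_inspect
  end
end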
.parallelize_me! ⇒ Object
Call this at the top of your tests when you want to run your tests in parallel. In doing so, you’re admitting that you rule and your tests are awesome.
# File 'lib/minitest/unit.rb', line 1394

def self.parallelize_me!
  class << self
    undef_method :test_order if method_defined? :test_order
    define_method :test_order do :parallel end
  end
end
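A minimal usage sketch (the class and test names are hypothetical); each test must be independent of the others for parallel execution to be safe:

class TestThreadSafeCache < MiniTest::Unit::TestCase
  parallelize_me!

  # These tests may run concurrently and in any order.
  def test_read;  assert true; end
  def test_write; assert true; end
end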
.reset ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1357

def self.reset # :nodoc:
  @@test_suites = {}
end
.reset_setup_teardown_hooks ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1451

def self.reset_setup_teardown_hooks # :nodoc:
  # also deprecated... believe it.
  @setup_hooks = []
  @teardown_hooks = []
end
.test_methods ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1415

def self.test_methods # :nodoc:
  methods = public_instance_methods(true).grep(/^test/).map { |m| m.to_s }

  case self.test_order
  when :parallel
    max = methods.size
    ParallelEach.new methods.sort.sort_by { rand max }
  when :random then
    max = methods.size
    methods.sort.sort_by { rand max }
  when :alpha, :sorted then
    methods.sort
  else
    raise "Unknown test_order: #{self.test_order.inspect}"
  end
end
.test_order ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1407

def self.test_order # :nodoc:
  :random
end
.test_suites ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1411

def self.test_suites # :nodoc:
  @@test_suites.keys.sort_by { |ts| ts.name.to_s }
end
Instance Method Details
#assert_performance(validation, &work) ⇒ Object
Runs the given work, gathering the times of each run. Range and times are then passed to a given validation proc. Outputs the benchmark name and times in tab-separated format, making it easy to paste into a spreadsheet for graphing or further analysis.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
validation = proc { |x, y| ... }
assert_performance validation do |n|
@obj.algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 90

def assert_performance validation, &work
  range = self.class.bench_range

  io.print "#{__name__}"

  times = []

  range.each do |x|
    GC.start
    t0 = Time.now
    instance_exec(x, &work)
    t = Time.now - t0

    io.print "\t%9.6f" % t
    times << t
  end
  io.puts

  validation[range, times]
end
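As a sketch of what a custom validation proc might look like (this particular check is illustrative, not from the library): it receives the range and the measured times and may make any assertion about them.

def bench_lookup
  # Hypothetical validation: no single run may take longer than 0.5s.
  validation = proc do |range, times|
    times.each_with_index do |t, i|
      assert_operator t, :<, 0.5, "range value #{range[i]} was too slow"
    end
  end

  assert_performance validation do |n|
    n.times { nil } # replace with the lookup you want to measure
  end
end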
#assert_performance_constant(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a constant rate (eg, linear slope == 0) within a given threshold. Note: because we’re testing for a slope of 0, R^2 is not a good determining factor for the fit, so the threshold is applied against the slope itself. As such, you probably want to tighten it from the default.
See www.graphpad.com/curvefit/goodness_of_fit.htm for more details.
Fit is calculated by #fit_linear.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
assert_performance_constant 0.9999 do |n|
@obj.algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 134

def assert_performance_constant threshold = 0.99, &work
  validation = proc do |range, times|
    a, b, rr = fit_linear range, times
    assert_in_delta 0, b, 1 - threshold
    [a, b, rr]
  end

  assert_performance validation, &work
end
#assert_performance_exponential(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match an exponential curve within a given error threshold.
Fit is calculated by #fit_exponential.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
assert_performance_exponential 0.9999 do |n|
@obj.algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 160

def assert_performance_exponential threshold = 0.99, &work
  assert_performance validation_for_fit(:exponential, threshold), &work
end
#assert_performance_linear(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a straight line within a given error threshold.
Fit is calculated by #fit_linear.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
assert_performance_linear 0.9999 do |n|
@obj.algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 180

def assert_performance_linear threshold = 0.99, &work
  assert_performance validation_for_fit(:linear, threshold), &work
end
#assert_performance_power(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a power curve within a given error threshold.
Fit is calculated by #fit_power.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
assert_performance_power 0.9999 do |x|
@obj.algorithm
end
end
# File 'lib/minitest/benchmark.rb', line 200

def assert_performance_power threshold = 0.99, &work
  assert_performance validation_for_fit(:power, threshold), &work
end
#fit_error(xys) ⇒ Object
Takes an array of x/y pairs and calculates the general R^2 value.
# File 'lib/minitest/benchmark.rb', line 209

def fit_error xys
  ȳ      = sigma(xys) { |x, y| y } / xys.size.to_f
  ss_tot = sigma(xys) { |x, y| (y - ȳ) ** 2 }
  ss_err = sigma(xys) { |x, y| (yield(x) - y) ** 2 }

  1 - (ss_err / ss_tot)
end
#fit_exponential(xs, ys) ⇒ Object
To fit a functional form: y = ae^(bx).
Takes x and y values and returns [a, b, r^2].
See: mathworld.wolfram.com/LeastSquaresFittingExponential.html
# File 'lib/minitest/benchmark.rb', line 224

def fit_exponential xs, ys
  n     = xs.size
  xys   = xs.zip(ys)
  sxlny = sigma(xys) { |x,y| x * Math.log(y) }
  slny  = sigma(xys) { |x,y| Math.log(y) }
  sx2   = sigma(xys) { |x,y| x * x }
  sx    = sigma xs

  c = n * sx2 - sx ** 2
  a = (slny * sx2 - sx * sxlny) / c
  b = ( n * sxlny - sx * slny ) / c

  return Math.exp(a), b, fit_error(xys) { |x| Math.exp(a + b * x) }
end
#fit_linear(xs, ys) ⇒ Object
Fits the functional form: a + bx.
Takes x and y values and returns [a, b, r^2].
# File 'lib/minitest/benchmark.rb', line 246

def fit_linear xs, ys
  n   = xs.size
  xys = xs.zip(ys)
  sx  = sigma xs
  sy  = sigma ys
  sx2 = sigma(xs)  { |x| x ** 2 }
  sxy = sigma(xys) { |x,y| x * y }

  c = n * sx2 - sx**2
  a = (sy * sx2 - sx * sxy) / c
  b = ( n * sxy - sx * sy ) / c

  return a, b, fit_error(xys) { |x| a + b * x }
end
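A minimal sketch of calling #fit_linear directly (from inside a test case) on made-up data points; the values below are illustrative only:

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]              # exactly y = 1 + 2x
a, b, rr = fit_linear xs, ys
# a  => 1.0   intercept
# b  => 2.0   slope
# rr => 1.0   perfect fit, so R^2 is 1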
#fit_power(xs, ys) ⇒ Object
To fit a functional form: y = ax^b.
Takes x and y values and returns [a, b, r^2].
# File 'lib/minitest/benchmark.rb', line 268

def fit_power xs, ys
  n       = xs.size
  xys     = xs.zip(ys)
  slnxlny = sigma(xys) { |x, y| Math.log(x) * Math.log(y) }
  slnx    = sigma(xs)  { |x   | Math.log(x) }
  slny    = sigma(ys)  { |   y| Math.log(y) }
  slnx2   = sigma(xs)  { |x   | Math.log(x) ** 2 }

  b = (n * slnxlny - slnx * slny) / (n * slnx2 - slnx ** 2)
  a = (slny - b * slnx) / n

  return Math.exp(a), b, fit_error(xys) { |x| (Math.exp(a) * (x ** b)) }
end
#io ⇒ Object
Return the output IO object
# File 'lib/minitest/unit.rb', line 1345

def io
  @__io__ = true
  MiniTest::Unit.output
end
#io? ⇒ Boolean
Have we hooked up the IO yet?
# File 'lib/minitest/unit.rb', line 1353

def io?
  @__io__
end
#passed? ⇒ Boolean
Returns true if the test passed.
# File 'lib/minitest/unit.rb', line 1435

def passed?
  @passed
end
#run(runner) ⇒ Object
Runs the tests reporting the status to runner
# File 'lib/minitest/unit.rb', line 1282

def run runner
  trap "INFO" do
    runner.report.each_with_index do |msg, i|
      warn "\n%3d) %s" % [i + 1, msg]
    end
    warn ''
    time = runner.start_time ? Time.now - runner.start_time : 0
    warn "Current Test: %s#%s %.2fs" % [self.class, self.__name__, time]
    runner.status $stderr
  end if SUPPORTS_INFO_SIGNAL

  start_time = Time.now

  result = ""
  begin
    @passed = nil
    self.before_setup
    self.setup
    self.after_setup

    self.run_test self.__name__

    result = "." unless io?
    time = Time.now - start_time
    runner.record self.class, self.__name__, self._assertions, time, nil
    @passed = true
  rescue *PASSTHROUGH_EXCEPTIONS
    raise
  rescue Exception => e
    @passed = false
    time = Time.now - start_time
    runner.record self.class, self.__name__, self._assertions, time, e
    result = runner.puke self.class, self.__name__, e
  ensure
    %w{ before_teardown teardown after_teardown }.each do |hook|
      begin
        self.send hook
      rescue *PASSTHROUGH_EXCEPTIONS
        raise
      rescue Exception => e
        @passed = false
        result = runner.puke self.class, self.__name__, e
      end
    end
    trap 'INFO', 'DEFAULT' if SUPPORTS_INFO_SIGNAL
  end
  result
end
#setup ⇒ Object
Runs before every test. Use this to set up before each test run.
# File 'lib/minitest/unit.rb', line 1443

def setup; end
#sigma(enum, &block) ⇒ Object
Enumerates over enum mapping block if given, returning the sum of the result. Eg:
sigma([1, 2, 3]) # => 1 + 2 + 3 => 6
sigma([1, 2, 3]) { |n| n ** 2 } # => 1 + 4 + 9 => 14
# File 'lib/minitest/benchmark.rb', line 289

def sigma enum, &block
  enum = enum.map(&block) if block
  enum.inject { |sum, n| sum + n }
end
#teardown ⇒ Object
Runs after every test. Use this to clean up after each test run.
# File 'lib/minitest/unit.rb', line 1449

def teardown; end
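A minimal sketch of overriding #setup and #teardown in a subclass; the class name and the scratch-directory fixture below are hypothetical:

require 'tmpdir'
require 'fileutils'

class TestScratchDir < MiniTest::Unit::TestCase
  def setup
    @dir = Dir.mktmpdir         # runs before every test
  end

  def teardown
    FileUtils.remove_entry @dir # runs after every test
  end

  def test_dir_exists
    assert File.directory?(@dir)
  end
end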
#validation_for_fit(msg, threshold) ⇒ Object
Returns a proc that calls the specified fit method and asserts that the error is within a tolerable threshold.
# File 'lib/minitest/benchmark.rb', line 298

def validation_for_fit msg, threshold
  proc do |range, times|
    a, b, rr = send "fit_#{msg}", range, times
    assert_operator rr, :>=, threshold
    [a, b, rr]
  end
end