Class: MiniTest::Unit::TestCase
- Extended by:
- Deprecated::HooksCM, Guard
- Includes:
- Assertions, Deprecated::Hooks, Guard, LifecycleHooks
- Defined in:
- lib/minitest/unit.rb,
lib/minitest/benchmark.rb
Overview
Subclass TestCase to create your own tests. Typically you’ll want a TestCase subclass per implementation class.
See MiniTest::Assertions
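For illustration, a minimal sketch of a subclass (the Meme class and its methods stand in for your code under test and are hypothetical):

require 'minitest/autorun'

class TestMeme < MiniTest::Unit::TestCase   # hypothetical test class
  def setup
    @meme = Meme.new                        # assumes a Meme class under test
  end

  def test_that_kitty_can_eat
    assert_equal "OHAI!", @meme.i_can_has_cheezburger?
  end
end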
Direct Known Subclasses
Constant Summary
- PASSTHROUGH_EXCEPTIONS =
[NoMemoryError, SignalException, Interrupt, SystemExit]
- SUPPORTS_INFO_SIGNAL =
Signal.list["INFO"]
Constants included from Assertions
Instance Attribute Summary
-
#__name__ ⇒ Object (readonly)
:nodoc:.
Class Method Summary
-
.bench_exp(min, max, base = 10) ⇒ Object
Returns a set of ranges stepped exponentially from min to max by powers of base.
-
.bench_linear(min, max, step = 10) ⇒ Object
Returns a set of ranges stepped linearly from min to max by step.
-
.bench_range ⇒ Object
Specifies the ranges used for benchmarking for that class.
-
.benchmark_methods ⇒ Object
Returns the benchmark methods (methods that start with bench_) for that class.
-
.benchmark_suites ⇒ Object
Returns all test suites that have benchmark methods.
-
.current ⇒ Object
:nodoc:.
-
.i_suck_and_my_tests_are_order_dependent! ⇒ Object
Call this at the top of your tests when you absolutely positively need to have ordered tests.
-
.inherited(klass) ⇒ Object
:nodoc:.
-
.make_my_diffs_pretty! ⇒ Object
Make diffs for this TestCase use #pretty_inspect so that the diff in assert_equal can be more detailed.
-
.parallelize_me! ⇒ Object
Call this at the top of your tests when you want to run your tests in parallel.
-
.reset ⇒ Object
:nodoc:.
-
.reset_setup_teardown_hooks ⇒ Object
:nodoc:.
-
.test_methods ⇒ Object
:nodoc:.
-
.test_order ⇒ Object
:nodoc:.
-
.test_suites ⇒ Object
:nodoc:.
Instance Method Summary
-
#assert_performance(validation, &work) ⇒ Object
Runs the given work, gathering the times of each run.
-
#assert_performance_constant(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a constant rate (eg, linear slope == 0) within a given threshold.
-
#assert_performance_exponential(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match an exponential curve within a given error threshold.
-
#assert_performance_linear(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a straight line within a given error threshold.
-
#assert_performance_power(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered curve fit to match a power curve within a given error threshold.
-
#fit_error(xys) ⇒ Object
Takes an array of x/y pairs and calculates the general R^2 value.
-
#fit_exponential(xs, ys) ⇒ Object
To fit a functional form: y = ae^(bx).
-
#fit_linear(xs, ys) ⇒ Object
Fits the functional form: a + bx.
-
#fit_power(xs, ys) ⇒ Object
To fit a functional form: y = ax^b.
-
#initialize(name) ⇒ TestCase
constructor
:nodoc:.
-
#io ⇒ Object
Return the output IO object.
-
#io? ⇒ Boolean
Have we hooked up the IO yet?
-
#passed? ⇒ Boolean
Returns true if the test passed.
-
#run(runner) ⇒ Object
Runs the tests reporting the status to runner.
-
#setup ⇒ Object
Runs before every test.
-
#sigma(enum, &block) ⇒ Object
Enumerates over enum mapping block if given, returning the sum of the result.
-
#teardown ⇒ Object
Runs after every test.
-
#validation_for_fit(msg, threshold) ⇒ Object
Returns a proc that calls the specified fit method and asserts that the error is within a tolerable threshold.
Methods included from Deprecated::HooksCM
add_setup_hook, add_teardown_hook, setup_hooks, teardown_hooks
Methods included from Guard
jruby?, mri?, rubinius?, windows?
Methods included from Assertions
#_assertions, #_assertions=, #assert, #assert_block, #assert_empty, #assert_equal, #assert_in_delta, #assert_in_epsilon, #assert_includes, #assert_instance_of, #assert_kind_of, #assert_match, #assert_nil, #assert_operator, #assert_output, #assert_predicate, #assert_raises, #assert_respond_to, #assert_same, #assert_send, #assert_silent, #assert_throws, #capture_io, #capture_subprocess_io, #diff, diff, diff=, #exception_details, #flunk, #message, #mu_pp, #mu_pp_for_diff, #pass, #refute, #refute_empty, #refute_equal, #refute_in_delta, #refute_in_epsilon, #refute_includes, #refute_instance_of, #refute_kind_of, #refute_match, #refute_nil, #refute_operator, #refute_predicate, #refute_respond_to, #refute_same, #skip, #synchronize
Methods included from Deprecated::Hooks
#_run_hooks, #run_setup_hooks, #run_teardown_hooks
Methods included from LifecycleHooks
#after_setup, #after_teardown, #before_setup, #before_teardown
Constructor Details
#initialize(name) ⇒ TestCase
:nodoc:
# File 'lib/minitest/unit.rb', line 1324

def initialize name # :nodoc:
  @__name__ = name
  @__io__ = nil
  @passed = nil
  @@current = self
end
Instance Attribute Details
#__name__ ⇒ Object (readonly)
:nodoc:
# File 'lib/minitest/unit.rb', line 1265

def __name__
  @__name__
end
Class Method Details
.bench_exp(min, max, base = 10) ⇒ Object
Returns a set of ranges stepped exponentially from min to max by powers of base. Eg:
bench_exp(2, 16, 2) # => [2, 4, 8, 16]
# File 'lib/minitest/benchmark.rb', line 20

def self.bench_exp min, max, base = 10
  min = (Math.log10(min) / Math.log10(base)).to_i
  max = (Math.log10(max) / Math.log10(base)).to_i

  (min..max).map { |m| base ** m }.to_a
end
.bench_linear(min, max, step = 10) ⇒ Object
Returns a set of ranges stepped linearly from min to max by step. Eg:
bench_linear(20, 40, 10) # => [20, 30, 40]
# File 'lib/minitest/benchmark.rb', line 33

def self.bench_linear min, max, step = 10
  (min..max).step(step).to_a
rescue LocalJumpError # 1.8.6
  r = []; (min..max).step(step) { |n| r << n }; r
end
.bench_range ⇒ Object
Specifies the ranges used for benchmarking for that class. Defaults to exponential growth from 1 to 10k by powers of 10. Override if you need different ranges for your benchmarks.
See also: ::bench_exp and ::bench_linear.
# File 'lib/minitest/benchmark.rb', line 61

def self.bench_range
  bench_exp 1, 10_000
end
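For example, a benchmark class can override ::bench_range to use a different range. The class, method, and values below are illustrative only:

class BenchSort < MiniTest::Unit::TestCase   # hypothetical benchmark class
  def self.bench_range
    bench_linear 1_000, 10_000, 1_000        # => [1000, 2000, ..., 10000]
  end

  def bench_sort
    assert_performance_linear 0.99 do |n|
      (1..n).to_a.shuffle.sort               # work whose time grows with n
    end
  end
end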
.benchmark_methods ⇒ Object
Returns the benchmark methods (methods that start with bench_) for that class.
# File 'lib/minitest/benchmark.rb', line 43

def self.benchmark_methods # :nodoc:
  public_instance_methods(true).grep(/^bench_/).map { |m| m.to_s }.sort
end
.benchmark_suites ⇒ Object
Returns all test suites that have benchmark methods.
# File 'lib/minitest/benchmark.rb', line 50

def self.benchmark_suites
  TestCase.test_suites.reject { |s| s.benchmark_methods.empty? }
end
.current ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1331

def self.current # :nodoc:
  @@current
end
.i_suck_and_my_tests_are_order_dependent! ⇒ Object
Call this at the top of your tests when you absolutely positively need to have ordered tests. In doing so, you’re admitting that you suck and your tests are weak.
# File 'lib/minitest/unit.rb', line 1361

def self.i_suck_and_my_tests_are_order_dependent!
  class << self
    undef_method :test_order if method_defined? :test_order
    define_method :test_order do :alpha end
  end
end
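Usage is a single call at the top of the class body, after which ::test_order returns :alpha and tests run in sorted order. The class and method names below are illustrative:

class TestLegacyWorkflow < MiniTest::Unit::TestCase   # hypothetical class
  i_suck_and_my_tests_are_order_dependent!            # test_order is now :alpha

  def test_1_create
    # runs first
  end

  def test_2_update
    # relies on state left behind by test_1_create
  end
end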
.inherited(klass) ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1394

def self.inherited klass # :nodoc:
  @@test_suites[klass] = true
  klass.reset_setup_teardown_hooks
  super
end
.make_my_diffs_pretty! ⇒ Object
Make diffs for this TestCase use #pretty_inspect so that the diff in assert_equal can be more detailed. NOTE: this is much slower than the regular inspect but much more usable for complex objects.
# File 'lib/minitest/unit.rb', line 1374

def self.make_my_diffs_pretty!
  require 'pp'

  define_method :mu_pp do |o|
    o.pretty_inspect
  end
end
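A sketch of usage (class and data are illustrative): call it once in the class body so mu_pp uses #pretty_inspect, which should make failure messages for nested structures easier to read:

class TestConfig < MiniTest::Unit::TestCase   # hypothetical class
  make_my_diffs_pretty!                        # mu_pp now uses #pretty_inspect

  def test_deeply_nested_hashes
    expected = { :a => { :b => [1, 2, 3] } }
    actual   = { :a => { :b => [1, 2, 4] } }
    assert_equal expected, actual              # failure diff uses pretty-printed output
  end
end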
.parallelize_me! ⇒ Object
Call this at the top of your tests when you want to run your tests in parallel. In doing so, you’re admitting that you rule and your tests are awesome.
# File 'lib/minitest/unit.rb', line 1387

def self.parallelize_me!
  class << self
    undef_method :test_order if method_defined? :test_order
    define_method :test_order do :parallel end
  end
end
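Usage mirrors ::i_suck_and_my_tests_are_order_dependent!: one call in the class body, after which ::test_order returns :parallel. The class and method below are illustrative; tests in such a class should not share mutable state:

class TestThreadSafeCache < MiniTest::Unit::TestCase   # hypothetical class
  parallelize_me!   # test methods in this class are scheduled in parallel

  def test_reads_are_independent
    assert_equal 42, 21 * 2
  end
end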
.reset ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1350

def self.reset # :nodoc:
  @@test_suites = {}
end
.reset_setup_teardown_hooks ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1444

def self.reset_setup_teardown_hooks # :nodoc:
  # also deprecated... believe it.
  @setup_hooks = []
  @teardown_hooks = []
end
.test_methods ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1408

def self.test_methods # :nodoc:
  methods = public_instance_methods(true).grep(/^test/).map { |m| m.to_s }

  case self.test_order
  when :parallel
    max = methods.size
    ParallelEach.new methods.sort.sort_by { rand max }
  when :random then
    max = methods.size
    methods.sort.sort_by { rand max }
  when :alpha, :sorted then
    methods.sort
  else
    raise "Unknown test_order: #{self.test_order.inspect}"
  end
end
.test_order ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1400

def self.test_order # :nodoc:
  :random
end
.test_suites ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1404

def self.test_suites # :nodoc:
  @@test_suites.keys.sort_by { |ts| ts.name.to_s }
end
Instance Method Details
#assert_performance(validation, &work) ⇒ Object
Runs the given work, gathering the times of each run. Range and times are then passed to a given validation proc. Outputs the benchmark name and times in tab-separated format, making it easy to paste into a spreadsheet for graphing or further analysis.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
  validation = proc { |x, y| ... }
  assert_performance validation do |n|
    @obj.algorithm(n)
  end
end
# File 'lib/minitest/benchmark.rb', line 83

def assert_performance validation, &work
  range = self.class.bench_range

  io.print "#{__name__}"

  times = []

  range.each do |x|
    GC.start
    t0 = Time.now
    instance_exec(x, &work)
    t = Time.now - t0

    io.print "\t%9.6f" % t
    times << t
  end
  io.puts

  validation[range, times]
end
#assert_performance_constant(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a constant rate (eg, linear slope == 0) within a given threshold. Note: because we’re testing for a slope of 0, R^2 is not a good determining factor for the fit, so the threshold is applied against the slope itself. As such, you probably want to tighten it from the default.
See www.graphpad.com/curvefit/goodness_of_fit.htm for more details.
Fit is calculated by #fit_linear.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
  assert_performance_constant 0.9999 do |n|
    @obj.algorithm(n)
  end
end
# File 'lib/minitest/benchmark.rb', line 127

def assert_performance_constant threshold = 0.99, &work
  validation = proc do |range, times|
    a, b, rr = fit_linear range, times
    assert_in_delta 0, b, 1 - threshold
    [a, b, rr]
  end

  assert_performance validation, &work
end
#assert_performance_exponential(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match an exponential curve within a given error threshold.
Fit is calculated by #fit_exponential.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
  assert_performance_exponential 0.9999 do |n|
    @obj.algorithm(n)
  end
end
# File 'lib/minitest/benchmark.rb', line 153

def assert_performance_exponential threshold = 0.99, &work
  assert_performance validation_for_fit(:exponential, threshold), &work
end
#assert_performance_linear(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a straight line within a given error threshold.
Fit is calculated by #fit_linear.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
  assert_performance_linear 0.9999 do |n|
    @obj.algorithm(n)
  end
end
# File 'lib/minitest/benchmark.rb', line 173

def assert_performance_linear threshold = 0.99, &work
  assert_performance validation_for_fit(:linear, threshold), &work
end
#assert_performance_power(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered curve fit to match a power curve within a given error threshold.
Fit is calculated by #fit_power.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
  assert_performance_power 0.9999 do |x|
    @obj.algorithm
  end
end
# File 'lib/minitest/benchmark.rb', line 193

def assert_performance_power threshold = 0.99, &work
  assert_performance validation_for_fit(:power, threshold), &work
end
#fit_error(xys) ⇒ Object
Takes an array of x/y pairs and calculates the general R^2 value.
# File 'lib/minitest/benchmark.rb', line 202

def fit_error xys
  y_bar  = sigma(xys) { |x, y| y } / xys.size.to_f
  ss_tot = sigma(xys) { |x, y| (y - y_bar) ** 2 }
  ss_err = sigma(xys) { |x, y| (yield(x) - y) ** 2 }

  1 - (ss_err / ss_tot)
end
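For intuition (illustrative numbers, evaluated inside a TestCase instance), a block that reproduces every y exactly yields an R^2 of 1.0:

fit_error([[1, 2], [2, 4], [3, 6]]) { |x| 2 * x }   # => 1.0, a perfect fit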
#fit_exponential(xs, ys) ⇒ Object
To fit a functional form: y = ae^(bx).
Takes x and y values and returns [a, b, r^2].
See: mathworld.wolfram.com/LeastSquaresFittingExponential.html
# File 'lib/minitest/benchmark.rb', line 217

def fit_exponential xs, ys
  n     = xs.size
  xys   = xs.zip(ys)
  sxlny = sigma(xys) { |x,y| x * Math.log(y) }
  slny  = sigma(xys) { |x,y| Math.log(y) }
  sx2   = sigma(xys) { |x,y| x * x }
  sx    = sigma xs

  c = n * sx2 - sx ** 2
  a = (slny * sx2 - sx * sxlny) / c
  b = ( n * sxlny - sx * slny ) / c

  return Math.exp(a), b, fit_error(xys) { |x| Math.exp(a + b * x) }
end
#fit_linear(xs, ys) ⇒ Object
Fits the functional form: a + bx.
Takes x and y values and returns [a, b, r^2].
# File 'lib/minitest/benchmark.rb', line 239

def fit_linear xs, ys
  n   = xs.size
  xys = xs.zip(ys)
  sx  = sigma xs
  sy  = sigma ys
  sx2 = sigma(xs)  { |x| x ** 2 }
  sxy = sigma(xys) { |x,y| x * y }

  c = n * sx2 - sx**2
  a = (sy * sx2 - sx * sxy) / c
  b = ( n * sxy - sx * sy ) / c

  return a, b, fit_error(xys) { |x| a + b * x }
end
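For intuition (illustrative values, evaluated inside a TestCase instance): data lying exactly on y = 1 + 2x comes back with intercept 1, slope 2, and R^2 of 1.0:

fit_linear([1, 2, 3, 4], [3, 5, 7, 9])   # => [1, 2, 1.0]  (a, b, r^2 for y = 1 + 2x)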
#fit_power(xs, ys) ⇒ Object
To fit a functional form: y = ax^b.
Takes x and y values and returns [a, b, r^2].
# File 'lib/minitest/benchmark.rb', line 261

def fit_power xs, ys
  n       = xs.size
  xys     = xs.zip(ys)
  slnxlny = sigma(xys) { |x, y| Math.log(x) * Math.log(y) }
  slnx    = sigma(xs)  { |x| Math.log(x) }
  slny    = sigma(ys)  { |y| Math.log(y) }
  slnx2   = sigma(xs)  { |x| Math.log(x) ** 2 }

  b = (n * slnxlny - slnx * slny) / (n * slnx2 - slnx ** 2);
  a = (slny - b * slnx) / n

  return Math.exp(a), b, fit_error(xys) { |x| (Math.exp(a) * (x ** b)) }
end
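Similarly (illustrative values, evaluated inside a TestCase instance), data generated by y = 2 * x**3 fits to roughly a = 2 and b = 3 with an R^2 near 1.0:

fit_power([1, 2, 3], [2, 16, 54])   # => approximately [2.0, 3.0, 1.0]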
#io ⇒ Object
Return the output IO object
# File 'lib/minitest/unit.rb', line 1338

def io
  @__io__ = true
  MiniTest::Unit.output
end
#io? ⇒ Boolean
Have we hooked up the IO yet?
# File 'lib/minitest/unit.rb', line 1346

def io?
  @__io__
end
#passed? ⇒ Boolean
Returns true if the test passed.
# File 'lib/minitest/unit.rb', line 1428

def passed?
  @passed
end
#run(runner) ⇒ Object
Runs the tests reporting the status to runner
# File 'lib/minitest/unit.rb', line 1275

def run runner
  trap "INFO" do
    runner.report.each_with_index do |msg, i|
      warn "\n%3d) %s" % [i + 1, msg]
    end
    warn ''
    time = runner.start_time ? Time.now - runner.start_time : 0
    warn "Current Test: %s#%s %.2fs" % [self.class, self.__name__, time]
    runner.status $stderr
  end if SUPPORTS_INFO_SIGNAL

  start_time = Time.now

  result = ""
  begin
    @passed = nil
    self.before_setup
    self.setup
    self.after_setup
    self.run_test self.__name__

    result = "." unless io?
    time = Time.now - start_time
    runner.record self.class, self.__name__, self._assertions, time, nil
    @passed = true
  rescue *PASSTHROUGH_EXCEPTIONS
    raise
  rescue Exception => e
    @passed = false
    time = Time.now - start_time
    runner.record self.class, self.__name__, self._assertions, time, e
    result = runner.puke self.class, self.__name__, e
  ensure
    %w{ before_teardown teardown after_teardown }.each do |hook|
      begin
        self.send hook
      rescue *PASSTHROUGH_EXCEPTIONS
        raise
      rescue Exception => e
        @passed = false
        result = runner.puke self.class, self.__name__, e
      end
    end
    trap 'INFO', 'DEFAULT' if SUPPORTS_INFO_SIGNAL
  end
  result
end
#setup ⇒ Object
Runs before every test. Use this to set up before each test run.
# File 'lib/minitest/unit.rb', line 1436

def setup; end
#sigma(enum, &block) ⇒ Object
Enumerates over enum mapping block if given, returning the sum of the result. Eg:
sigma([1, 2, 3]) # => 1 + 2 + 3 => 6
sigma([1, 2, 3]) { |n| n ** 2 } # => 1 + 4 + 9 => 14
# File 'lib/minitest/benchmark.rb', line 282

def sigma enum, &block
  enum = enum.map(&block) if block
  enum.inject { |sum, n| sum + n }
end
#teardown ⇒ Object
Runs after every test. Use this to clean up after each test run.
# File 'lib/minitest/unit.rb', line 1442

def teardown; end
#validation_for_fit(msg, threshold) ⇒ Object
Returns a proc that calls the specified fit method and asserts that the error is within a tolerable threshold.
# File 'lib/minitest/benchmark.rb', line 291

def validation_for_fit msg, threshold
  proc do |range, times|
    a, b, rr = send "fit_#{msg}", range, times
    assert_operator rr, :>=, threshold
    [a, b, rr]
  end
end