Class: MiniTest::Unit::TestCase
- Extended by:
- Guard
- Includes:
- Assertions, Guard
- Defined in:
- lib/minitest/unit.rb,
lib/minitest/benchmark.rb
Overview
Subclass TestCase to create your own tests. Typically you’ll want a TestCase subclass per implementation class.
See MiniTest::Assertions
Direct Known Subclasses
Constant Summary
- PASSTHROUGH_EXCEPTIONS =
[NoMemoryError, SignalException, Interrupt, SystemExit]
- SUPPORTS_INFO_SIGNAL =
Signal.list["INFO"] # :nodoc:
Constants included from Assertions
Assertions::UNDEFINED, Assertions::WINDOZE
Instance Attribute Summary
-
#__name__ ⇒ Object
readonly
:nodoc:.
Class Method Summary
-
.add_setup_hook(arg = nil, &block) ⇒ Object
Adds a block of code that will be executed before every TestCase is run.
-
.add_teardown_hook(arg = nil, &block) ⇒ Object
Adds a block of code that will be executed after every TestCase is run.
-
.bench_exp(min, max, base = 10) ⇒ Object
Returns a set of ranges stepped exponentially from min to max by powers of base.
-
.bench_linear(min, max, step = 10) ⇒ Object
Returns a set of ranges stepped linearly from min to max by step.
-
.bench_range ⇒ Object
Specifies the ranges used for benchmarking for that class.
-
.benchmark_methods ⇒ Object
Returns the benchmark methods (methods that start with bench_) for that class.
-
.benchmark_suites ⇒ Object
Returns all test suites that have benchmark methods.
-
.current ⇒ Object
:nodoc:.
-
.i_suck_and_my_tests_are_order_dependent! ⇒ Object
Call this at the top of your tests when you absolutely positively need to have ordered tests.
-
.inherited(klass) ⇒ Object
:nodoc:.
-
.reset ⇒ Object
:nodoc:.
-
.reset_setup_teardown_hooks ⇒ Object
:nodoc:.
-
.setup_hooks ⇒ Object
:nodoc:.
-
.teardown_hooks ⇒ Object
:nodoc:.
-
.test_methods ⇒ Object
:nodoc:.
-
.test_order ⇒ Object
:nodoc:.
-
.test_suites ⇒ Object
:nodoc:.
Instance Method Summary
-
#after_setup ⇒ Object
Runs before every test after setup.
-
#after_teardown ⇒ Object
Runs after every teardown.
-
#assert_performance(validation, &work) ⇒ Object
Runs the given work, gathering the times of each run.
-
#assert_performance_constant(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a constant rate (eg, linear slope == 0) within a given threshold.
-
#assert_performance_exponential(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match an exponential curve within a given error threshold.
-
#assert_performance_linear(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a straight line within a given error threshold.
-
#assert_performance_power(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a power curve within a given error threshold.
-
#before_setup ⇒ Object
Runs before every setup.
-
#before_teardown ⇒ Object
Runs after every test before teardown.
-
#fit_error(xys) ⇒ Object
Takes an array of x/y pairs and calculates the general R^2 value.
-
#fit_exponential(xs, ys) ⇒ Object
To fit a functional form: y = ae^(bx).
-
#fit_linear(xs, ys) ⇒ Object
Fits the functional form: a + bx.
-
#fit_power(xs, ys) ⇒ Object
To fit a functional form: y = ax^b.
-
#initialize(name) ⇒ TestCase
constructor
:nodoc:.
-
#io ⇒ Object
Return the output IO object.
-
#io? ⇒ Boolean
Have we hooked up the IO yet?
-
#passed? ⇒ Boolean
Returns true if the test passed.
-
#run(runner) ⇒ Object
Runs the tests, reporting the status to runner.
-
#run_setup_hooks ⇒ Object
:nodoc:.
-
#run_teardown_hooks ⇒ Object
:nodoc:.
-
#setup ⇒ Object
Runs before every test.
-
#sigma(enum, &block) ⇒ Object
Enumerates over enum, mapping block if given, and returns the sum of the results.
-
#teardown ⇒ Object
Runs after every test.
-
#validation_for_fit(msg, threshold) ⇒ Object
Returns a proc that calls the specified fit method and asserts that the error is within a tolerable threshold.
Methods included from Guard
jruby?, mri?, rubinius?, windows?
Methods included from Assertions
#_assertions, #_assertions=, #assert, #assert_block, #assert_empty, #assert_equal, #assert_in_delta, #assert_in_epsilon, #assert_includes, #assert_instance_of, #assert_kind_of, #assert_match, #assert_nil, #assert_operator, #assert_output, #assert_predicate, #assert_raises, #assert_respond_to, #assert_same, #assert_send, #assert_silent, #assert_throws, #capture_io, #diff, diff, diff=, #exception_details, #flunk, #message, #mu_pp, #mu_pp_for_diff, #pass, #refute, #refute_empty, #refute_equal, #refute_in_delta, #refute_in_epsilon, #refute_includes, #refute_instance_of, #refute_kind_of, #refute_match, #refute_nil, #refute_operator, #refute_predicate, #refute_respond_to, #refute_same, #skip
Constructor Details
#initialize(name) ⇒ TestCase
:nodoc:
# File 'lib/minitest/unit.rb', line 1096

def initialize name # :nodoc:
  @__name__ = name
  @__io__   = nil
  @passed   = nil
  @@current = self
end
Instance Attribute Details
#__name__ ⇒ Object (readonly)
:nodoc:
# File 'lib/minitest/unit.rb', line 1044

def __name__
  @__name__
end
Class Method Details
.add_setup_hook(arg = nil, &block) ⇒ Object
Adds a block of code that will be executed before every TestCase is run. Equivalent to setup, but usable multiple times and without re-opening any classes.
All of the setup hooks will run in order after the setup method, if one is defined.
The argument can be any object that responds to #call or a block. That means that this call,
MiniTest::Unit::TestCase.add_setup_hook { puts "foo" }
… is equivalent to:
module MyTestSetup
def self.call
puts "foo"
end
end
MiniTest::Unit::TestCase.add_setup_hook MyTestSetup
The blocks passed to add_setup_hook take an optional parameter that will be the TestCase instance that is executing the block.
# File 'lib/minitest/unit.rb', line 1240

def self.add_setup_hook arg=nil, &block
  hook = arg || block
  @setup_hooks << hook
end
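The optional TestCase parameter works through an arity check at call time; the dispatch rule can be sketched standalone (the hook registry and `FakeTestCase` value below are illustrative stand-ins, not the real MiniTest internals):

```ruby
# Standalone sketch of the hook-dispatch rule: a hook accepting one
# argument receives the running TestCase instance; a zero-argument
# hook is called bare. Mirrors the logic in #run_setup_hooks.
seen  = []
hooks = [
  proc { |tc| seen << "with #{tc}" },  # arity 1: gets the instance
  proc { seen << "bare" }              # arity 0: called without args
]

instance = "FakeTestCase"
hooks.each do |hook|
  if hook.respond_to?(:arity) && hook.arity == 1
    hook.call(instance)
  else
    hook.call
  end
end

p seen  # => ["with FakeTestCase", "bare"]
```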
.add_teardown_hook(arg = nil, &block) ⇒ Object
Adds a block of code that will be executed after every TestCase is run. Equivalent to teardown, but usable multiple times and without re-opening any classes.
All of the teardown hooks will run in reverse order after the teardown method, if one is defined.
The argument can be any object that responds to #call or a block. That means that this call,
MiniTest::Unit::TestCase.add_teardown_hook { puts "foo" }
… is equivalent to:
module MyTestTeardown
def self.call
puts "foo"
end
end
MiniTest::Unit::TestCase.add_teardown_hook MyTestTeardown
The blocks passed to add_teardown_hook take an optional parameter that will be the TestCase instance that is executing the block.
# File 'lib/minitest/unit.rb', line 1289

def self.add_teardown_hook arg=nil, &block
  hook = arg || block
  @teardown_hooks << hook
end
.bench_exp(min, max, base = 10) ⇒ Object
Returns a set of ranges stepped exponentially from min to max by powers of base. Eg:
bench_exp(2, 16, 2) # => [2, 4, 8, 16]
# File 'lib/minitest/benchmark.rb', line 20

def self.bench_exp min, max, base = 10
  min = (Math.log10(min) / Math.log10(base)).to_i
  max = (Math.log10(max) / Math.log10(base)).to_i

  (min..max).map { |m| base ** m }.to_a
end
.bench_linear(min, max, step = 10) ⇒ Object
Returns a set of ranges stepped linearly from min to max by step. Eg:
bench_linear(20, 40, 10) # => [20, 30, 40]
# File 'lib/minitest/benchmark.rb', line 33

def self.bench_linear min, max, step = 10
  (min..max).step(step).to_a
rescue LocalJumpError # 1.8.6
  r = []; (min..max).step(step) { |n| r << n }; r
end
.bench_range ⇒ Object
Specifies the ranges used for benchmarking for that class. Defaults to exponential growth from 1 to 10k by powers of 10. Override if you need different ranges for your benchmarks.
See also: ::bench_exp and ::bench_linear.
# File 'lib/minitest/benchmark.rb', line 61

def self.bench_range
  bench_exp 1, 10_000
end
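To preview what an override would hand to each benchmark method, the stepping logic can be exercised standalone (a local re-definition for illustration, not the real class method):

```ruby
# Local sketch of ::bench_linear's stepping, showing the input sizes
# an overridden bench_range would feed to each bench_ method.
def bench_linear(min, max, step = 10)
  (min..max).step(step).to_a
end

p bench_linear(1_000, 5_000, 1_000)  # => [1000, 2000, 3000, 4000, 5000]
```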
.benchmark_methods ⇒ Object
Returns the benchmark methods (methods that start with bench_) for that class.
# File 'lib/minitest/benchmark.rb', line 43

def self.benchmark_methods # :nodoc:
  public_instance_methods(true).grep(/^bench_/).map { |m| m.to_s }.sort
end
.benchmark_suites ⇒ Object
Returns all test suites that have benchmark methods.
# File 'lib/minitest/benchmark.rb', line 50

def self.benchmark_suites
  TestCase.test_suites.reject { |s| s.benchmark_methods.empty? }
end
.current ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1103

def self.current # :nodoc:
  @@current
end
.i_suck_and_my_tests_are_order_dependent! ⇒ Object
Call this at the top of your tests when you absolutely positively need to have ordered tests. In doing so, you’re admitting that you suck and your tests are weak.
# File 'lib/minitest/unit.rb', line 1133

def self.i_suck_and_my_tests_are_order_dependent!
  class << self
    undef_method :test_order if method_defined? :test_order
    define_method :test_order do :alpha end
  end
end
.inherited(klass) ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1140

def self.inherited klass # :nodoc:
  @@test_suites[klass] = true
  klass.reset_setup_teardown_hooks
  super
end
.reset ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1122

def self.reset # :nodoc:
  @@test_suites = {}
end
.reset_setup_teardown_hooks ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1207

def self.reset_setup_teardown_hooks # :nodoc:
  @setup_hooks = []
  @teardown_hooks = []
end
.setup_hooks ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1245

def self.setup_hooks # :nodoc:
  if superclass.respond_to? :setup_hooks then
    superclass.setup_hooks
  else
    []
  end + @setup_hooks
end
.teardown_hooks ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1294

def self.teardown_hooks # :nodoc:
  if superclass.respond_to? :teardown_hooks then
    superclass.teardown_hooks
  else
    []
  end + @teardown_hooks
end
.test_methods ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1154

def self.test_methods # :nodoc:
  methods = public_instance_methods(true).grep(/^test/).map { |m| m.to_s }

  case self.test_order
  when :random then
    max = methods.size
    methods.sort.sort_by { rand max }
  when :alpha, :sorted then
    methods.sort
  else
    raise "Unknown test_order: #{self.test_order.inspect}"
  end
end
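The :random branch sorts before shuffling, so the order depends only on the global random seed, not on reflection order; a standalone sketch (the method names are hypothetical):

```ruby
# Sorting first removes any dependence on the order reflection
# returns methods; shuffling with rand then makes the sequence a
# pure function of the seed, so a failing order can be replayed.
methods = %w[test_charlie test_alpha test_bravo]
max     = methods.size

srand 42
first  = methods.sort.sort_by { rand max }
srand 42
second = methods.sort.sort_by { rand max }

p first == second             # => true (same seed, same order)
p first.sort == methods.sort  # => true (same methods, reordered)
```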
.test_order ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1146

def self.test_order # :nodoc:
  :random
end
.test_suites ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1150

def self.test_suites # :nodoc:
  @@test_suites.keys.sort_by { |ts| ts.name.to_s }
end
Instance Method Details
#after_setup ⇒ Object
Runs before every test after setup. Use this to refactor test initialization.
# File 'lib/minitest/unit.rb', line 1184

def after_setup; end
#after_teardown ⇒ Object
Runs after every teardown. Use this to refactor test cleanup.
# File 'lib/minitest/unit.rb', line 1205

def after_teardown; end
#assert_performance(validation, &work) ⇒ Object
Runs the given work, gathering the times of each run. Range and times are then passed to a given validation proc. Outputs the benchmark name and times in tab-separated format, making it easy to paste into a spreadsheet for graphing or further analysis.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
validation = proc { |x, y| ... }
assert_performance validation do |n|
@obj.algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 83

def assert_performance validation, &work
  range = self.class.bench_range

  io.print "#{__name__}"

  times = []

  range.each do |x|
    GC.start
    t0 = Time.now
    instance_exec(x, &work)
    t = Time.now - t0

    io.print "\t%9.6f" % t
    times << t
  end
  io.puts

  validation[range, times]
end
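The measurement loop can be sketched in isolation: a GC pass runs before each sample so collection pauses don't pollute the timing (the squaring loop below is a stand-in for the benched work, not part of MiniTest):

```ruby
# Standalone sketch of assert_performance's inner loop: one GC.start
# per input size, then wall-clock timing of the work block.
range = [1_000, 10_000]
times = range.map do |n|
  GC.start                 # avoid a mid-measurement GC pause
  t0 = Time.now
  n.times { |i| i * i }    # stand-in workload
  Time.now - t0
end

p times.size  # => 2 (one non-negative sample per input size)
```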
#assert_performance_constant(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a constant rate (eg, linear slope == 0) within a given threshold. Note: because we’re testing for a slope of 0, R^2 is not a good determining factor for the fit, so the threshold is applied against the slope itself. As such, you probably want to tighten it from the default.
See www.graphpad.com/curvefit/goodness_of_fit.htm for more details.
Fit is calculated by #fit_linear.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
assert_performance_constant 0.9999 do |n|
@obj.algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 127

def assert_performance_constant threshold = 0.99, &work
  validation = proc do |range, times|
    a, b, rr = fit_linear range, times
    assert_in_delta 0, b, 1 - threshold
    [a, b, rr]
  end

  assert_performance validation, &work
end
#assert_performance_exponential(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match an exponential curve within a given error threshold.
Fit is calculated by #fit_exponential.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
assert_performance_exponential 0.9999 do |n|
@obj.algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 153

def assert_performance_exponential threshold = 0.99, &work
  assert_performance validation_for_fit(:exponential, threshold), &work
end
#assert_performance_linear(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered fit to match a straight line within a given error threshold.
Fit is calculated by #fit_linear.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
assert_performance_linear 0.9999 do |n|
@obj.algorithm(n)
end
end
# File 'lib/minitest/benchmark.rb', line 173

def assert_performance_linear threshold = 0.99, &work
  assert_performance validation_for_fit(:linear, threshold), &work
end
#assert_performance_power(threshold = 0.99, &work) ⇒ Object
Runs the given work and asserts that the times gathered curve fit to match a power curve within a given error threshold.
Fit is calculated by #fit_power.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
assert_performance_power 0.9999 do |x|
@obj.algorithm
end
end
# File 'lib/minitest/benchmark.rb', line 193

def assert_performance_power threshold = 0.99, &work
  assert_performance validation_for_fit(:power, threshold), &work
end
#before_setup ⇒ Object
Runs before every setup. Use this to refactor test initialization.
# File 'lib/minitest/unit.rb', line 1189

def before_setup; end
#before_teardown ⇒ Object
Runs after every test before teardown. Use this to refactor test cleanup.
# File 'lib/minitest/unit.rb', line 1200

def before_teardown; end
#fit_error(xys) ⇒ Object
Takes an array of x/y pairs and calculates the general R^2 value.
# File 'lib/minitest/benchmark.rb', line 202

def fit_error xys
  y_bar  = sigma(xys) { |x, y| y } / xys.size.to_f
  ss_tot = sigma(xys) { |x, y| (y - y_bar) ** 2 }
  ss_err = sigma(xys) { |x, y| (yield(x) - y) ** 2 }

  1 - (ss_err / ss_tot)
end
#fit_exponential(xs, ys) ⇒ Object
To fit a functional form: y = ae^(bx).
Takes x and y values and returns [a, b, r^2].
See: mathworld.wolfram.com/LeastSquaresFittingExponential.html
# File 'lib/minitest/benchmark.rb', line 217

def fit_exponential xs, ys
  n     = xs.size
  xys   = xs.zip(ys)
  sxlny = sigma(xys) { |x,y| x * Math.log(y) }
  slny  = sigma(xys) { |x,y| Math.log(y) }
  sx2   = sigma(xys) { |x,y| x * x }
  sx    = sigma xs

  c = n * sx2 - sx ** 2
  a = (slny * sx2 - sx * sxlny) / c
  b = ( n * sxlny - sx * slny ) / c

  return Math.exp(a), b, fit_error(xys) { |x| Math.exp(a + b * x) }
end
#fit_linear(xs, ys) ⇒ Object
Fits the functional form: a + bx.
Takes x and y values and returns [a, b, r^2].
# File 'lib/minitest/benchmark.rb', line 239

def fit_linear xs, ys
  n   = xs.size
  xys = xs.zip(ys)
  sx  = sigma xs
  sy  = sigma ys
  sx2 = sigma(xs)  { |x| x ** 2 }
  sxy = sigma(xys) { |x,y| x * y }

  c = n * sx2 - sx**2
  a = (sy * sx2 - sx * sxy) / c
  b = ( n * sxy - sx * sy ) / c

  return a, b, fit_error(xys) { |x| a + b * x }
end
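On noise-free data the fit recovers the coefficients exactly and R^2 comes out to 1. A self-contained check, using local copies of the sigma, fit_error, and fit_linear formulas shown in this document (with an added float conversion so the demo returns floats):

```ruby
# Self-contained least-squares demo using the same formulas as
# #fit_linear and #fit_error above.
def sigma(enum, &block)
  enum = enum.map(&block) if block
  enum.inject { |sum, n| sum + n }
end

def fit_error(xys)
  y_bar  = sigma(xys) { |_, y| y } / xys.size.to_f
  ss_tot = sigma(xys) { |_, y| (y - y_bar) ** 2 }
  ss_err = sigma(xys) { |x, y| (yield(x) - y) ** 2 }
  1 - (ss_err / ss_tot)
end

def fit_linear(xs, ys)
  n   = xs.size
  xys = xs.zip(ys)
  sx  = sigma xs
  sy  = sigma ys
  sx2 = sigma(xs)  { |x| x ** 2 }
  sxy = sigma(xys) { |x, y| x * y }

  c = (n * sx2 - sx ** 2).to_f  # .to_f added so the demo avoids integer division
  a = (sy * sx2 - sx * sxy) / c
  b = (n * sxy - sx * sy) / c
  return a, b, fit_error(xys) { |x| a + b * x }
end

xs = [1, 2, 3, 4]
ys = xs.map { |x| 2 + 3 * x }   # exact line: y = 2 + 3x
a, b, rr = fit_linear(xs, ys)
p [a, b, rr]  # => [2.0, 3.0, 1.0]
```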
#fit_power(xs, ys) ⇒ Object
To fit a functional form: y = ax^b.
Takes x and y values and returns [a, b, r^2].
# File 'lib/minitest/benchmark.rb', line 261

def fit_power xs, ys
  n       = xs.size
  xys     = xs.zip(ys)
  slnxlny = sigma(xys) { |x, y| Math.log(x) * Math.log(y) }
  slnx    = sigma(xs)  { |x| Math.log(x) }
  slny    = sigma(ys)  { |y| Math.log(y) }
  slnx2   = sigma(xs)  { |x| Math.log(x) ** 2 }

  b = (n * slnxlny - slnx * slny) / (n * slnx2 - slnx ** 2)
  a = (slny - b * slnx) / n

  return Math.exp(a), b, fit_error(xys) { |x| (Math.exp(a) * (x ** b)) }
end
#io ⇒ Object
Returns the output IO object.
# File 'lib/minitest/unit.rb', line 1110

def io
  @__io__ = true
  MiniTest::Unit.output
end
#io? ⇒ Boolean
Have we hooked up the IO yet?
# File 'lib/minitest/unit.rb', line 1118

def io?
  @__io__
end
#passed? ⇒ Boolean
Returns true if the test passed.
# File 'lib/minitest/unit.rb', line 1171

def passed?
  @passed
end
#run(runner) ⇒ Object
Runs the tests, reporting the status to runner.
# File 'lib/minitest/unit.rb', line 1054

def run runner
  trap "INFO" do
    runner.report.each_with_index do |msg, i|
      warn "\n%3d) %s" % [i + 1, msg]
    end
    warn ''
    time = runner.start_time ? Time.now - runner.start_time : 0
    warn "Current Test: %s#%s %.2fs" % [self.class, self.__name__, time]
    runner.status $stderr
  end if SUPPORTS_INFO_SIGNAL

  result = ""
  begin
    @passed = nil
    self.before_setup
    self.setup
    self.after_setup
    self.run_test self.__name__
    result = "." unless io?
    @passed = true
  rescue *PASSTHROUGH_EXCEPTIONS
    raise
  rescue Exception => e
    @passed = false
    result = runner.puke self.class, self.__name__, e
  ensure
    %w{ before_teardown teardown after_teardown }.each do |hook|
      begin
        self.send hook
      rescue *PASSTHROUGH_EXCEPTIONS
        raise
      rescue Exception => e
        result = runner.puke self.class, self.__name__, e
      end
    end
    trap 'INFO', 'DEFAULT' if SUPPORTS_INFO_SIGNAL
  end
  result
end
#run_setup_hooks ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1253

def run_setup_hooks # :nodoc:
  self.class.setup_hooks.each do |hook|
    if hook.respond_to?(:arity) && hook.arity == 1
      hook.call(self)
    else
      hook.call
    end
  end
end
#run_teardown_hooks ⇒ Object
:nodoc:
# File 'lib/minitest/unit.rb', line 1302

def run_teardown_hooks # :nodoc:
  self.class.teardown_hooks.reverse.each do |hook|
    if hook.respond_to?(:arity) && hook.arity == 1
      hook.call(self)
    else
      hook.call
    end
  end
end
#setup ⇒ Object
Runs before every test. Use this to refactor test initialization.
# File 'lib/minitest/unit.rb', line 1178

def setup; end
#sigma(enum, &block) ⇒ Object
Enumerates over enum mapping block if given, returning the sum of the result. Eg:
sigma([1, 2, 3]) # => 1 + 2 + 3 => 6
sigma([1, 2, 3]) { |n| n ** 2 } # => 1 + 4 + 9 => 14
# File 'lib/minitest/benchmark.rb', line 282

def sigma enum, &block
  enum = enum.map(&block) if block
  enum.inject { |sum, n| sum + n }
end
#teardown ⇒ Object
Runs after every test. Use this to refactor test cleanup.
# File 'lib/minitest/unit.rb', line 1194

def teardown; end
#validation_for_fit(msg, threshold) ⇒ Object
Returns a proc that calls the specified fit method and asserts that the error is within a tolerable threshold.
# File 'lib/minitest/benchmark.rb', line 291

def validation_for_fit msg, threshold
  proc do |range, times|
    a, b, rr = send "fit_#{msg}", range, times
    assert_operator rr, :>=, threshold
    [a, b, rr]
  end
end