# -*- rd -*-
README
Name
TestUnitExt
What’s this?
TestUnitExt extends the standard Test::Unit library and provides some useful features:
* Emacs-friendly backtrace format.
* runs tests depending on their priority.
* supports attributes for each test.
* always shows test results even if a test run is interrupted.
* colorized output.
* outputs a diff between expected and actual values.
* reports test results in XML format.
* adds pending/notification methods.
Author
Kouhei Sutou <[email protected]>
Licence
Ruby’s.
Reference manual
((<URL:test-unit-ext.rubyforge.org/doc/>))
Dependency libraries
None
Usage
require 'test-unit-ext'
Reference
Attributes
You can add attributes to your tests to get more useful information on failure. For example, you can add a Bug ID like the following:
  class MyTest < Test::Unit::TestCase
    bug 123
    def test_invalid_input
      assert_equal("OK", input)
    end
  end
In the above example, the test_invalid_input test has an attribute indicating that the test is for Bug #123.
You can also write like the following:
  class MyTest < Test::Unit::TestCase
    attribute :bug, 123
    def test_invalid_input
      assert_equal("OK", input)
    end
  end
That is, the bug method is just a convenience method. You can add arbitrary attributes to your tests by using the attribute method.
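The "attribute applies to the next defined test" behavior can be pictured with Ruby's method_added hook. The following is a minimal plain-Ruby sketch, not TestUnitExt's actual implementation; the class name AttributedTestCase and its internals are illustrative assumptions:

```ruby
# Illustrative sketch (NOT TestUnitExt's real code): pending attributes
# are buffered at the class level and attached to the next test method
# defined, via the method_added hook.
class AttributedTestCase
  class << self
    # Buffer an attribute; it will apply to the next defined method.
    def attribute(name, value)
      @pending_attributes ||= {}
      @pending_attributes[name] = value
    end

    # Convenience wrapper, analogous to the bug method described above.
    def bug(id)
      attribute(:bug, id)
    end

    # All recorded attributes, keyed by test method name.
    def attributes
      @attributes ||= {}
    end

    # Hook called by Ruby whenever an instance method is defined.
    def method_added(method_name)
      super
      if @pending_attributes && !@pending_attributes.empty?
        attributes[method_name] = @pending_attributes
        @pending_attributes = {}
      end
    end
  end
end

class MyTest < AttributedTestCase
  bug 123
  def test_invalid_input
  end
end

p MyTest.attributes[:test_invalid_input]
```

Here the attribute buffered by bug 123 is consumed when test_invalid_input is defined, so later tests are unaffected.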
Priority
When you have many tests and they take a long time to run, it becomes hard to keep a tight test-and-develop cycle. To reduce this problem, you can run only a subset of all tests and shorten the time each run takes. The remaining question is which tests to select for each run.
In TestUnitExt, each test has a priority, and the tests to run are selected probabilistically. The --priority command line option enables this feature.
A test with a higher priority runs with a higher probability. Each run does not execute all tests, and the set of executed tests changes from run to run, so all tests will have been run after several test-and-develop cycles.
Tests that did not succeed in the previous run are run in the current run regardless of their priority. This means important tests are run without any extra effort from you, so you can keep your test-and-develop cycle lightweight without additional work.
Here is a sample priority specification. A priority specification affects the tests that follow it, in the same way as public and private:
  class MyTest < Test::Unit::TestCase
    priority :must
    def test_must
      # will always be run
    end

    def test_must2
      # will always be run too
    end

    priority :important
    def test_important
      # will be run in most cases
    end

    priority :high
    def test_high
      # will be run with high probability
    end

    priority :normal
    def test_normal
      # will be run with fifty-fifty probability
      # (tests without a priority specification have normal priority)
    end

    priority :low
    def test_low
      # will be run occasionally
    end

    priority :never
    def test_never
      # will never be run
    end
  end
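The probabilistic selection described above can be sketched in plain Ruby. The probability table below is an illustrative assumption for demonstration, not TestUnitExt's actual ratios:

```ruby
# Illustrative sketch of priority-based probabilistic test selection.
# The probabilities are assumed values, not TestUnitExt's real ones.
RUN_PROBABILITY = {
  :must      => 1.0,   # always run
  :important => 0.9,
  :high      => 0.7,
  :normal    => 0.5,   # default priority
  :low       => 0.25,
  :never     => 0.0,   # never run
}

# Decide whether a test with the given priority runs in this cycle.
def run_test?(priority, random: Random.new)
  random.rand < RUN_PROBABILITY.fetch(priority, RUN_PROBABILITY[:normal])
end

tests = {
  :test_must   => :must,
  :test_high   => :high,
  :test_normal => :normal,
  :test_never  => :never,
}

selected = tests.select { |name, priority| run_test?(priority) }.keys
p selected  # selection varies from run to run
```

Because rand returns a value in [0, 1), a probability of 1.0 always selects the test and 0.0 never does, matching the :must and :never behavior.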
Pending
You may write a test for a function that is not implemented yet. In that case, the test will fail. The failure itself is correct, but what it really means is "the function is not implemented yet". To state that intent, the pend method is provided.
Please state your intent in your test code by using this method:
  class MyTest < Test::Unit::TestCase
    def test_minor_function
      pend("Should implement")
      assert_equal("Good!", MyModule.minor_function)
    end
  end
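One way to imagine how pend works: it raises a dedicated exception, which the runner catches and reports separately from ordinary failures, so the assertions after pend never execute. This is a minimal plain-Ruby sketch under that assumption; PendedError and run_test are illustrative names, not TestUnitExt's real ones:

```ruby
# Illustrative sketch (NOT TestUnitExt's real code): pend raises a
# dedicated exception so a runner can report "pended" separately
# from failures and errors.
class PendedError < StandardError; end

def pend(message)
  raise PendedError, message
end

# A toy runner: runs a test block and classifies the outcome.
def run_test
  yield
  :passed
rescue PendedError => e
  puts "Pended: #{e.message}"
  :pended
end

result = run_test do
  pend("Should implement")
  # assertions placed after pend are never reached
end
p result  # :pended
```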
Notification
You may want to leave a message in a test. For example: "this test is omitted because the XXX module is not found in this environment."
Such messages could be displayed with puts, but that would clutter the test result output. To prevent this, the notify method is provided. notify does not display the message at the point of the call; instead, the message is shown in the test result summary, alongside "failure", "error" and so on. By using notify, you can leave such messages without breaking the test result output.
  class MyTest < Test::Unit::TestCase
    def test_with_other_module
      unless MyModule.have_XXX?
        notify("XXX module isn't found. skip this test.")
        return
      end
      assert_equal("XXX Module!!!", MyModule.use_XXX)
    end
  end
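The deferred display can be pictured as a queue: messages are collected during the run and only printed in the final summary. A plain-Ruby sketch of that idea (NotifyingRunner is an illustrative name, not TestUnitExt's real class):

```ruby
# Illustrative sketch (NOT TestUnitExt's real code): notify queues
# messages so the test output stream stays clean; the runner prints
# them in the final summary.
class NotifyingRunner
  def initialize
    @notifications = []
  end

  def notify(message)
    @notifications << message  # deferred: nothing is printed here
  end

  def summary
    lines = ["Finished."]
    @notifications.each_with_index do |message, i|
      lines << "Notification #{i + 1}: #{message}"
    end
    lines.join("\n")
  end
end

runner = NotifyingRunner.new
runner.notify("XXX module isn't found. skip this test.")
puts runner.summary
```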
XML report
Test results can be reported in XML format if the --xml-report option is specified. The reported XML has the following structure:
  <report>
    <result>
      <test-case>
        <name>TEST CASE NAME</name>
        <description>DESCRIPTION OF TEST CASE (if exists)</description>
      </test-case>
      <test>
        <name>TEST NAME</name>
        <description>DESCRIPTION OF TEST (if exists)</description>
        <option><!-- ATTRIBUTE INFORMATION (if exists) -->
          <name>ATTRIBUTE NAME (e.g.: bug)</name>
          <value>ATTRIBUTE VALUE (e.g.: 1234)</value>
        </option>
        <option>
          ...
        </option>
      </test>
      <status>TEST RESULT ([success|failure|error|pending|notification])</status>
      <detail>DETAIL OF TEST RESULT (if exists)</detail>
      <backtrace><!-- BACKTRACE (if exists) -->
        <entry>
          <file>FILE NAME</file>
          <line>LINE</line>
          <info>ADDITIONAL INFORMATION</info>
        </entry>
        <entry>
          ...
        </entry>
      </backtrace>
      <elapsed>ELAPSED TIME (e.g.: 0.000010)</elapsed>
    </result>
    <result>
      ...
    </result>
    ...
  </report>
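A document with this shape can be emitted by hand with plain Ruby string building. The sketch below covers only a few of the elements above, with sample field values; it is not TestUnitExt's actual reporter:

```ruby
# Illustrative sketch (NOT TestUnitExt's reporter): build a minimal
# report with one result entry, escaping XML special characters.
require "cgi"

def result_xml(test_case:, test:, status:, elapsed:)
  e = ->(text) { CGI.escapeHTML(text.to_s) }  # escape &, <, >, ", '
  <<~XML
    <report>
      <result>
        <test-case>
          <name>#{e.(test_case)}</name>
        </test-case>
        <test>
          <name>#{e.(test)}</name>
        </test>
        <status>#{e.(status)}</status>
        <elapsed>#{e.(elapsed)}</elapsed>
      </result>
    </report>
  XML
end

puts result_xml(test_case: "MyTest",
                test: "test_invalid_input",
                status: "success",
                elapsed: "0.000010")
```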
Thanks
* ...