class MiniTest::Unit::TestCase
Subclass TestCase to create your own tests. Typically you'll want a TestCase subclass per implementation class.
Public Class Methods
Returns a set of ranges stepped exponentially from min to max by powers of base. Eg:
bench_exp(2, 16, 2) # => [2, 4, 8, 16]
# File lib/minitest/benchmark.rb, line 20
def self.bench_exp min, max, base = 10
  min = (Math.log10(min) / Math.log10(base)).to_i
  max = (Math.log10(max) / Math.log10(base)).to_i

  (min..max).map { |m| base ** m }.to_a
end
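For example, with the default base of 10 the helper steps by decades:

bench_exp(1, 10_000) # => [1, 10, 100, 1000, 10000]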
Returns a set of ranges stepped linearly from min to max by step. Eg:
bench_linear(20, 40, 10) # => [20, 30, 40]
# File lib/minitest/benchmark.rb, line 33
def self.bench_linear min, max, step = 10
  (min..max).step(step).to_a
rescue LocalJumpError # 1.8.6
  r = []; (min..max).step(step) { |n| r << n }; r
end
Specifies the ranges used for benchmarking for that class. Defaults to exponential growth from 1 to 10k by powers of 10. Override if you need different ranges for your benchmarks.
See also: ::bench_exp and ::bench_linear.
# File lib/minitest/benchmark.rb, line 61
def self.bench_range
  bench_exp 1, 10_000
end
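If your benchmarks need a different range, override bench_range in your benchmark class. A minimal sketch (BenchMyAlgorithm is a hypothetical class):

class BenchMyAlgorithm < MiniTest::Unit::TestCase # hypothetical benchmark class
  def self.bench_range
    bench_linear 1_000, 5_000, 1_000 # => [1000, 2000, 3000, 4000, 5000]
  end
end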
Returns all test suites that have benchmark methods.
# File lib/minitest/benchmark.rb, line 50
def self.benchmark_suites
  TestCase.test_suites.reject { |s| s.benchmark_methods.empty? }
end
Call this at the top of your tests when you absolutely positively need to have ordered tests. In doing so, you're admitting that you suck and your tests are weak.
# File lib/minitest/unit.rb, line 1306
def self.i_suck_and_my_tests_are_order_dependent!
  class << self
    undef_method :test_order if method_defined? :test_order
    define_method :test_order do :alpha end
  end
end
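A minimal sketch of how it is called (TestLegacyWorkflow and its test methods are hypothetical):

class TestLegacyWorkflow < MiniTest::Unit::TestCase # hypothetical test class
  i_suck_and_my_tests_are_order_dependent!          # forces :alpha test order

  def test_step_1_create; end # runs first
  def test_step_2_update; end # runs second
end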
Make diffs for this TestCase use pretty_inspect so that the diff output in assert_equal can show more detail. NOTE: this is much slower than the regular inspect but much more usable for complex objects.
# File lib/minitest/unit.rb, line 1319
def self.make_my_diffs_pretty!
  require 'pp'

  define_method :mu_pp do |o|
    o.pretty_inspect
  end
end
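A minimal sketch of usage (TestComplexStructures and build_config are hypothetical):

class TestComplexStructures < MiniTest::Unit::TestCase # hypothetical test class
  make_my_diffs_pretty!

  def test_deeply_nested_config
    # on failure, each side of the diff is rendered with pretty_inspect
    assert_equal({ :a => { :b => [1, 2, 3] } }, build_config) # build_config is hypothetical
  end
end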
Call this at the top of your tests when you want to run your tests in parallel. In doing so, you're admitting that you rule and your tests are awesome.
# File lib/minitest/unit.rb, line 1332
def self.parallelize_me!
  require "minitest/parallel_each"

  class << self
    undef_method :test_order if method_defined? :test_order
    define_method :test_order do :parallel end
  end
end
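A minimal sketch of usage (TestThreadSafeCode is a hypothetical class whose tests are safe to run concurrently):

class TestThreadSafeCode < MiniTest::Unit::TestCase # hypothetical test class
  parallelize_me! # test methods in this class now run in parallel
end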
Public Instance Methods
Runs the given work, gathering the times of each run. Range and times are then passed to a given validation proc. Outputs the benchmark name and times in tab-separated format, making it easy to paste into a spreadsheet for graphing or further analysis.

Ranges are specified by ::bench_range.

Eg:
def bench_algorithm
  validation = proc { |x, y| ... }

  assert_performance validation do |n|
    @obj.algorithm(n)
  end
end
# File lib/minitest/benchmark.rb, line 83
def assert_performance validation, &work
  range = self.class.bench_range

  io.print "#{__name__}"

  times = []

  range.each do |x|
    GC.start
    t0 = Time.now
    instance_exec(x, &work)
    t = Time.now - t0

    io.print "\t%9.6f" % t
    times << t
  end
  io.puts

  validation[range, times]
end
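A sketch of a custom validation proc that simply caps the slowest run (the 0.5s budget and @obj are assumptions for illustration):

def bench_algorithm_under_budget              # hypothetical benchmark method
  validation = proc do |range, times|
    # assert the slowest of the timed runs stays under an assumed 0.5s budget
    assert_operator times.max, :<=, 0.5, "slowest run exceeded budget"
  end

  assert_performance validation do |n|
    @obj.algorithm(n)                         # @obj assumed to be created in setup
  end
end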
Runs the given work and asserts that the times gathered fit to match a constant rate (eg, linear slope == 0) within a given threshold. Note: because we're testing for a slope of 0, R^2 is not a good determining factor for the fit, so the threshold is applied against the slope itself. As such, you probably want to tighten it from the default.

See www.graphpad.com/curvefit/goodness_of_fit.htm for more details.

Fit is calculated by fit_linear.

Ranges are specified by ::bench_range.

Eg:
def bench_algorithm
  assert_performance_constant 0.9999 do |n|
    @obj.algorithm(n)
  end
end
# File lib/minitest/benchmark.rb, line 127
def assert_performance_constant threshold = 0.99, &work
  validation = proc do |range, times|
    a, b, rr = fit_linear range, times
    assert_in_delta 0, b, 1 - threshold
    [a, b, rr]
  end

  assert_performance validation, &work
end
Runs the given work and asserts that the times gathered fit to match an exponential curve within a given error threshold.

Fit is calculated by fit_exponential.

Ranges are specified by ::bench_range.

Eg:
def bench_algorithm
  assert_performance_exponential 0.9999 do |n|
    @obj.algorithm(n)
  end
end
# File lib/minitest/benchmark.rb, line 153
def assert_performance_exponential threshold = 0.99, &work
  assert_performance validation_for_fit(:exponential, threshold), &work
end
Runs the given work and asserts that the times gathered fit to match a straight line within a given error threshold.

Fit is calculated by fit_linear.

Ranges are specified by ::bench_range.

Eg:
def bench_algorithm
  assert_performance_linear 0.9999 do |n|
    @obj.algorithm(n)
  end
end
# File lib/minitest/benchmark.rb, line 193
def assert_performance_linear threshold = 0.99, &work
  assert_performance validation_for_fit(:linear, threshold), &work
end
Runs the given work and asserts that the times gathered fit to match a logarithmic curve within a given error threshold.

Fit is calculated by fit_logarithmic.

Ranges are specified by ::bench_range.

Eg:
def bench_algorithm
  assert_performance_logarithmic 0.9999 do |n|
    @obj.algorithm(n)
  end
end
# File lib/minitest/benchmark.rb, line 173
def assert_performance_logarithmic threshold = 0.99, &work
  assert_performance validation_for_fit(:logarithmic, threshold), &work
end
Runs the given work and asserts that the times gathered fit to match a power curve within a given error threshold.

Fit is calculated by fit_power.

Ranges are specified by ::bench_range.

Eg:
def bench_algorithm
  assert_performance_power 0.9999 do |n|
    @obj.algorithm(n)
  end
end
# File lib/minitest/benchmark.rb, line 213
def assert_performance_power threshold = 0.99, &work
  assert_performance validation_for_fit(:power, threshold), &work
end
Takes an array of x/y pairs and calculates the general R^2 value.
See: en.wikipedia.org/wiki/Coefficient_of_determination
# File lib/minitest/benchmark.rb, line 222
def fit_error xys
  y_bar  = sigma(xys) { |x, y| y } / xys.size.to_f
  ss_tot = sigma(xys) { |x, y| (y - y_bar) ** 2 }
  ss_err = sigma(xys) { |x, y| (yield(x) - y) ** 2 }

  1 - (ss_err / ss_tot)
end
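For instance, points that lie exactly on the proposed curve give an R^2 of 1.0 (fit_error is an instance method, so call it from inside a test):

fit_error([[1, 2], [2, 4], [3, 6]]) { |x| 2 * x } # => 1.0, a perfect fit against y = 2x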
To fit a functional form: y = ae^(bx).
Takes x and y values and returns [a, b, r^2].
See: mathworld.wolfram.com/LeastSquaresFittingExponential.html
# File lib/minitest/benchmark.rb, line 237
def fit_exponential xs, ys
  n     = xs.size
  xys   = xs.zip(ys)
  sxlny = sigma(xys) { |x, y| x * Math.log(y) }
  slny  = sigma(xys) { |x, y| Math.log(y) }
  sx2   = sigma(xys) { |x, y| x * x }
  sx    = sigma xs

  c = n * sx2 - sx ** 2
  a = (slny * sx2 - sx * sxlny) / c
  b = (n * sxlny - sx * slny) / c

  return Math.exp(a), b, fit_error(xys) { |x| Math.exp(a + b * x) }
end
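A small illustration with data generated from y = 2e^(0.5x) (values shown are approximate):

xs = [1, 2, 3, 4]
ys = xs.map { |x| 2 * Math.exp(0.5 * x) }
fit_exponential(xs, ys) # => approximately [2.0, 0.5, 1.0]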
Fits the functional form: a + bx.
Takes x and y values and returns [a, b, r^2].
See: mathworld.wolfram.com/LeastSquaresFitting.html
# File lib/minitest/benchmark.rb, line 282
def fit_linear xs, ys
  n   = xs.size
  xys = xs.zip(ys)
  sx  = sigma xs
  sy  = sigma ys
  sx2 = sigma(xs)  { |x| x ** 2 }
  sxy = sigma(xys) { |x, y| x * y }

  c = n * sx2 - sx ** 2
  a = (sy * sx2 - sx * sxy) / c
  b = (n * sxy - sx * sy) / c

  return a, b, fit_error(xys) { |x| a + b * x }
end
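A small illustration with data generated from y = 1 + 2x (values shown are approximate):

xs = [1, 2, 3, 4]
ys = xs.map { |x| 1.0 + 2 * x }
fit_linear(xs, ys) # => approximately [1.0, 2.0, 1.0]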
To fit a functional form: y = a + b*ln(x).
Takes x and y values and returns [a, b, r^2].
See: mathworld.wolfram.com/LeastSquaresFittingLogarithmic.html
# File lib/minitest/benchmark.rb, line 259
def fit_logarithmic xs, ys
  n     = xs.size
  xys   = xs.zip(ys)
  slnx2 = sigma(xys) { |x, y| Math.log(x) ** 2 }
  slnx  = sigma(xys) { |x, y| Math.log(x) }
  sylnx = sigma(xys) { |x, y| y * Math.log(x) }
  sy    = sigma(xys) { |x, y| y }

  c = n * slnx2 - slnx ** 2
  b = (n * sylnx - sy * slnx) / c
  a = (sy - b * slnx) / n

  return a, b, fit_error(xys) { |x| a + b * Math.log(x) }
end
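A small illustration with data generated from y = 1 + 2*ln(x) (values shown are approximate):

xs = [1, 2, 3, 4]
ys = xs.map { |x| 1 + 2 * Math.log(x) }
fit_logarithmic(xs, ys) # => approximately [1.0, 2.0, 1.0]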
To fit a functional form: y = ax^b.
Takes x and y values and returns [a, b, r^2].
See: mathworld.wolfram.com/LeastSquaresFittingPowerLaw.html
# File lib/minitest/benchmark.rb, line 304
def fit_power xs, ys
  n       = xs.size
  xys     = xs.zip(ys)
  slnxlny = sigma(xys) { |x, y| Math.log(x) * Math.log(y) }
  slnx    = sigma(xs)  { |x| Math.log(x) }
  slny    = sigma(ys)  { |y| Math.log(y) }
  slnx2   = sigma(xs)  { |x| Math.log(x) ** 2 }

  b = (n * slnxlny - slnx * slny) / (n * slnx2 - slnx ** 2)
  a = (slny - b * slnx) / n

  return Math.exp(a), b, fit_error(xys) { |x| Math.exp(a) * (x ** b) }
end
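A small illustration with data generated from y = 3x^2 (values shown are approximate):

xs = [1, 2, 3, 4]
ys = xs.map { |x| 3.0 * x ** 2 }
fit_power(xs, ys) # => approximately [3.0, 2.0, 1.0]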
Returns the output IO object.
# File lib/minitest/unit.rb, line 1283
def io
  @__io__ = true
  MiniTest::Unit.output
end
Have we hooked up the IO yet?
# File lib/minitest/unit.rb, line 1291
def io?
  @__io__
end
Returns true if the test passed.
# File lib/minitest/unit.rb, line 1374
def passed?
  @passed
end
Runs the tests, reporting the status to runner.
# File lib/minitest/unit.rb, line 1219
def run runner
  trap "INFO" do
    runner.report.each_with_index do |msg, i|
      warn "\n%3d) %s" % [i + 1, msg]
    end
    warn ''
    time = runner.start_time ? Time.now - runner.start_time : 0
    warn "Current Test: %s#%s %.2fs" % [self.class, self.__name__, time]
    runner.status $stderr
  end if SUPPORTS_INFO_SIGNAL

  start_time = Time.now

  result = ""
  begin
    @passed = nil
    self.before_setup
    self.setup
    self.after_setup
    self.run_test self.__name__

    result = "." unless io?
    time = Time.now - start_time
    runner.record self.class, self.__name__, self._assertions, time, nil
    @passed = true
  rescue *PASSTHROUGH_EXCEPTIONS
    raise
  rescue Exception => e
    @passed = Skip === e
    time = Time.now - start_time
    runner.record self.class, self.__name__, self._assertions, time, e
    result = runner.puke self.class, self.__name__, e
  ensure
    %w{ before_teardown teardown after_teardown }.each do |hook|
      begin
        self.send hook
      rescue *PASSTHROUGH_EXCEPTIONS
        raise
      rescue Exception => e
        @passed = false
        runner.record self.class, self.__name__, self._assertions, time, e
        result = runner.puke self.class, self.__name__, e
      end
    end
    trap 'INFO', 'DEFAULT' if SUPPORTS_INFO_SIGNAL
  end
  result
end
Runs before every test. Use this to set up before each test run.
# File lib/minitest/unit.rb, line 1382
def setup; end
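A typical override (MyAlgorithm is a hypothetical class under test):

def setup
  @obj = MyAlgorithm.new # hypothetical object exercised by the tests below
end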
Enumerates over enum, mapping block if given, returning the sum of the result. Eg:
sigma([1, 2, 3])                # => 1 + 2 + 3 => 6
sigma([1, 2, 3]) { |n| n ** 2 } # => 1 + 4 + 9 => 14
# File lib/minitest/benchmark.rb, line 325
def sigma enum, &block
  enum = enum.map(&block) if block
  enum.inject { |sum, n| sum + n }
end
Runs after every test. Use this to clean up after each test run.
# File lib/minitest/unit.rb, line 1388
def teardown; end
Returns a proc that calls the specified fit method and asserts that the error is within a tolerable threshold.
# File lib/minitest/benchmark.rb, line 334
def validation_for_fit msg, threshold
  proc do |range, times|
    a, b, rr = send "fit_#{msg}", range, times
    assert_operator rr, :>=, threshold
    [a, b, rr]
  end
end