py.test feature overview

mature command line testing tool

py.test is a command line tool to collect, run and report about automated tests. It runs well on Linux, Windows and OS X, on Python versions 2.4 through 3.1. It is used in many projects, ranging from ones running tens of thousands of tests to ones with a few inlined tests in a command line script. As of version 1.2 you can also generate a no-dependency py.test-equivalent standalone script that you can distribute along with your application.
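
For example, assuming py.test 1.2 or newer, the standalone script is generated with the --genscript option (the target file name is up to you):

py.test --genscript=runtests.py

The generated runtests.py embeds py.test and can be shipped and run without a separate installation.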

extensive easy plugin system

py.test delegates almost all aspects of its operation to plugins. It is surprisingly easy to add command line options or other kinds of add-ons and customizations. This can be done per-project or by distributing a global plugin. One can thus modify or add aspects for purposes such as the following (see the sketch after the list):

  • reporting extensions
  • customizing collection and execution of tests
  • running and managing non-python tests, e.g. driven by data files
  • managing domain-specific test state setup
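
As a minimal sketch (the option name is illustrative), a per-project conftest.py can add a command line option through the standard pytest_addoption hook:

# conftest.py
def pytest_addoption(parser):
    # register a project-specific command line option
    parser.addoption("--runslow", action="store_true",
        help="also run tests marked as slow")

Tests and other hooks can then inspect the option, e.g. via config.option.runslow.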

distributing tests to your CPUs and SSH accounts

Through the use of the separately released pytest-xdist plugin you can seamlessly distribute runs to multiple CPUs or to remote computers through SSH and sockets. This plugin also offers a --looponfailing mode which will continuously re-run only the failing tests in a subprocess.
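
For example, assuming pytest-xdist is installed (the host name is illustrative):

py.test -n 4                              # distribute tests to 4 local CPUs
py.test --dist=each --tx ssh=user@myhost  # run the tests on a remote host via SSH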

supports several testing practices and methods

py.test supports many testing methods conventionally used in the Python community. It runs traditional unittest.py and doctest.py tests, supports xUnit style setup as well as nose-specific setups and test suites, and offers a minimal, no-boilerplate model for configuring and deploying tests written as simple Python functions or methods. It also integrates coverage testing with figleaf as well as JavaScript unit and functional testing.
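
As a brief illustration, the xUnit style setup functions are recognized by name; a minimal sketch:

def setup_module(module):
    # runs once before any test in this module
    module.db = {"connected": True}

def test_db_connected():
    assert db["connected"]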

integrates well with CI systems

py.test can produce JUnitXML style output as well as formatted "resultlog" files that can easily be postprocessed by Continuous Integration systems such as Hudson or Buildbot. It also provides command line options to control test configuration lookup behaviour or to ignore certain tests or directories.
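
For example (the file names are up to you):

py.test --junitxml=result.xml    # write JUnitXML output for CI consumption
py.test --resultlog=result.log   # write a plain-text "resultlog" file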

no-boilerplate test functions with Python

automatic Python test discovery

By default, all Python modules with a test_*.py filename are inspected for tests (a minimal example follows the list):

  • functions with a name beginning with test_
  • classes with a name beginning with Test and their test prefixed methods
  • unittest.TestCase subclasses
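
A minimal test_example.py showing all three discovered shapes (names are illustrative):

# test_example.py
import unittest

def test_addition():                # found: test_ prefixed function
    assert 1 + 1 == 2

class TestNumbers:                  # found: Test prefixed class
    def test_multiplication(self):  # found: test prefixed method
        assert 2 * 3 == 6

class TestLegacy(unittest.TestCase):  # found: unittest.TestCase subclass
    def test_equality(self):
        self.assertEqual(2 * 2, 4)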

parametrizing test functions and functional testing

py.test offers the unique funcargs mechanism for setting up and passing project-specific objects to Python test functions. Test parametrization happens by triggering a call to the same test function with different argument values. Using the funcarg mechanism for fixtures makes your test and setup code more efficient and more readable. This is especially true for functional tests which might depend on command line options and a setup that needs to be shared across a whole test run.
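
A rough sketch of both ideas, using the historical pytest_funcarg__ provider naming and the pytest_generate_tests hook (all names and values are illustrative):

# conftest.py
def pytest_funcarg__myresource(request):
    # construct and return a project-specific object per requesting test
    return {"ready": True}

def pytest_generate_tests(metafunc):
    # run tests accepting a "numiter" argument once per value
    if "numiter" in metafunc.funcargnames:
        for value in [1, 2, 3]:
            metafunc.addcall(funcargs={"numiter": value})

# test_example.py
def test_resource(myresource):
    assert myresource["ready"]

def test_numiter(numiter):
    assert numiter in (1, 2, 3)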

per-test capturing of output, including subprocesses

By default, py.test captures all writes to stdout/stderr. Output from print statements as well as from subprocesses is captured. When a test fails, the associated captured outputs are shown. This allows you to put debugging print statements in your code without being overwhelmed by all the output that might be generated by tests that do not fail.
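
For example, only the second of these tests will have its output displayed, because only it fails:

def test_quiet():
    print("this never shows up in the report")   # test passes, output stays captured

def test_noisy():
    print("shown alongside the failure report")  # test fails, captured output is printed
    assert False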

assert with the assert statement

py.test allows you to use the standard Python assert statement for verifying expectations and values in Python tests. For example, you can write the following in your tests:

assert hasattr(x, 'attribute')

to state that your object has a certain attribute. If this assertion fails you will see the value of x. Intermediate values are computed by executing the assert expression a second time. If you execute code with side effects, e.g. read from a file like this:

assert f.read() != '...'

then you may get a warning from py.test if the assertion first failed and then succeeded on re-evaluation.

asserting expected exceptions

In order to write assertions about exceptions, you can use py.test.raises as a context manager like this:

with py.test.raises(ZeroDivisionError):
    1 / 0

and if you need to have access to the actual exception info you may use:

with py.test.raises(RuntimeError) as excinfo:
    def f():
        f()
    f()

# do checks related to excinfo.type, excinfo.value, excinfo.traceback

If you want to write test code that also works on Python 2.4, you may use two other ways to test for an expected exception:

py.test.raises(ExpectedException, func, *args, **kwargs)
py.test.raises(ExpectedException, "func(*args, **kwargs)")

both of which execute the specified function with args and kwargs and assert that the given ExpectedException is raised. The reporter will provide you with helpful output in case of failure, such as no exception or a wrong exception.
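
As a small usage sketch of the call form (divide is an illustrative helper), py.test.raises returns the exception info when the expected exception occurs:

import py

def divide(a, b):
    return a / b

excinfo = py.test.raises(ZeroDivisionError, divide, 1, 0)
assert excinfo.type is ZeroDivisionError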

information-rich tracebacks, PDB introspection

A lot of care is taken to present useful failure information and in particular nice and concise Python tracebacks. This is especially useful if you need to regularly look at failures from nightly runs, i.e. you are detached from the actual test running session. You can modify traceback printing styles through the command line. Using the --pdb option you can automatically activate the Python debugger (PDB) when a test fails.
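
For example, the traceback style and debugger activation can be selected on the command line:

py.test --tb=short   # print shorter tracebacks
py.test --pdb        # drop into PDB on test failures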

skip or expect-to-fail a test

py.test has a dedicated skipping plugin that allows you to (see the sketch after the list):

  • define "skip" outcomes indicating a platform or dependency mismatch
  • define "xfail" outcomes indicating an "expected failure", with or without running the test
  • apply skip and xfail outcomes at module, class or method level, or even only for certain argument sets of a parametrized function
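
A brief sketch of both markers, using the historical string-condition style of skipif (the conditions and test bodies are illustrative):

import py

@py.test.mark.skipif("sys.platform == 'win32'")
def test_posix_only():
    assert True   # skipped entirely on Windows

@py.test.mark.xfail
def test_known_bug():
    assert 0      # failure is reported as "xfail" instead of an error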

looping on the failing test set

py.test --looponfailing (implemented through the external pytest-xdist plugin) allows you to run a test suite, memorize all failures and then loop over the failing set of tests until they all pass. It will re-run the tests when it detects file changes in your project.