Pytest API and builtin fixtures¶
This is a list of pytest.* API functions and fixtures.
For information on plugin hooks and objects, see Writing plugins.
For information on the pytest.mark mechanism, see Marking test functions with attributes.
For the objects below, you can also interactively ask for help, e.g. by typing something like the following at the Python interactive prompt:
import pytest
help(pytest)
Invoking pytest interactively¶
main(args=None, plugins=None)[source]¶
Return exit code, after performing an in-process test run.
Parameters:
- args – list of command line arguments.
- plugins – list of plugin objects to be auto-registered during initialization.
More examples at Calling pytest from Python code
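For instance, a minimal in-process run might look like this (a sketch; the test path is illustrative):
import pytest

# run an in-process test session; an exit code of 0 means all tests passed
exit_code = pytest.main(["-q", "tests/test_example.py"])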
Helpers for assertions about Exceptions/Warnings¶
raises(expected_exception, *args, **kwargs)[source]¶
Assert that a code block/function call raises expected_exception and raise a failure exception otherwise.
This helper produces an ExceptionInfo() object (see below).
If using Python 2.5 or above, you may use this function as a context manager:
>>> with raises(ZeroDivisionError):
...     1/0
Changed in version 2.10.
In the context manager form you may use the keyword argument message to specify a custom failure message:
>>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"):
...     pass
Failed: Expecting ZeroDivisionError
Note
When using pytest.raises as a context manager, it's worthwhile to note that normal context manager rules apply and that the exception raised must be the final line in the scope of the context manager. Lines of code after that, within the scope of the context manager, will not be executed. For example:
>>> with raises(OSError) as exc_info:
...     assert 1 == 1  # this will execute as expected
...     raise OSError(errno.EEXIST, 'directory exists')
...     assert exc_info.value.errno == errno.EEXIST  # this will not execute
Instead, the following approach must be taken (note the difference in scope):
>>> with raises(OSError) as exc_info:
...     assert 1 == 1  # this will execute as expected
...     raise OSError(errno.EEXIST, 'directory exists')
>>> assert exc_info.value.errno == errno.EEXIST  # this will now execute
Or you can specify a callable by passing a to-be-called lambda:
>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>
Or you can specify an arbitrary callable with arguments:
>>> def f(x): return 1/x
...
>>> raises(ZeroDivisionError, f, 0)
<ExceptionInfo ...>
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>
A third possibility is to use a string to be executed:
>>> raises(ZeroDivisionError, "f(0)")
<ExceptionInfo ...>
class ExceptionInfo(tup=None, exprinfo=None)[source]¶
Wraps sys.exc_info() objects and offers help for navigating the traceback.
type = None¶
The exception class.
value = None¶
The exception instance.
tb = None¶
The exception raw traceback.
typename = None¶
The exception type name.
traceback = None¶
The exception traceback (_pytest._code.Traceback instance).
exconly(tryshort=False)[source]¶
Return the exception as a string.
When 'tryshort' resolves to True, and the exception is a _pytest._code._AssertionError, only the actual exception part of the exception representation is returned (so 'AssertionError: ' is removed from the beginning).
getrepr(showlocals=False, style='long', abspath=False, tbfilter=True, funcargs=False)[source]¶
Return a str()able representation of this exception info.
- showlocals: show locals per traceback entry
- style: long|short|no|native traceback style
- tbfilter: hide entries where __tracebackhide__ is true
In case of style==native, tbfilter and showlocals are ignored.
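As a sketch of how the pieces above fit together in a test (the test body is illustrative):
import pytest

def test_zero_division():
    with pytest.raises(ZeroDivisionError) as exc_info:
        1 / 0
    # the ExceptionInfo object is populated once the block exits
    assert exc_info.type is ZeroDivisionError
    assert "ZeroDivisionError" in exc_info.exconly()
    print(exc_info.getrepr(style="short"))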
Note
Similar to caught exception objects in Python, explicitly clearing local references to returned ExceptionInfo objects can help the Python interpreter speed up its garbage collection.
Clearing those references breaks a reference cycle (ExceptionInfo -> caught exception -> frame stack raising the exception -> current frame stack -> local variables -> ExceptionInfo) which makes Python keep all objects referenced from that cycle (including all local variables in the current frame) alive until the next cyclic garbage collection run. See the official Python try statement documentation for more detailed information.
Examples at Assertions about expected exceptions.
deprecated_call(func=None, *args, **kwargs)[source]¶
Assert that calling func(*args, **kwargs) triggers a DeprecationWarning or PendingDeprecationWarning.
This function can be used as a context manager:
>>> with deprecated_call():
...     myobject.deprecated_method()
Note: we cannot use WarningsRecorder here because it is still subject to the mechanism that prevents warnings of the same type from being triggered twice for the same module. See #1190.
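A self-contained sketch of the context-manager form, using a hypothetical deprecated function defined inline:
import warnings
import pytest

def api_call_v1():  # hypothetical function that issues a deprecation warning
    warnings.warn("use api_call_v2 instead", DeprecationWarning)

def test_deprecation():
    with pytest.deprecated_call():
        api_call_v1()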
Comparing floating point numbers¶
class approx(expected, rel=None, abs=None)[source]¶
Assert that two numbers (or two sets of numbers) are equal to each other within some tolerance.
Due to the intricacies of floating-point arithmetic, numbers that we would intuitively expect to be equal are not always so:
>>> 0.1 + 0.2 == 0.3
False
This problem is commonly encountered when writing tests, e.g. when making sure that floating-point values are what you expect them to be. One way to deal with this problem is to assert that two floating-point numbers are equal to within some appropriate tolerance:
>>> abs((0.1 + 0.2) - 0.3) < 1e-6
True
However, comparisons like this are tedious to write and difficult to understand. Furthermore, absolute comparisons like the one above are usually discouraged because there's no tolerance that works well for all situations. 1e-6 is good for numbers around 1, but too small for very big numbers and too big for very small ones. It's better to express the tolerance as a fraction of the expected value, but relative comparisons like that are even more difficult to write correctly and concisely.
The approx class performs floating-point comparisons using a syntax that's as intuitive as possible:
>>> from pytest import approx
>>> 0.1 + 0.2 == approx(0.3)
True
The same syntax also works on sequences of numbers:
>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True
By default, approx considers numbers within a relative tolerance of 1e-6 (i.e. one part in a million) of its expected value to be equal. This treatment would lead to surprising results if the expected value was 0.0, because nothing but 0.0 itself is relatively close to 0.0. To handle this case less surprisingly, approx also considers numbers within an absolute tolerance of 1e-12 of its expected value to be equal. Infinite numbers are another special case. They are only considered equal to themselves, regardless of the relative tolerance. Both the relative and absolute tolerances can be changed by passing arguments to the approx constructor:
>>> 1.0001 == approx(1)
False
>>> 1.0001 == approx(1, rel=1e-3)
True
>>> 1.0001 == approx(1, abs=1e-3)
True
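For instance, with the default tolerances just described:
>>> 1e-13 == approx(0.0)
True
>>> 0.001 == approx(0.0)
False
>>> 0.001 == approx(0.0, abs=1e-2)
True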
If you specify abs but not rel, the comparison will not consider the relative tolerance at all. In other words, two numbers that are within the default relative tolerance of 1e-6 will still be considered unequal if they exceed the specified absolute tolerance. If you specify both abs and rel, the numbers will be considered equal if either tolerance is met:
>>> 1 + 1e-8 == approx(1)
True
>>> 1 + 1e-8 == approx(1, abs=1e-12)
False
>>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12)
True
If you're thinking about using approx, then you might want to know how it compares to other good ways of comparing floating-point numbers. All of these algorithms are based on relative and absolute tolerances and should agree for the most part, but they do have meaningful differences:
- math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0): True if the relative tolerance is met w.r.t. either a or b or if the absolute tolerance is met. Because the relative tolerance is calculated w.r.t. both a and b, this test is symmetric (i.e. neither a nor b is a "reference value"). You have to specify an absolute tolerance if you want to compare to 0.0 because there is no tolerance by default. Only available in python>=3.5. More information...
- numpy.isclose(a, b, rtol=1e-5, atol=1e-8): True if the difference between a and b is less than the sum of the relative tolerance w.r.t. b and the absolute tolerance. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. Support for comparing sequences is provided by numpy.allclose. More information...
- unittest.TestCase.assertAlmostEqual(a, b): True if a and b are within an absolute tolerance of 1e-7. No relative tolerance is considered and the absolute tolerance cannot be changed, so this function is not appropriate for very large or very small numbers. Also, it's only available in subclasses of unittest.TestCase and it's ugly because it doesn't follow PEP8. More information...
- a == pytest.approx(b, rel=1e-6, abs=1e-12): True if the relative tolerance is met w.r.t. b or if the absolute tolerance is met. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. In the special case that you explicitly specify an absolute tolerance but not a relative tolerance, only the absolute tolerance is considered.
Raising a specific test outcome¶
You can use the following functions in your test, fixture or setup functions to force a certain test outcome. Note that most often you can instead use declarative marks; see Skip and xfail: dealing with tests that can not succeed.
fail(msg='', pytrace=True)[source]¶
Explicitly fail a currently-executing test with the given message.
Parameters: pytrace – if False, the msg represents the full failure information and no Python traceback will be reported.
skip(msg='')[source]¶
Skip an executing test with the given message.
Note: it's usually better to use the pytest.mark.skipif marker to declare a test to be skipped under certain conditions like mismatching platforms or dependencies. See the pytest_skipping plugin for details.
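A minimal sketch of both imperative outcomes (the conditions are illustrative):
import sys
import pytest

def test_posix_only():
    if sys.platform.startswith("win"):
        pytest.skip("requires a POSIX platform")
    assert True

def test_explicit_failure():
    result = {"status": "error"}  # stand-in for a real setup result
    if result["status"] == "error":
        pytest.fail("setup reported an error", pytrace=False)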
Fixtures and requests¶
To mark a fixture function, use the pytest.fixture decorator.
Tutorial at pytest fixtures: explicit, modular, scalable.
The request object can be used from fixture functions to access the requesting test context.
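A minimal sketch of a marked fixture that uses the request object for parametrization (the backend names are illustrative):
import pytest

@pytest.fixture(params=["sqlite", "postgres"])
def backend(request):
    # request.param holds the current parameter for this fixture invocation
    return request.param

def test_backend_name(backend):
    assert backend in ("sqlite", "postgres")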
Builtin fixtures/function arguments¶
You can ask for available builtin or project-custom fixtures by typing:
$ pytest -q --fixtures
cache
Return a cache object that can persist state between testing sessions.
cache.get(key, default)
cache.set(key, value)
Keys must be a ``/`` separated value, where the first part is usually the
name of your plugin or application to avoid clashes with other cache users.
Values can be any object handled by the json stdlib module.
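A sketch of persisting an expensive result across test sessions (the key and value are illustrative):
def test_expensive_result(cache):
    value = cache.get("myapp/answer", None)
    if value is None:
        value = 42  # stand-in for an expensive computation
        cache.set("myapp/answer", value)
    assert value == 42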
capsys
Enable capturing of writes to sys.stdout/sys.stderr and make
captured output available via ``capsys.readouterr()`` method calls
which return a ``(out, err)`` tuple.
capfd
Enable capturing of writes to file descriptors 1 and 2 and make
captured output available via ``capfd.readouterr()`` method calls
which return a ``(out, err)`` tuple.
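For example, a test asserting on printed output might look like this (the same pattern applies to capfd):
def test_greeting(capsys):
    print("hello")
    out, err = capsys.readouterr()
    assert out == "hello\n"
    assert err == ""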
doctest_namespace
Inject names into the doctest namespace.
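A sketch of injecting a name from a conftest.py fixture so doctests can use it without importing (assuming numpy is available):
# content of conftest.py
import numpy
import pytest

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    # doctests collected in this project can now refer to "np" directly
    doctest_namespace["np"] = numpy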
pytestconfig
The pytest config object with access to command line opts.
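For instance, a test can read a command line option through it:
def test_verbose_mode(pytestconfig):
    # getoption looks up a command line option by name
    if pytestconfig.getoption("verbose") > 0:
        print("running in verbose mode")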
record_xml_property
Add extra xml properties to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.
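A minimal usage sketch (the property name and value are illustrative):
def test_function(record_xml_property):
    record_xml_property("example_key", 1)
    assert True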
monkeypatch
The returned ``monkeypatch`` fixture provides these
helper methods to modify objects, dictionaries or os.environ:
monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
monkeypatch.delitem(obj, name, raising=True)
monkeypatch.setenv(name, value, prepend=False)
monkeypatch.delenv(name, raising=True)
monkeypatch.syspath_prepend(path)
monkeypatch.chdir(path)
All modifications will be undone after the requesting
test function or fixture has finished. The ``raising``
parameter determines if a KeyError or AttributeError
will be raised if the set/deletion operation has no target.
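A sketch combining two of these helpers (the environment variable name is illustrative):
import os

def test_with_patched_env(monkeypatch):
    monkeypatch.setenv("APP_MODE", "testing")
    assert os.environ["APP_MODE"] == "testing"
    # the modification is undone automatically after the test finishes

def test_with_patched_attr(monkeypatch):
    monkeypatch.setattr(os.path, "exists", lambda path: True)
    assert os.path.exists("/definitely/not/there")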
recwarn
Return a WarningsRecorder instance that provides these methods:
* ``pop(category=None)``: return last warning matching the category.
* ``clear()``: clear list of warnings
See http://docs.python.org/library/warnings.html for information
on warning categories.
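For example:
import warnings

def test_warning_recorded(recwarn):
    warnings.warn("deprecated", DeprecationWarning)
    w = recwarn.pop(DeprecationWarning)
    assert issubclass(w.category, DeprecationWarning)
    assert "deprecated" in str(w.message)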
tmpdir_factory
Return a TempdirFactory instance for the test session.
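A sketch of a session-scoped fixture that creates a directory shared by several tests (the fixture name is illustrative):
import pytest

@pytest.fixture(scope="session")
def shared_datadir(tmpdir_factory):
    # mktemp creates a fresh, numbered sub directory of the base temp dir
    return tmpdir_factory.mktemp("data")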
tmpdir
Return a temporary directory path object
which is unique to each test function invocation,
created as a sub directory of the base temporary
directory. The returned object is a py.path.local
path object.
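For example:
def test_create_file(tmpdir):
    p = tmpdir.join("hello.txt")
    p.write("content")
    assert p.read() == "content"
    assert len(tmpdir.listdir()) == 1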