
Commit 16341fb

commit-doc
1 parent b1ccf16 commit 16341fb

5 files changed: +209 -12 lines

doc/en/new-docs/user.rst

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ Writing tests
 * :ref:`Test directory structure <testdirectory>`
 * :ref:`Asserting about exceptions with pytest.raises <pytestraises>`
 * :ref:`Using pytest.mark to group tests <pytestmarkbasic>`
-* :ref:`Skip and skipif marks <skipping>`
+* :ref:`Skip and skipif marks <skippingbasic>`
 * :ref:`xfail mark <xfail>`
 * :ref:`parametrize basics <parametrizebasic>`
 * :ref:`Fixture basics <fixturebasic>`
Lines changed: 30 additions & 1 deletion
@@ -1,6 +1,35 @@
+.. _index: exceptions, pytest.raises
 .. _`pytestraises`:
 
 Asserting about exceptions with pytest.raises
 =============================================
 
-TODO
+An important aspect of unit testing is checking boundary/edge cases, and behaviour in the case of unexpected input. If you have a function that in some cases raises an exception, you can confirm this works as expected using the ``pytest.raises`` context manager. Example::
+
+    import pytest
+
+    def test_zero_division():
+        with pytest.raises(ZeroDivisionError):
+            1 / 0
+
+If an exception is not raised in a ``pytest.raises`` block, the test will fail. Example::
+
+    import pytest
+
+    def test_zero_division():
+        with pytest.raises(ZeroDivisionError):
+            2 / 1
+
+Running this will give the result::
+
+    _______________________ test_zero_division ____________________________
+
+        def test_zero_division():
+            with pytest.raises(ZeroDivisionError):
+    >           2 / 1
+    E           Failed: DID NOT RAISE
+
+A related concept is that of marking a test as "expected to fail", or xfail (TODO-User-xfail).
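
Beyond what this commit adds, one companion sketch may be useful: the ``pytest.raises`` context manager also yields an ``ExceptionInfo`` object whose ``value`` attribute holds the raised exception, so a test can assert on the message as well. The test name below is illustrative::

    import pytest

    def test_zero_division_message():
        # capture details of the raised exception
        with pytest.raises(ZeroDivisionError) as excinfo:
            1 / 0
        # excinfo.value is the exception instance that was raised
        assert "division" in str(excinfo.value)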

doc/en/new-docs/user/pytestmark.rst

Lines changed: 88 additions & 1 deletion
@@ -1,6 +1,93 @@
+.. _index: mark
 .. _`pytestmarkbasic`:
 
 Grouping tests with pytest.mark
 ===============================
 
-TODO
+The ``pytest.mark`` decorator can be used to add metadata to tests. This is useful for noting related tests and for selecting groups of tests to run. In the following example, only one test has the mark "webtest"::
+
+    # content of test_server.py
+
+    import pytest
+
+    @pytest.mark.webtest
+    def test_send_http():
+        pass  # perform some webtest test for your app
+
+    def test_something_quick():
+        pass
+
+    def test_another():
+        pass
+
+    class TestClass:
+        def test_method(self):
+            pass
+
+You can then restrict a test run to only the tests marked with ``webtest`` by using the ``-m`` command line option::
+
+    $ py.test -v -m webtest
+    ======= test session starts ========
+    platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+    cachedir: .cache
+    rootdir: $REGENDOC_TMPDIR, inifile:
+    collecting ... collected 4 items
+
+    test_server.py::test_send_http PASSED
+
+    ======= 3 tests deselected by "-m 'webtest'" ========
+    ======= 1 passed, 3 deselected in 0.12 seconds ========
+
+Or the inverse, running all tests except the webtest ones::
+
+    $ py.test -v -m "not webtest"
+    ======= test session starts ========
+    platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- $PYTHON_PREFIX/bin/python3.5
+    cachedir: .cache
+    rootdir: $REGENDOC_TMPDIR, inifile:
+    collecting ... collected 4 items
+
+    test_server.py::test_something_quick PASSED
+    test_server.py::test_another PASSED
+    test_server.py::TestClass::test_method PASSED
+
+    ======= 1 tests deselected by "-m 'not webtest'" ========
+    ======= 3 passed, 1 deselected in 0.12 seconds ========
+
+You may use ``pytest.mark`` decorators with classes to apply markers to all of
+their test methods::
+
+    # content of test_mark_classlevel.py
+
+    import pytest
+
+    @pytest.mark.webtest
+    class TestClass:
+        def test_startup(self):
+            pass
+
+        def test_startup_and_more(self):
+            pass
+
+This is equivalent to directly applying the decorator to the
+two test functions.
+
+Some built-in markers offer extra functionality instead of grouping tests, for example:
+
+* :ref:`skipif <skipif??>` - skip a test function if a certain condition is met
+* :ref:`xfail <xfail??>` - produce an "expected failure" outcome if a certain
+  condition is met
+* :ref:`parametrize <parametrizemark??>` - perform multiple calls
+  to the same test function
+
+See also
+--------
+
+* TODO (Basic - useful command line options)
+* TODO (Advanced - adding extra functionality to marks)
+* TODO (Advanced - ini - registering markers)
+* TODO (Plugin author - adding a custom marker from a plugin)
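
Beyond this diff, a further hedged sketch: a test may carry several marks, and the ``-m`` option accepts boolean expressions over mark names, so groups compose. The ``slow`` mark and the test names here are illustrative assumptions::

    # content of test_api.py

    import pytest

    @pytest.mark.webtest
    @pytest.mark.slow
    def test_full_crawl():
        pass  # carries both "webtest" and "slow"

    @pytest.mark.webtest
    def test_ping():
        pass  # carries only "webtest"

Running ``py.test -m "webtest and not slow"`` should then select ``test_ping`` and deselect ``test_full_crawl``.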

doc/en/new-docs/user/skip.rst

Lines changed: 0 additions & 9 deletions
This file was deleted.

doc/en/new-docs/user/skipping.rst

Lines changed: 90 additions & 0 deletions
@@ -0,0 +1,90 @@
+.. _index: skip, skipif
+.. _`skippingbasic`:
+
+Skipping
+========
+
+If your software runs on multiple platforms, or supports multiple versions of
+different dependencies, it is likely that you will encounter bugs or strange
+edge cases that only occur in one particular environment. In this case, it can
+be useful to write a test that should only be exercised in that environment;
+in other environments it does not need to be run - it can be skipped.
+
+If you wish to skip a test conditionally, you can use the ``skipif`` marker.
+Here is an example of marking a test function to be skipped when run on a
+Python 2 interpreter::
+
+    import sys
+
+    import pytest
+
+    @pytest.mark.skipif(sys.version_info < (3, 0),
+                        reason="requires Python3")
+    def test_function():
+        ...
+
+During test function setup the condition (``sys.version_info < (3, 0)``) is
+checked. If it evaluates to True, the test function will be skipped with the
+specified reason. Note that pytest enforces specifying a reason in order to
+report meaningful "skip reasons" (e.g. when using the command line option
+``-rs``). If the condition is a string, it will be evaluated as a Python
+expression.
+
+Example output::
+
+    $ py.test -rs
+    ======= test session starts ========
+    platform linux -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
+    rootdir: $REGENDOC_TMPDIR, inifile:
+    collected 1 items
+
+    test_function.py s
+    ======= short test summary info ========
+    SKIP [1] test_function.py:4: requires Python3
+    ======= 1 skipped in 0.01 seconds ========
+
+
+Re-using skipif decorators
+--------------------------
+
+You can also define the decorator once and re-use it::
+
+    import sys
+
+    import pytest
+
+    python3_only = pytest.mark.skipif(sys.version_info < (3, 0),
+                                      reason="requires Python3")
+
+    @python3_only
+    def test_function1():
+        ...
+
+    @python3_only
+    def test_function2():
+        ...
+
+    @python3_only
+    def test_function3():
+        ...
+
+For larger test suites it's usually a good idea to have one file where you
+define the markers, which you then consistently apply throughout your test
+suite.
+
+
+Unconditional skip
+------------------
+
+To skip a test without a condition, use the ``pytest.mark.skip`` decorator,
+which may be passed an optional ``reason``:
+
+.. code-block:: python
+
+    import pytest
+
+    @pytest.mark.skip(reason="no way of currently testing this")
+    def test_the_unknown():
+        ...
+
+If you are skipping a test because it fails (e.g. a bug in your software that
+you want to track), the ``xfail`` marker is probably more appropriate.
+
+
+See also
+--------
+
+* Test results page
+* xfail
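
A sketch of the string-condition form mentioned above, assuming the historical behaviour where pytest makes modules such as ``sys`` and ``os`` available in the string's evaluation namespace::

    import pytest

    @pytest.mark.skipif("sys.platform == 'win32'",
                        reason="does not run on windows")
    def test_posix_only():
        ...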
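
And to make the "one file for markers" advice concrete, a minimal sketch, assuming a shared ``markers.py`` module (the module and test names are illustrative)::

    # content of markers.py
    import sys

    import pytest

    python3_only = pytest.mark.skipif(sys.version_info < (3, 0),
                                      reason="requires Python3")

    # content of test_feature.py
    from markers import python3_only

    @python3_only
    def test_new_syntax():
        ...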
